cache #2558
base: next
Conversation
Walkthrough

This pull request introduces comprehensive updates to the Eternum client's data management and caching infrastructure. The changes primarily focus on integrating IndexedDB functionality across the application.
Actionable comments posted: 6
🧹 Nitpick comments (9)
client/package.json (1)
22-29
: Consider documenting alpha dependency requirements

Since the project relies on specific alpha versions of @dojoengine packages for the caching implementation, consider:
- Adding a note in the README about the alpha dependency requirements
- Documenting any known limitations or issues
- Creating an upgrade plan for when stable versions become available
client/src/dojo/indexedDB.ts (2)
8-31
: Consider adding defensive code or a fallback mechanism for database version conflicts.
If an older version of the database or a user with a stale browser tries to open the database, a version conflict could occur. While this implementation is acceptable, explicitly handling such scenarios (e.g., bridging from older versions to newer versions via migrations) can help prevent unexpected upgrade failures.
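One way to make the upgrade path explicit is to list the version-to-version steps in a small migration table. The sketch below (all names hypothetical, not from this PR) shows a pure planner that, given the `oldVersion` reported by the browser, selects which steps still need to run inside an `onupgradeneeded` handler:

```typescript
// Hypothetical migration table: each entry upgrades the schema by one version.
// In the browser this planner would be consulted inside request.onupgradeneeded,
// using event.oldVersion to decide which steps still need to be applied.
type Migration = { toVersion: number; name: string };

const MIGRATIONS: Migration[] = [
  { toVersion: 1, name: "create entities store" },
  { toVersion: 2, name: "add timestamp index" },
];

// Pure planner: returns the migrations needed to go from oldVersion to newVersion.
function planMigrations(oldVersion: number, newVersion: number): Migration[] {
  return MIGRATIONS.filter((m) => m.toVersion > oldVersion && m.toVersion <= newVersion);
}
```

Keeping the planner pure makes it easy to unit-test the upgrade path without a browser environment.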
99-106
: Avoid forcing a page reload after cache deletion.
Reloading helps ensure a clean state, but it might feel disruptive to the user experience. You could consider a more graceful approach to clearing the UI state without a reload, e.g. re-initializing your data or re-running your subscription logic. That way, users can continue their workflow right away.

client/src/dojo/queries.ts (4)
11-11
: Use distinct transaction keys for separate subscription calls.
Right now, the subscription call for the two-key model uses the cache key `entity_two_key_${entityID}_query`. If many entity IDs are given, this might result in collisions or ambiguous logs. You could more thoroughly separate them, for example by joining them with a delimiter or hashing the entity list.
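A sketch of one such approach (the helper name is hypothetical): sort the IDs and join them with a delimiter so the key is stable regardless of argument order and cannot collide across different ID lists:

```typescript
// Hypothetical helper: derive a stable cache key from one or many entity IDs.
// Sorting makes the key independent of argument order; the delimiter avoids
// collisions like ["1", "23"] vs ["12", "3"].
function entityCacheKey(prefix: string, entityID: string | string[]): string {
  const ids = Array.isArray(entityID) ? [...entityID].sort() : [entityID];
  return `${prefix}_${ids.join("|")}_query`;
}
```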
42-42
: Document possible concurrency issues.
Multiple `addToSubscriptionOneKeyModelbyRealmEntityId` calls could run concurrently, inserting into or fetching from IndexedDB. While IndexedDB handles concurrency, documenting or illustrating your concurrency approach will help future maintainers.
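One way to both document and enforce the intended ordering is to serialize operations per cache key on a promise chain; a hypothetical sketch (not the PR's actual code):

```typescript
// Hypothetical sketch: serialize concurrent operations per cache key by
// chaining them on a stored promise, so writes for the same key never interleave.
const pending = new Map<string, Promise<unknown>>();

function serialized<T>(key: string, task: () => Promise<T>): Promise<T> {
  const prev = pending.get(key) ?? Promise.resolve();
  const next = prev.then(task, task); // run regardless of the previous outcome
  pending.set(key, next);
  return next;
}
```

Operations for different keys still run in parallel; only same-key operations are queued.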
64-65
: Reuse or refactor repeated patterns.
The parameter structure, like `dbConnection: db, timestampCacheKey: ...`, is repeated across multiple subscription functions. Standardizing or reusing a helper function can reduce duplication and ensure uniform logging, caching, and error-handling patterns.
133-133
: Investigate time-based and event-based caching strategies.
Setting a `timestampCacheKey` is a good start. Consider an event-based or more advanced time-based invalidation scheme to align with data volatility in your app. This might help ensure your store stays fresh without incurring an abundance of queries.

client/src/ui/components/resources/InventoryResources.tsx (1)
54-59
: Cache invalidation approach may be overly simplistic.
Currently, you store a timestamp and skip the subscription if the data is still considered "fresh." While this optimizes performance, it might result in stale data if an entity's resources change quickly. Consider adding an event-based or partial invalidation approach for critical real-time features.

client/src/dojo/setup.ts (1)
Line range hint
122-161
: Consider extracting configuration values

The timeout values and cache keys are hardcoded. Consider moving these to a configuration object for better maintainability.

```diff
+ const CONFIG = {
+   QUERY_TIMEOUT: 40_000,
+   CACHE_KEYS: {
+     CONFIG: 'config_query',
+     SINGLE_KEYED: 'single_keyed_query',
+     DOUBLE_KEYED: 'double_keyed_query'
+   }
+ };
  await getEntities(
    network.toriiClient,
    { Composite: { operator: "Or", clauses: configClauses } },
    network.contractComponents as any,
    [],
    [],
-   40_000,
+   CONFIG.QUERY_TIMEOUT,
    false,
-   { dbConnection: indexedDB, timestampCacheKey: "config_query" },
+   { dbConnection: indexedDB, timestampCacheKey: CONFIG.CACHE_KEYS.CONFIG },
  );
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (8)
- client/package.json (1 hunks)
- client/src/dojo/indexedDB.ts (1 hunks)
- client/src/dojo/queries.ts (6 hunks)
- client/src/dojo/setup.ts (5 hunks)
- client/src/ui/components/resources/InventoryResources.tsx (1 hunks)
- client/src/ui/components/trading/ResourceArrivals.tsx (1 hunks)
- client/src/ui/layouts/World.tsx (1 hunks)
- client/src/ui/modules/settings/Settings.tsx (3 hunks)
🔇 Additional comments (6)
client/package.json (1)
22-29
: Verify changelog and compatibility for @dojoengine packages
The update from `1.0.4-alpha.3.1.0` to `1.0.4-alpha.3.1.1` across all @dojoengine packages should be verified for breaking changes, especially given their alpha status and the new IndexedDB integration.
client/src/dojo/indexedDB.ts (1)
33-61
: Validate entity IDs and data before storing.
When fetching data from the store, ensure that the data structure expected by your application aligns with what's actually in `request.result`. A quick validation or type-check can protect against malformed data in the "entities" store (e.g., if an entry's `id` property is missing or not fitting the expected format).
client/src/ui/components/trading/ResourceArrivals.tsx (1)
32-37
: Ensure consistent error handling when database connection is unavailable.
By adding the `dojo.setup.db` parameter, there's a new dependency on a valid `IDBDatabase`. If `dojo.setup.db` is `null` or otherwise undefined, the function call could fail. A fallback or a guard that checks `dojo.setup.db` before calling `addToSubscription` will improve resilience.
client/src/dojo/queries.ts (3)
33-34
: Check for empty arrays before constructing clauses.
When `entityID` is empty, this may create an empty set of clauses. Handle this edge case gracefully (e.g., skip the query or log a message) to avoid unnecessary queries.
73-73
: Check for user confusion with optional 'position' parameter.
Here you expand the subscription to handle `position` as well. Ensure that calling code is aware that `position` changes the query keys. You might consider splitting this out to a separate function or clarifying how it works with an in-code comment.
116-116
: Provide fallback logic for `db` if needed.

The `db` argument is generally required. If this function ends up being called in a context without a valid IndexedDB instance, you might want to fail gracefully or skip local caching.
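A minimal sketch of such a guard (helper name hypothetical): build the cache options only when a connection exists, so callers silently skip local caching instead of failing:

```typescript
// Hypothetical guard: produce cache options only when a db connection is
// available; callers pass the result through and skip caching on undefined.
function cacheOptions<DB>(
  db: DB | null | undefined,
  key: string,
): { dbConnection: DB; timestampCacheKey: string } | undefined {
  return db ? { dbConnection: db, timestampCacheKey: key } : undefined;
}
```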
```typescript
async function insertEntitiesInDB(db: IDBDatabase, entities: Entities): Promise<void> {
  return new Promise((resolve, reject) => {
    const transaction = db.transaction(["entities"], "readwrite");
    const store = transaction.objectStore("entities");

    let error: Error | null = null;

    // Handle transaction completion
    transaction.oncomplete = () => {
      if (error) {
        reject(error);
      } else {
        resolve();
      }
    };

    transaction.onerror = () => {
      reject(transaction.error);
    };

    // Store each entity
    for (const [entityId, data] of Object.entries(entities)) {
      const entityData = {
        id: entityId,
        ...data,
      };

      const request = store.put(entityData);

      request.onerror = () => {
        error = request.error;
      };
    }
  });
}
```
🛠️ Refactor suggestion
Consolidate transaction-level error handling.
Here, you track write errors per entity but also rely on the top-level transaction error event. In some scenarios, partial success with multiple items can be tricky. Consider capturing partial success more explicitly (e.g., storing or returning the IDs that fail to persist). This can provide better feedback in case a subset of entities fails to be inserted.
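One hedged way to surface partial success is to wrap each write in a promise and partition the results with `Promise.allSettled`; the helper below is a hypothetical sketch built on an injected `write` function, not the store's actual callback-based API:

```typescript
// Sketch: run one write per entity, collect the IDs that failed, and report
// partial success instead of a single all-or-nothing error.
async function writeAll(
  ids: string[],
  write: (id: string) => Promise<void>,
): Promise<{ ok: string[]; failed: string[] }> {
  const results = await Promise.allSettled(ids.map((id) => write(id).then(() => id)));
  const ok: string[] = [];
  const failed: string[] = [];
  results.forEach((r, i) => {
    if (r.status === "fulfilled") ok.push(r.value);
    else failed.push(ids[i]);
  });
  return { ok, failed };
}
```

The caller can then log or retry just the `failed` IDs rather than rejecting the whole batch.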
```typescript
  false,
  { dbConnection: db, timestampCacheKey: `entity_${entityID}_query` },
```
🛠️ Refactor suggestion
Ensure stable or unique cache keys for multiple entities.
When an array of entities is passed, the interpolation `` `entity_${entityID}_query` `` might produce unexpected collisions if `entityID` is itself an array. Consider a more robust approach, such as JSON-stringifying the array or hashing it.
```typescript
const indexedDB = await openDatabase();
await syncEntitiesFromStorage(indexedDB, network.contractComponents as any);
```
Add error handling for database operations
The database initialization lacks error handling which could lead to silent failures. Consider wrapping the operations in try-catch blocks and implementing a cleanup mechanism.
```diff
- const indexedDB = await openDatabase();
- await syncEntitiesFromStorage(indexedDB, network.contractComponents as any);
+ let indexedDB;
+ try {
+   indexedDB = await openDatabase();
+   await syncEntitiesFromStorage(indexedDB, network.contractComponents as any);
+ } catch (error) {
+   console.error('Failed to initialize database:', error);
+   // Implement fallback mechanism or user notification
+ }
```
```typescript
  eventSync,
  db: indexedDB,
};
```
🛠️ Refactor suggestion
Improve type safety and encapsulation of the database instance
Directly exposing the database instance could lead to misuse. Consider creating a typed interface for database operations.
```diff
+ interface DatabaseOperations {
+   connection: IDBDatabase;
+   // Add specific database operation methods here
+ }
  return {
    network,
    components,
    systemCalls,
    sync,
    eventSync,
-   db: indexedDB,
+   db: {
+     connection: indexedDB,
+     // Implement specific database operation methods
+   } as DatabaseOperations,
  };
```
Committable suggestion skipped: line range outside the PR's diff.
```tsx
<Button
  onClick={() => {
    clearCache();
    setShowSettings(false);
    setBlankOverlay(true);
  }}
>
  clear cache
</Button>
```
🛠️ Refactor suggestion
Enhance user experience for cache clearing operation
The cache clearing operation lacks user confirmation and feedback. Consider adding these safety measures and improving the user experience.
```diff
  <Button
    onClick={() => {
+     if (!window.confirm('Are you sure you want to clear the cache? This will reload the page.')) {
+       return;
+     }
+     try {
        clearCache();
        setShowSettings(false);
        setBlankOverlay(true);
+       // Show success message before reload
+       toast.success('Cache cleared successfully');
+     } catch (error) {
+       toast.error('Failed to clear cache');
+       console.error('Cache clear failed:', error);
+     }
    }}
  >
-   clear cache
+   Clear Cache
  </Button>
```
Committable suggestion skipped: line range outside the PR's diff.
client/src/ui/layouts/World.tsx (outdated)
```diff
    [...filteredStructures.map((structure) => structure.entity_id.toString())],
    dojo.setup.db,
  ),
  addToSubscriptionTwoKeyModelbyRealmEntityId(
    dojo.network.toriiClient,
    dojo.network.contractComponents as any,
    [...filteredStructures.map((structure) => structure.entity_id.toString())],
    dojo.setup.db,
  ),
  addToSubscription(
    dojo.network.toriiClient,
    dojo.network.contractComponents as any,
    [structureEntityId.toString()],
    dojo.setup.db,
    [{ x: position?.x || 0, y: position?.y || 0 }],
  ),
- addToSubscription(dojo.network.toriiClient, dojo.network.contractComponents as any, [
-   ADMIN_BANK_ENTITY_ID.toString(),
- ]),
+ addToSubscription(
+   dojo.network.toriiClient,
+   dojo.network.contractComponents as any,
+   [ADMIN_BANK_ENTITY_ID.toString()],
+   dojo.setup.db,
+ ),
  addToSubscription(
    dojo.network.toriiClient,
    dojo.network.contractComponents as any,
    [...filteredStructures.map((structure) => structure.entity_id.toString())],
    dojo.setup.db,
    [...filteredStructures.map((structure) => ({ x: structure.position.x, y: structure.position.y }))],
  ),
- addMarketSubscription(dojo.network.toriiClient, dojo.network.contractComponents as any),
+ addMarketSubscription(dojo.network.toriiClient, dojo.network.contractComponents as any, dojo.setup.db),
```
🛠️ Refactor suggestion
Improve subscription reliability and performance
The subscription logic could benefit from retry mechanisms and better error handling. Consider implementing batching for better performance.
```diff
+ const MAX_RETRIES = 3;
+ const BATCH_SIZE = 5;
+ async function withRetry(operation: () => Promise<any>, retries = MAX_RETRIES) {
+   try {
+     return await operation();
+   } catch (error) {
+     if (retries > 0) {
+       console.warn(`Retrying operation, ${retries} attempts remaining`);
+       return withRetry(operation, retries - 1);
+     }
+     throw error;
+   }
+ }
  const fetch = async () => {
    try {
+     // Batch subscriptions for better performance
+     const subscriptionTasks = [
+       withRetry(() => addToSubscriptionOneKeyModelbyRealmEntityId(/*...*/)),
+       withRetry(() => addToSubscriptionTwoKeyModelbyRealmEntityId(/*...*/)),
+       // ... other subscription tasks
+     ];
+
+     for (let i = 0; i < subscriptionTasks.length; i += BATCH_SIZE) {
+       await Promise.all(subscriptionTasks.slice(i, i + BATCH_SIZE));
+     }
    } catch (error) {
      console.error("Fetch failed", error);
+     // Implement proper error recovery
    }
  };
```
Committable suggestion skipped: line range outside the PR's diff.
Actionable comments posted: 5
🧹 Nitpick comments (12)
client/src/ui/modules/navigation/TopLeftNavigation.tsx (2)
332-341
: Add ARIA attributes for accessibility

While the implementation is clean, consider enhancing accessibility by adding ARIA attributes.

```diff
  const ProgressBar = memo(({ progress }: { progress: number }) => {
    return (
      <div
+       role="progressbar"
+       aria-valuenow={progress}
+       aria-valuemin={0}
+       aria-valuemax={100}
        className="absolute bottom-0 left-0 h-1 bg-gold to-transparent rounded-bl-2xl rounded-tr-2xl mx-1"
        style={{ width: `${progress}%` }}
      ></div>
    );
  });
```
Line range hint
343-407
: Consider user preferences for sound effects

The sound effect implementation is good, but consider adding a user preference setting for audio feedback.
Consider implementing a user preference hook:
```typescript
const useUserPreferences = () => {
  const [soundEnabled, setSoundEnabled] = useState(
    () => localStorage.getItem('soundEnabled') === 'true',
  );

  const toggleSound = useCallback(() => {
    setSoundEnabled(prev => {
      const newValue = !prev;
      localStorage.setItem('soundEnabled', String(newValue));
      return newValue;
    });
  }, []);

  return { soundEnabled, toggleSound };
};
```

Then update the sound effect logic:

```diff
  useEffect(() => {
    if (lastProgressRef.current > progress) {
-     play();
+     if (soundEnabled) {
+       play();
+     }
    }
    lastProgressRef.current = progress;
- }, [progress, play]);
+ }, [progress, play, soundEnabled]);
```

scripts/deploy.sh (1)
Line range hint
1-71
: Consider restructuring the deployment script for better maintainability

The script handles multiple deployment stages but could benefit from better organization and error handling:
- Consider breaking down the script into separate modules:
```
scripts/
├── deploy/
│   ├── season_pass.sh
│   ├── eternum.sh
│   ├── season_resources.sh
│   └── slot.sh
├── utils/
│   ├── validation.sh
│   └── env.sh
└── deploy.sh
```
- Add validation for required environment variables at the start:
```bash
#!/bin/bash
required_vars=(
  "VITE_PUBLIC_MASTER_ADDRESS"
  "VITE_PUBLIC_MASTER_PRIVATE_KEY"
  "VITE_PUBLIC_NODE_URL"
  "SOZO_WORLD"
)

for var in "${required_vars[@]}"; do
  if [ -z "${!var}" ]; then
    echo "Error: Required environment variable $var is not set"
    exit 1
  fi
done
```
- Add clear documentation of prerequisites and usage:
```bash
#!/bin/bash
# Eternum Deployment Script
#
# Prerequisites:
# - Node.js 16+
# - Scarb
# - Sozo
# - Slot CLI
#
# Usage:
#   ./deploy.sh [--skip-season-pass] [--skip-resources]
#
# Environment variables:
#   VITE_PUBLIC_MASTER_ADDRESS     - Master account address
#   VITE_PUBLIC_MASTER_PRIVATE_KEY - Master account private key
#   ...
```

These changes would improve maintainability, reduce errors, and make the deployment process more robust.
client/src/ui/layouts/World.tsx (2)
187-204
: Optimize subscription management for filtered structures

The current implementation makes three parallel subscription calls for each filtered structure. Consider:
- Batching the subscriptions to prevent overwhelming the system
- Implementing a queue system for large numbers of structures
- Adding proper cleanup for stale subscriptions
```diff
+ const BATCH_SIZE = 10; // Adjust based on performance testing
+
+ async function processBatch(structures: PlayerStructure[], startIdx: number) {
+   const batchStructures = structures.slice(startIdx, startIdx + BATCH_SIZE);
+   if (batchStructures.length === 0) return;
+
+   await Promise.all([
+     addToSubscriptionOneKeyModelbyRealmEntityId(
+       dojo.network.toriiClient,
+       dojo.network.contractComponents as any,
+       batchStructures.map(s => s.entity_id.toString()),
+       dojo.setup.db,
+     ),
+     addToSubscriptionTwoKeyModelbyRealmEntityId(
+       dojo.network.toriiClient,
+       dojo.network.contractComponents as any,
+       batchStructures.map(s => s.entity_id.toString()),
+       dojo.setup.db,
+     ),
+     addToSubscription(
+       dojo.network.toriiClient,
+       dojo.network.contractComponents as any,
+       batchStructures.map(s => s.entity_id.toString()),
+       dojo.setup.db,
+       batchStructures.map(s => ({ x: s.position.x, y: s.position.y })),
+     ),
+   ]);
+ }
+
  const fetch = async () => {
    try {
-     await Promise.all([...])
+     for (let i = 0; i < filteredStructures.length; i += BATCH_SIZE) {
+       await processBatch(filteredStructures, i);
+     }
    } catch (error) {
-     console.error("Fetch failed", error);
+     console.error("Batch processing failed:", error);
+     // Implement retry logic for failed batches
    }
  };
```
150-155
: Add TypeScript types for network and component parameters

The `contractComponents` parameter is being cast to `any`
in multiple places. Consider:
- Adding proper TypeScript types for network and component parameters
- Creating type definitions for the subscription functions
- Removing the `as any` casts

```diff
+ type ContractComponents = {
+   // Add appropriate type definitions
+ };
+
+ type SubscriptionParams = {
+   toriiClient: any; // Replace with proper type
+   contractComponents: ContractComponents;
+   entityIds: string[];
+   db: IDBDatabase;
+   positions?: Array<{ x: number; y: number }>;
+ };

- dojo.network.contractComponents as any
+ dojo.network.contractComponents as ContractComponents
```

Also applies to: 187-192, 221-221
client/src/dojo/worker.ts (2)
14-14
: Consider fallback or error-handling logic for Worker creation.

In case the Worker initialization fails (e.g., in unsupported environments or due to a loading error), it would be beneficial to handle or log this scenario gracefully.
31-49
: Consider a separate approach for large event volumes.

Subscribing to both onEntityUpdated and onEventMessageUpdated might result in high message traffic for large-scale events. Evaluate using more fine-grained subscription scopes or chunked updates to prevent performance bottlenecks.
client/src/dojo/entityWorker.js (3)
1-1
: Confirm tree-shakable import from lodash.

Using named imports from "lodash" can inflate bundle size if not carefully configured. Depending on the build, importing only the debounce function from "lodash/debounce" might reduce overhead.
3-5
: Consider removing the unused variable toriiClient or clarifying its usage.

Currently, toriiClient is declared but not used in this file. If it's needed for future logic, comment on or plan for its actual usage to avoid confusion.
8-14
: Potential memory usage concern with a large entityBatch.

If large amounts of data are posted frequently, entityBatch can accumulate quite a bit of data before debouncedSendBatch triggers. You may consider a threshold-based flush in addition to a time-based flush, to avoid memory bloat in the worker.
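A threshold-plus-timer flush could look like the following hypothetical sketch (the time-based side is left to the caller, e.g. the existing debounce; all names here are assumptions):

```typescript
// Hypothetical hybrid batcher: flush when the batch reaches maxSize, and let a
// caller-driven timer call flush() to drain whatever has accumulated since.
class Batcher<T> {
  private batch: T[] = [];

  constructor(
    private maxSize: number,
    private flushFn: (items: T[]) => void,
  ) {}

  add(item: T): void {
    this.batch.push(item);
    if (this.batch.length >= this.maxSize) this.flush();
  }

  flush(): void {
    if (this.batch.length === 0) return;
    const items = this.batch;
    this.batch = []; // swap before notifying so re-entrant adds are safe
    this.flushFn(items);
  }
}
```

The size threshold bounds worker memory even when updates arrive faster than the debounce window fires.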
client/src/dojo/setup.ts (2)
160-160
: Investigate partial sync logic for large datasets.

Repeated calls to getEntities with up to 40,000 records each may cause performance issues if the dataset is large. Consider implementing chunked queries or incremental sync strategies.
Also applies to: 177-177
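Chunked queries can be built on a small paging helper (hypothetical sketch): split the full ID list into fixed-size pages and issue one query per page, keeping each request well under the single-request record limit.

```typescript
// Hypothetical chunking helper: split a list into fixed-size pages so each
// query stays well under the single-request record limit.
function chunk<T>(items: T[], size: number): T[][] {
  const pages: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    pages.push(items.slice(i, i + size));
  }
  return pages;
}
```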
180-190
: Observe the logging parameter for setupWorker.

The current call to setupWorker does not provide a logging parameter (defaults to false). If debugging is needed, you might want to pass logging: true. Alternatively, consider controlling logging via env variables for production.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- client/.env.preview (1 hunks)
- client/src/dojo/entityWorker.js (1 hunks)
- client/src/dojo/setup.ts (5 hunks)
- client/src/dojo/worker.ts (1 hunks)
- client/src/ui/layouts/World.tsx (2 hunks)
- client/src/ui/modules/navigation/TopLeftNavigation.tsx (3 hunks)
- scripts/deploy.sh (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- client/.env.preview
🔇 Additional comments (8)
client/src/ui/modules/navigation/TopLeftNavigation.tsx (1)
319-319
: LGTM: Clean integration of ProgressBar component
The ProgressBar is well-positioned within the navigation selector and receives its progress prop correctly.
scripts/deploy.sh (1)
71-71
: 🛠️ Refactor suggestion
Verify deployment version and improve error handling
The slot deployment command has been updated to use version v1.0.8. While the change is straightforward, there are several improvements that could be made:
- Let's verify that the version exists.
- Consider these improvements:
```diff
-slot deployments create -t epic sepolia-rc-19 torii --version v1.0.8 --world 0x3dc74e8caadbde233bb750a6608e095daba2891d9784ea0fb7fbf9988948c15 --rpc https://api.cartridge.gg/x/starknet/sepolia --indexing.pending true --config ./torii-sepolia.toml
+# Read world address from environment
+WORLD_ADDRESS="${WORLD_ADDRESS:-0x3dc74e8caadbde233bb750a6608e095daba2891d9784ea0fb7fbf9988948c15}"
+
+# Validate world address format
+if [[ ! $WORLD_ADDRESS =~ ^0x[a-fA-F0-9]+$ ]]; then
+  echo "Error: Invalid world address format"
+  exit 1
+fi
+
+# Create slot deployment with error handling
+if ! slot deployments create -t epic sepolia-rc-19 torii \
+  --version v1.0.8 \
+  --world "$WORLD_ADDRESS" \
+  --rpc https://api.cartridge.gg/x/starknet/sepolia \
+  --indexing.pending true \
+  --config ./torii-sepolia.toml; then
+  echo "Error: Failed to create slot deployment"
+  exit 1
+fi
```
These changes would:
- Make the world address configurable via environment variable
- Add validation for the world address format
- Add proper error handling for the slot creation
- Improve readability with multi-line formatting
client/src/ui/layouts/World.tsx (1)
217-230
: 🛠️ Refactor suggestion
Add cleanup and status tracking for market subscription
The market subscription lacks cleanup logic and status tracking. Consider:
- Adding cleanup on component unmount
- Implementing subscription status tracking
- Adding retry logic for failed market subscriptions
```diff
  useEffect(() => {
+   let isSubscribed = true;
+   let retryCount = 0;
+   const MAX_RETRIES = 3;
+
    const fetch = async () => {
      try {
        await addMarketSubscription(dojo.network.toriiClient, dojo.network.contractComponents as any, dojo.setup.db);
+       if (isSubscribed) {
+         setMarketLoading(false);
+       }
      } catch (error) {
        console.error("Market fetch failed", error);
+       if (isSubscribed && retryCount < MAX_RETRIES) {
+         retryCount++;
+         console.log(`Retrying market subscription (${retryCount}/${MAX_RETRIES})...`);
+         setTimeout(fetch, 1000 * retryCount);
+       }
      } finally {
-       setMarketLoading(false);
+       if (isSubscribed) {
+         setMarketLoading(false);
+       }
      }
    };
    fetch();
+
+   return () => {
+     isSubscribed = false;
+     // Add subscription cleanup logic here
+   };
  }, [setMarketLoading]);
```
Likely invalid or redundant comment.
client/src/dojo/worker.ts (1)
25-29
: Ensure resilience to malformed messages.
The current onmessage handler assumes that event.data contains updatedEntities in a valid format. In production, consider validating the shape of event.data for safer usage.
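A defensive shape check could look like this hypothetical sketch (the field name mirrors the description above but is an assumption, not the worker's confirmed contract):

```typescript
// Hypothetical runtime guard for messages received from the worker.
type UpdateMessage = { updatedEntities: Record<string, unknown> };

function isUpdateMessage(data: unknown): data is UpdateMessage {
  return (
    typeof data === "object" &&
    data !== null &&
    typeof (data as { updatedEntities?: unknown }).updatedEntities === "object" &&
    (data as { updatedEntities?: unknown }).updatedEntities !== null
  );
}
```

The onmessage handler would then early-return on anything that fails the guard instead of dereferencing a missing field.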
client/src/dojo/entityWorker.js (1)
16-23
: Validate incoming message format.
The worker blindly assumes e.data to have the shape { type, entities, logging }. For better stability, add checks or fallback logic (e.g., ignoring unknown message types).
✅ Verification successful
Let me check how the worker is being used to better understand the message flow.
Add validation for message format and type
The worker is only used in one place (worker.ts) and always receives messages with the expected shape. However, it's still good practice to add validation:
```javascript
self.onmessage = async (e) => {
  if (!e.data || typeof e.data !== 'object') return;
  const { type, entities, logging: logFlag } = e.data;
  if (type !== "update" || !entities?.fetchedEntities || !entities?.data) return;
  logging = logFlag;
  if (logging) console.log("Worker: Received entities update");
  handleUpdate(entities.fetchedEntities, entities.data);
};
```
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Checking for all usage of "postMessage" across the codebase to confirm it's always sending the expected shape:
rg -A 3 "postMessage\("
Length of output: 679
Script:
#!/bin/bash
# Check the worker initialization and usage
rg -A 5 "new Worker\("
# Check the full entityWorker.js file to understand the message handling
cat client/src/dojo/entityWorker.js
Length of output: 1269
client/src/dojo/setup.ts (3)
117-118
: The same feedback about implementing try-catch for database operations was provided previously.
229-229
: Refine typed interface for database usage.
Exposing the raw indexedDB object can lead to misuse. Consider returning a structured interface or utility methods for the calling code.
11-11
: Add error handling for database operations.
In case openDatabase fails or is unavailable in the environment, add a try-catch to recover gracefully or alert the user. A fallback or user notification can mitigate silent failures.
```diff
-const indexedDB = await openDatabase();
-await syncEntitiesFromStorage(indexedDB, network.contractComponents as any);
+let indexedDB;
+try {
+  indexedDB = await openDatabase();
+  await syncEntitiesFromStorage(indexedDB, network.contractComponents as any);
+} catch (error) {
+  console.error("Failed to open/sync the database:", error);
+  // Provide fallback or user notification
+}
```
Likely invalid or redundant comment.
```diff
  addToSubscription(
    dojo.network.toriiClient,
    dojo.network.contractComponents as any,
-   [...filteredStructures.map((structure) => structure.entity_id.toString())],
+   [structureEntityId.toString()],
+   dojo.setup.db,
    [{ x: position?.x || 0, y: position?.y || 0 }],
```
🛠️ Refactor suggestion
Enhance error handling and prevent race conditions
The subscription calls could benefit from more robust error handling and race condition prevention:
- Consider implementing retry logic for failed subscriptions
- Add cleanup logic to handle subscription cancellation on component unmount
- Consider using a more atomic approach to managing subscription state
```diff
  const fetch = async () => {
+   const subscriptionPromises = [];
    try {
-     await Promise.all([
-       addToSubscription(
+     subscriptionPromises.push(addToSubscription(
        dojo.network.toriiClient,
        dojo.network.contractComponents as any,
        [structureEntityId.toString()],
        dojo.setup.db,
        [{ x: position?.x || 0, y: position?.y || 0 }],
-     ),
+     ));
-     addToSubscription(
+     subscriptionPromises.push(addToSubscription(
        dojo.network.toriiClient,
        dojo.network.contractComponents as any,
        [ADMIN_BANK_ENTITY_ID.toString()],
        dojo.setup.db,
-     ),
+     ));
+
+     const results = await Promise.allSettled(subscriptionPromises);
+     results.forEach((result, index) => {
+       if (result.status === 'rejected') {
+         console.error(`Subscription ${index} failed:`, result.reason);
+         // Optionally trigger retry logic here
+       }
+     });
    } catch (error) {
-     console.error("Fetch failed", error);
+     console.error("Subscription setup failed:", error);
+     throw error; // Re-throw to allow parent error boundary to handle
    }
  };
```
Also applies to: 157-162
```typescript
const toriiClient = await createClient({
  rpcUrl: config.rpcUrl,
  toriiUrl: config.toriiUrl,
  relayUrl: config.relayUrl,
  worldAddress: config.worldAddress,
});
```
Handle potential errors from createClient.
Since createClient is an async call, it might reject on network or configuration issues. Wrapping it in a try-catch or providing error-handling will improve resilience.
-const toriiClient = await createClient({
- rpcUrl: config.rpcUrl,
- toriiUrl: config.toriiUrl,
- relayUrl: config.relayUrl,
- worldAddress: config.worldAddress,
-});
+let toriiClient;
+try {
+ toriiClient = await createClient({
+ rpcUrl: config.rpcUrl,
+ toriiUrl: config.toriiUrl,
+ relayUrl: config.relayUrl,
+ worldAddress: config.worldAddress,
+ });
+} catch (error) {
+ console.error("Failed to create torii client:", error);
+ // Possible fallback or user notification
+}
```ts
  return {
    cancel: () => {
      worker.terminate();
    },
  };
}
```
💡 Codebase verification
Memory leak: subscriptions not being cleaned up

The review comment is correct. Looking at the implementation in `worker.ts`, the `setupWorker` function creates two subscriptions using `toriiClient.onEntityUpdated` and `toriiClient.onEventMessageUpdated`, but the returned `cancel` method only terminates the worker without cleaning up these subscriptions. This will lead to memory leaks, as the subscriptions will continue to exist even after the worker is terminated.
- The subscriptions should be stored and cleaned up in the `cancel` method:
const entitySub = await toriiClient.onEntityUpdated(...);
const eventSub = await toriiClient.onEventMessageUpdated(...);
return {
cancel: () => {
worker.terminate();
entitySub.unsubscribe(); // Add this
eventSub.unsubscribe(); // Add this
},
};
🔗 Analysis chain
Remember to cancel the subscriptions.
The returned object’s cancel method terminates the worker, but the toriiClient subscriptions remain active. Consider storing and canceling subscriptions as well to fully clean up resources.
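A minimal sketch of bundling the worker and its subscription handles behind a single `cancel()`. The `SubscriptionHandle` and `WorkerLike` shapes are assumptions for illustration; the actual torii subscription API may differ.

```typescript
// Hypothetical shapes; the real worker and torii subscription types may differ.
interface SubscriptionHandle {
  cancel(): void;
}

interface WorkerLike {
  terminate(): void;
}

// Bundle the worker and every live subscription behind one cancel().
function makeCancelable(worker: WorkerLike, subscriptions: SubscriptionHandle[]) {
  return {
    cancel: () => {
      // Tear down subscriptions first so no callbacks fire into a dead worker.
      subscriptions.forEach((sub) => sub.cancel());
      worker.terminate();
    },
  };
}
```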
Let me check the implementation of the worker setup to understand how these subscriptions are handled.
Let me try a different approach to check the worker implementation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify references to onEntityUpdated/onEventMessageUpdated subscriptions to see if they are being tracked for cancellation elsewhere.
rg -A 3 "onEntityUpdated|onEventMessageUpdated"
Length of output: 1716
Script:
#!/bin/bash
# Check the full implementation of the worker setup
ast-grep --pattern 'export function setupWorker($_) {
$$$
}'
Length of output: 64
Script:
#!/bin/bash
# Check the content of the worker file
cat client/src/dojo/worker.ts
Length of output: 1829
```ts
export async function setupWorker(
  config: { rpcUrl: string; toriiUrl: string; relayUrl: string; worldAddress: string },
  components: Component<Schema, Metadata, undefined>[],
  entityKeyClause: EntityKeysClause[],
  historical: boolean,
  logging = false,
) {
```
💡 Codebase verification
Based on the code analysis, I can now provide the final response:
Add runtime validation for config object properties

The `setupWorker` function passes the config directly to `createClient` without any validation. While TypeScript provides compile-time type checking, adding runtime validation would prevent potential issues in production:

- The function should validate that all required properties (`rpcUrl`, `toriiUrl`, `relayUrl`, `worldAddress`) are non-empty strings before creating the client.
- This is particularly important, as these values are used to establish critical connections for the worker setup.

Example locations needing validation:

- `client/src/dojo/worker.ts:5-11`: add validation before the `createClient` call
🔗 Analysis chain
Validate configuration object structure.
The function expects config properties: rpcUrl, toriiUrl, relayUrl, and worldAddress. Consider adding runtime checks or TypeScript type validations to ensure all properties are provided and correctly typed. This helps prevent runtime errors if any configuration field is missing or malformed.
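Such a runtime guard could look like the following. The names (`WorkerConfig`, `assertWorkerConfig`) are illustrative; nothing like this exists in the codebase yet.

```typescript
// Hypothetical runtime guard for the worker config; throws on missing
// or empty fields before any network client is created.
interface WorkerConfig {
  rpcUrl: string;
  toriiUrl: string;
  relayUrl: string;
  worldAddress: string;
}

function assertWorkerConfig(config: Partial<WorkerConfig>): asserts config is WorkerConfig {
  const required: (keyof WorkerConfig)[] = ["rpcUrl", "toriiUrl", "relayUrl", "worldAddress"];
  for (const key of required) {
    const value = config[key];
    if (typeof value !== "string" || value.length === 0) {
      throw new Error(`setupWorker: missing or empty config field "${key}"`);
    }
  }
}
```

Calling `assertWorkerConfig(config)` at the top of `setupWorker` would fail fast with a clear message instead of letting `createClient` fail later with an opaque network error.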
Let me check the implementation of `setupWorker` to see if there are any runtime validations already in place. Let me try a different approach to check the implementation. Let me check how the config is used in the worker implementation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Search for other references to `setupWorker` calls to verify that the `config` passed in meets the expected shape.
rg -A 5 "setupWorker\("
Length of output: 786
Script:
#!/bin/bash
# Check the implementation of setupWorker for any existing validation
ast-grep --pattern 'export async function setupWorker($_) {
$$$
}'
Length of output: 70
Script:
#!/bin/bash
# Get the content of the worker.ts file to check the implementation
cat client/src/dojo/worker.ts
Length of output: 1829
Script:
#!/bin/bash
# Check the content of entityWorker.js to see how config is handled
cat client/src/dojo/entityWorker.js
Length of output: 848
```js
function handleUpdate(fetchedEntities, data) {
  entityBatch[fetchedEntities] = {
    ...entityBatch[fetchedEntities],
    ...data,
  };
  debouncedSendBatch();
}
```
🛠️ Refactor suggestion
Ensure uniqueness of dictionary key for fetchedEntities.
Using fetchedEntities directly as a property name can be risky if the object is not a string or has unexpected formatting. Consider converting it into a string or generating a unique key to avoid collisions or unexpected indexing.
-function handleUpdate(fetchedEntities, data) {
- entityBatch[fetchedEntities] = {
- ...entityBatch[fetchedEntities],
- ...data,
- };
+function handleUpdate(fetchedEntities, data) {
+ const entityKey = String(fetchedEntities);
+ entityBatch[entityKey] = {
+ ...entityBatch[entityKey],
+ ...data,
+ };
client/src/ui/layouts/World.tsx (outdated)

```ts
};

fetch();
}, [structureEntityId, subscriptions, setWorldLoading, setSubscriptions]);
```
Remove `subscriptions` from the dependency array here.
```ts
}, [filteredStructures, setWorldLoading, setSubscriptions]);

// Handle market subscription
useEffect(() => {
```
setMarketLoading(true)
```diff
   ),
-  addToSubscriptionTwoKeyModelbyRealmEntityId(
+  addToSubscription(
```
Can you move this to the same useEffect as the market? This should 100% only run once and is not dependent on structureEntityId.
Actionable comments posted: 1
♻️ Duplicate comments (2)
client/src/dojo/setup.ts (2)
- 145-146 (⚠️ Potential issue): Add error handling for database operations

  The database initialization lacks error handling, which could lead to silent failures. Consider wrapping the operations in try-catch blocks and implementing a cleanup mechanism.

- 259-259 (🛠️ Refactor suggestion): Improve type safety and encapsulation of the database instance

  Directly exposing the database instance could lead to misuse. Consider creating a typed interface for database operations.
🧹 Nitpick comments (3)
client/src/dojo/setup.ts (3)
- 20-72: Remove commented-out code

  Instead of keeping commented-out code, consider removing it entirely. The old implementation can always be retrieved from the Git history if needed. This will improve code maintainability and readability.

- 155-155: Extract magic number to a named constant

  The timeout value `40_000` should be extracted to a named constant for better maintainability and documentation.

  ```diff
  +const ENTITY_QUERY_TIMEOUT = 40_000;
   ...
  -  40_000,
  +  ENTITY_QUERY_TIMEOUT,
  ```

- Line range hint 74-261: Consider splitting setup into smaller, focused modules

  The `setup` function has grown to handle multiple concerns: network setup, database initialization, entity synchronization, and worker setup. Consider splitting these into separate modules for better maintainability and testing:

  - `setupDatabase.ts`: database initialization and configuration
  - `setupSync.ts`: entity synchronization and worker setup
  - `setupNetwork.ts`: network configuration (already exists)

  This would make the code more modular and easier to maintain.
client/src/dojo/setup.ts
Outdated
```ts
// getEntities(
//   network.toriiClient,
//   {
//     Keys: {
//       keys: [undefined],
//       pattern_matching: "FixedLen",
//       models: [
//         "s0_eternum-AddressName",
//         "s0_eternum-Realm",
//         "s0_eternum-PopulationConfig",
//         "s0_eternum-CapacityConfig",
//         "s0_eternum-ProductionConfig",
//         "s0_eternum-RealmLevelConfig",
//         "s0_eternum-BankConfig",
//         "s0_eternum-Bank",
//         "s0_eternum-Trade",
//         "s0_eternum-Army",
//         "s0_eternum-Structure",
//         "s0_eternum-Battle",
//         "s0_eternum-EntityOwner",
//       ],
//     },
//   },
//   network.contractComponents as any,
//   [],
//   [],
//   40_000,
//   false,
//   { dbConnection: indexedDB, timestampCacheKey: "single_keyed_query" },
// ),

// getEntities(
//   network.toriiClient,
//   {
//     Keys: {
//       keys: [undefined, undefined],
//       pattern_matching: "FixedLen",
//       models: ["s0_eternum-CapacityConfigCategory", "s0_eternum-ResourceCost"],
//     },
//   },
//   network.contractComponents as any,
//   [],
//   [],
//   40_000,
//   false,
//   { dbConnection: indexedDB, timestampCacheKey: "double_keyed_query" },
// )
```
💡 Codebase verification
Based on the evidence gathered, I can now provide a final response:
Remove commented entity queries
These queries appear to be obsolete as:
- The queries and their associated timestamp cache keys (`single_keyed_query` and `double_keyed_query`) are not used anywhere else in the codebase.
- While the same models are referenced in active code within the same file, they are queried in a different structure, suggesting these commented queries represent an old/obsolete querying pattern.
The commented code should be removed to maintain code cleanliness.
🔗 Analysis chain
Verify if commented queries are still needed
There are two commented out entity queries. If these are no longer needed, they should be removed. If they are needed but temporarily disabled, add a TODO comment explaining why they are commented out and when they should be re-enabled.
Let me check if these queries are used in a different form or if there are similar patterns being used actively in the codebase.
Let me check one more thing to see if these models are being queried in a different way or if there's any recent change history around these queries.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if these queries are used elsewhere in the codebase
rg "single_keyed_query|double_keyed_query" --type ts
Length of output: 262
Script:
#!/bin/bash
# Check for similar query patterns and entity models being used
rg "getEntities.*network\.toriiClient" -A 5 --type ts
# Also check for these specific models being queried elsewhere
rg "s0_eternum-(AddressName|Realm|PopulationConfig|CapacityConfig|ProductionConfig|RealmLevelConfig|BankConfig|Bank|Trade|Army|Structure|Battle|EntityOwner)" --type ts
Length of output: 2389
Script:
#!/bin/bash
# Look for the active usage of these models in the codebase
rg "Keys.*pattern_matching.*FixedLen" -B 5 -A 5 --type ts
# Check recent git history for these files
git log -n 5 -- client/src/dojo/setup.ts
Length of output: 59
Actionable comments posted: 4
🔭 Outside diff range comments (2)
client/.env.preview (2)
- Line range hint 1-2 (⚠️ Critical Security Risk): Exposed private keys

  The master address and private key should not be committed to version control, even in preview environments. This poses a significant security risk.

  Consider:
  - Using environment-specific key management services
  - Moving sensitive credentials to a secure secret management system
  - Adding these lines to `.gitignore`
🧰 Tools
🪛 Gitleaks (8.21.2)
5-5: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
- Line range hint 26-30: Verify placeholder URLs

  Multiple URLs are set to placeholder values (`https://abc.com`). Please ensure these are updated with the correct production URLs before deploying to mainnet:

  - `VITE_PUBLIC_ARK_MARKETPLACE_API`
  - `VITE_PUBLIC_IMAGE_CDN_URL`
  - `VITE_PUBLIC_IMAGE_PROXY_URL`
  - `VITE_PUBLIC_IPFS_GATEWAY`
♻️ Duplicate comments (1)
client/src/dojo/entityWorker.js (1)
- 24-30 (⚠️ Potential issue): Add input validation and prevent prototype pollution

  The update handler needs several improvements:
  - Validate input parameters
  - Use a safer method for object property access
  - Ensure key uniqueness (as previously mentioned)

  ```diff
   function handleUpdate(fetchedEntities, data) {
  +  if (!fetchedEntities || !data) {
  +    throw new Error('Missing required parameters');
  +  }
  +
  +  const entityKey = String(fetchedEntities);
  +
  +  // Use Object.create(null) to prevent prototype pollution
  +  if (!entityBatch[entityKey]) {
  +    entityBatch[entityKey] = Object.create(null);
  +  }
  +
  -  entityBatch[fetchedEntities] = {
  -    ...entityBatch[fetchedEntities],
  +  entityBatch[entityKey] = {
  +    ...entityBatch[entityKey],
       ...data,
     };
  +
     debouncedSendBatch();
   }
  ```
🧹 Nitpick comments (2)
client/src/dojo/indexedDB.ts (1)
- 8-31: Ensure database versioning strategy is robust

  Currently, the database is upgraded only when the `DB_VERSION` is incremented. If future migrations require changes to the "entities" schema, you may need more sophisticated logic within `onupgradeneeded` to handle schema evolution (e.g., splitting data into multiple stores, adding indexes). Consider adding checks or versioning strategies to allow for controlled migrations.

client/src/dojo/entityWorker.js (1)

- 1-5: Consider improving global state management

  The current implementation uses mutable global variables, which could lead to unexpected behavior in a worker context. Consider these improvements:
  - Make `DEBOUNCE_DELAY` configurable via a worker initialization message
  - Consider using a more structured state management approach

  ```diff
   import { debounce } from "lodash";
  -let entityBatch = {};
  -let logging = false;
  -const DEBOUNCE_DELAY = 1000;
  +class EntityWorkerState {
  +  constructor(config = {}) {
  +    this.entityBatch = {};
  +    this.logging = false;
  +    this.debounceDelay = config.debounceDelay || 1000;
  +  }
  +}
  +
  +const state = new EntityWorkerState();
  ```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- client/.env.preview (1 hunks)
- client/src/dojo/entityWorker.js (1 hunks)
- client/src/dojo/indexedDB.ts (1 hunks)
🔇 Additional comments (3)
client/src/dojo/indexedDB.ts (1)
- 70-104: Consolidate transaction-level error handling to capture partial successes

  This mirrors a past review comment: partial successes can be tricky in large batch inserts, where one record fails while others succeed. It may be beneficial to track which IDs fail to persist and return this information to upper layers, offering more transparent error reporting and improved debugging.
client/.env.preview (2)
Line range hint 32-33
: Verify fee recipient configuration
Since this is a mainnet configuration, please ensure the VRF provider and client fee recipient addresses are correct as these will handle real transactions.
✅ Verification successful
Let me gather more context about these addresses by checking their usage and configuration patterns.
Let me try one more search to get the surrounding context of these addresses with a different approach.
Fee recipient and VRF provider addresses appear to be correctly configured
Based on the verification results:
- The fee recipient address (0x045c...) is consistently used as a burner address across the SDK constants and bridge configuration
- The VRF provider address (0x051f...) is properly configured in the starknet provider context for random number generation
- Both addresses are used in appropriate contexts with clear documentation and purpose
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for fee recipient address usage
# Look for fee recipient address references
echo "Checking fee recipient usage..."
rg "0x045c587318c9ebcf2fbe21febf288ee2e3597a21cd48676005a5770a50d433c5"
# Look for VRF provider address references
echo "Checking VRF provider usage..."
rg "0x051fea4450da9d6aee758bdeba88b2f665bcbf549d2c61421aa724e9ac0ced8f"
Length of output: 695
Script:
#!/bin/bash
# Check for any configuration files and environment files that might contain these addresses
echo "Checking configuration files..."
fd -e json -e env -e yaml -e toml -X grep -l "0x045c587318c9ebcf2fbe21febf288ee2e3597a21cd48676005a5770a50d433c5\|0x051fea4450da9d6aee758bdeba88b2f665bcbf549d2c61421aa724e9ac0ced8f"
# Check the context around these addresses in the found files
echo -e "\nChecking context in source files..."
rg -B 2 -A 2 "0x045c587318c9ebcf2fbe21febf288ee2e3597a21cd48676005a5770a50d433c5\|0x051fea4450da9d6aee758bdeba88b2f665bcbf549d2c61421aa724e9ac0ced8f"
Length of output: 495
Script:
#!/bin/bash
# Check the context around the fee recipient in the SDK constants
echo "Checking SDK constants context..."
rg -B 5 -A 5 "VELORDS_FEE_RECIPIENT" sdk/packages/eternum/src/constants/global.ts
# Check the context of the VRF provider in the starknet provider file
echo -e "\nChecking Starknet provider context..."
rg -B 5 -A 5 "0x051fea4450da9d6aee758bdeba88b2f665bcbf549d2c61421aa724e9ac0ced8f" client/src/hooks/context/starknet-provider.tsx
# Check config context
echo -e "\nChecking config context..."
rg -B 5 -A 5 "velords_fee_recipient" config/index.ts
Length of output: 2473
- 6-7: Verify mainnet configuration consistency

  The transition to mainnet involves multiple related configuration changes. Please ensure all components are properly configured for mainnet:

  - TORII endpoint: `mainnet-cache-2`
  - Node URL: `mainnet`
  - TORII relay: `mainnet-cache-2`
  - Chain and slot: `mainnet`
Also applies to: 12-12, 17-18
✅ Verification successful
Let me try a different approach to verify the configuration consistency.
Configuration appears consistent for mainnet deployment
Based on the codebase analysis, the mainnet configuration is consistently implemented across the relevant files:
- The TORII endpoint (`mainnet-cache-2`) and Node URL (`mainnet`) in `.env.preview` align with the mainnet configuration used throughout the codebase.
- Chain configuration is properly handled through environment variables and correctly switches between mainnet and other environments.
- The Starknet provider is properly configured to use mainnet chains when `VITE_PUBLIC_CHAIN === "mainnet"`.
- No inconsistent or conflicting sepolia configurations remain in the production paths.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify consistent mainnet configuration across the codebase
# Check for any remaining sepolia references
echo "Checking for any remaining sepolia references..."
rg -i "sepolia" --type ts --type tsx --type json
# Check for consistent mainnet-cache-2 usage
echo "Checking mainnet-cache-2 configuration consistency..."
rg -i "mainnet-cache-[0-9]+" --type ts --type tsx --type json
# Look for any environment loading patterns
echo "Checking environment variable usage..."
ast-grep --pattern 'process.env.VITE_PUBLIC_CHAIN'
ast-grep --pattern 'process.env.VITE_PUBLIC_TORII'
Length of output: 581
Script:
#!/bin/bash
# Search for environment files and their content
echo "Checking all .env files..."
fd -e env -x cat {}
# Search for configuration related terms
echo "\nChecking configuration patterns..."
rg -i "(mainnet|sepolia|goerli|testnet)" --type=ts --type=js --type=json
# Search for URL patterns in the codebase
echo "\nChecking URL patterns..."
rg -i "(api\.cartridge\.gg|torii)" --type=ts --type=js --type=json
Length of output: 12319
```ts
return new Promise((resolve, reject) => {
  const transaction = dbConnection.transaction(["entities"], "readonly");
  const store = transaction.objectStore("entities");
  const request = store.getAll();

  request.onsuccess = () => {
    const entities = request.result;

    const CHUNK_SIZE = 50000;

    // Process entities in chunks
    for (let i = 0; i < entities.length; i += CHUNK_SIZE) {
      const chunk = entities.slice(i, i + CHUNK_SIZE);
      const chunkMap: Entities = {};

      for (const entity of chunk) {
        const { id, ...data } = entity;
        chunkMap[id] = data;
      }

      setEntities(chunkMap, components, false);
    }
```
🛠️ Refactor suggestion
Consider streaming or asynchronous iteration for large datasets.
Processing 50,000 entries per chunk may be memory-intensive for large databases. If the dataset grows significantly, consider streaming or an asynchronous iteration approach (e.g., openCursor) to manage memory usage more gracefully.
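The chunked mapping step itself can also be isolated as a pure helper, which keeps the memory behavior testable and makes a later switch to cursor-based iteration easier. The names and types below are illustrative, not the actual `indexedDB.ts` code.

```typescript
// Pure sketch of the chunked mapping step: turn an array of stored rows
// ({ id, ...data }) into a sequence of id-keyed maps of bounded size, so
// each consumer call (e.g. setEntities) handles at most chunkSize records.
type StoredEntity = { id: string } & Record<string, unknown>;

function* chunkEntities(
  entities: StoredEntity[],
  chunkSize: number,
): Generator<Record<string, Record<string, unknown>>> {
  for (let i = 0; i < entities.length; i += chunkSize) {
    const chunkMap: Record<string, Record<string, unknown>> = {};
    for (const { id, ...data } of entities.slice(i, i + chunkSize)) {
      chunkMap[id] = data;
    }
    yield chunkMap;
  }
}
```

With this in place, the `onsuccess` handler reduces to `for (const chunk of chunkEntities(entities, CHUNK_SIZE)) setEntities(chunk, components, false);`, and a cursor-based source could feed the same generator later.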
```ts
async function clearCache() {
  Object.keys(localStorage)
    .filter((x) => x.endsWith("_query"))
    .forEach((x) => localStorage.removeItem(x));

  indexedDB.deleteDatabase(DB_NAME);
  location.reload();
}
```
🛠️ Refactor suggestion
Reevaluate forcing page reload after cache removal.
Calling “location.reload()” may disrupt users if done unexpectedly (for instance, if triggered by a background or user-driven action). Depending on your application’s workflow, consider a more graceful approach—such as notifying the user that a reload is needed, or elegantly reinitializing the UI after clearing the cache.
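Whatever the reload/notify decision ends up being, the key selection is a pure piece that can be factored out and unit-tested on its own. The helper name below is illustrative:

```typescript
// Select the timestamp-cache keys (the "*_query" entries) from a list of
// storage keys, leaving unrelated localStorage entries untouched.
function selectQueryCacheKeys(keys: string[]): string[] {
  return keys.filter((key) => key.endsWith("_query"));
}
```

`clearCache` would then call `selectQueryCacheKeys(Object.keys(localStorage))` and remove only those entries before deleting the database.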
```js
let debouncedSendBatch = debounce(() => {
  if (Object.keys(entityBatch).length > 0) {
    console.log("Worker: Sending batch", entityBatch);
    self.postMessage({ updatedEntities: entityBatch });
    entityBatch = {};
  }
}, DEBOUNCE_DELAY);
```
🛠️ Refactor suggestion
Add error handling and consistent logging.
The batch sending logic needs improvement in error handling and logging consistency:
- Console.log should respect the logging flag
- Add error handling for postMessage
- Consider using `const` for the debounced function
-let debouncedSendBatch = debounce(() => {
+const debouncedSendBatch = debounce(() => {
if (Object.keys(entityBatch).length > 0) {
- console.log("Worker: Sending batch", entityBatch);
+ if (logging) console.log("Worker: Sending batch", entityBatch);
+ try {
self.postMessage({ updatedEntities: entityBatch });
entityBatch = {};
+ } catch (error) {
+ if (logging) console.error("Worker: Failed to send batch", error);
+ }
}
}, DEBOUNCE_DELAY);
Committable suggestion skipped: line range outside the PR's diff.
```js
self.onmessage = async (e) => {
  const { type, entities, logging: logFlag } = e.data;
  if (type === "update") {
    logging = logFlag;
    if (logging) console.log("Worker: Received entities update");
    handleUpdate(entities.fetchedEntities, entities.data);
  }
};
```
Add input validation and error handling for messages.
The message handler needs proper validation and error handling:
- Remove unnecessary async since there are no await operations
- Add validation for required message properties
- Add error handling for malformed messages
-self.onmessage = async (e) => {
+self.onmessage = (e) => {
+ try {
+ if (!e.data || typeof e.data !== 'object') {
+ throw new Error('Invalid message format');
+ }
+
const { type, entities, logging: logFlag } = e.data;
+
+ if (!type || !entities) {
+ throw new Error('Missing required message properties');
+ }
+
if (type === "update") {
logging = logFlag;
if (logging) console.log("Worker: Received entities update");
handleUpdate(entities.fetchedEntities, entities.data);
}
+ } catch (error) {
+ if (logging) console.error("Worker: Message processing error", error);
+ self.postMessage({ error: error.message });
+ }
};
Actionable comments posted: 1
♻️ Duplicate comments (1)
client/src/ui/layouts/World.tsx (1)
- 206-220 (🛠️ Refactor suggestion): Add cleanup and improve market subscription handling

  The market subscription needs similar improvements for proper cleanup and error handling.

  Consider this improved implementation:

  ```diff
   useEffect(() => {
  +  let isSubscribed = true;
  +  setMarketLoading(true); // Moved outside async function as per previous review
     const fetch = async () => {
  -    setMarketLoading(true);
       try {
  -      await addMarketSubscription(dojo.network.toriiClient, dojo.network.contractComponents as any, dojo.setup.db);
  +      const result = await addMarketSubscription(
  +        dojo.network.toriiClient,
  +        dojo.network.contractComponents as any,
  +        dojo.setup.db
  +      );
  +      if (!isSubscribed) return;
       } catch (error) {
         console.error("Market fetch failed", error);
       } finally {
  -      setMarketLoading(false);
  +      if (isSubscribed) {
  +        setMarketLoading(false);
  +      }
       }
     };
     fetch();
  +
  +  return () => {
  +    isSubscribed = false;
  +  };
   }, []);
  ```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- client/src/ui/layouts/World.tsx (3 hunks)
🔇 Additional comments (2)
client/src/ui/layouts/World.tsx (2)
- Line range hint 119-167: Enhance subscription management and error handling
The subscription management could be improved in several ways:
- Missing cleanup function to handle component unmount
- Basic error handling without retry mechanism
- Potential race condition with multiple setState calls
Consider implementing this improved version:
useEffect(() => {
if (
!structureEntityId ||
subscriptions[structureEntityId.toString()] ||
subscriptions[ADMIN_BANK_ENTITY_ID.toString()] ||
structureEntityId === 999999999
) {
return;
}
+ let isSubscribed = true;
const position = getComponentValue(
dojo.setup.components.Position,
getEntityIdFromKeys([BigInt(structureEntityId)]),
);
setWorldLoading(true);
- setSubscriptions((prev) => ({
- ...prev,
- [structureEntityId.toString()]: true,
- [ADMIN_BANK_ENTITY_ID.toString()]: true,
- }));
const fetch = async () => {
+ const subscriptionPromises = [];
try {
- await Promise.all([
- addToSubscription(
+ subscriptionPromises.push(addToSubscription(
dojo.network.toriiClient,
dojo.network.contractComponents as any,
[structureEntityId.toString()],
dojo.setup.db,
[{ x: position?.x || 0, y: position?.y || 0 }],
- ),
+ ));
- addToSubscription(
+ subscriptionPromises.push(addToSubscription(
dojo.network.toriiClient,
dojo.network.contractComponents as any,
[ADMIN_BANK_ENTITY_ID.toString()],
dojo.setup.db,
- ),
+ ));
+ const results = await Promise.allSettled(subscriptionPromises);
+
+ if (isSubscribed) {
+ const successfulSubscriptions = {};
+ results.forEach((result, index) => {
+ if (result.status === 'fulfilled') {
+ const entityId = index === 0 ? structureEntityId.toString() : ADMIN_BANK_ENTITY_ID.toString();
+ successfulSubscriptions[entityId] = true;
+ } else {
+ console.error(`Subscription ${index} failed:`, result.reason);
+ }
+ });
+
+ setSubscriptions(prev => ({
+ ...prev,
+ ...successfulSubscriptions
+ }));
+ }
} catch (error) {
console.error("Fetch failed", error);
} finally {
- setWorldLoading(false);
+ if (isSubscribed) {
+ setWorldLoading(false);
+ }
}
};
fetch();
+
+ return () => {
+ isSubscribed = false;
+ };
}, [structureEntityId, subscriptions]);
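The `isSubscribed` flag in the diff above is the standard stale-closure guard: set it in the effect body, clear it in the cleanup function, and check it before every state update. Stripped of React, the pattern reduces to a flag consulted before any callback fires. A minimal TypeScript sketch (all names here are illustrative, not part of the Eternum codebase):

```typescript
// Minimal sketch of the stale-flag guard, outside React:
// deliveries are ignored once unsubscribe() has run, just as the
// effect above skips setState calls after the component unmounts.
function makeSubscription(onData: (d: number) => void) {
  let isSubscribed = true;
  return {
    deliver(d: number) {
      if (isSubscribed) onData(d); // skipped after cleanup
    },
    unsubscribe() {
      isSubscribed = false; // what the effect's cleanup function does
    },
  };
}

const received: number[] = [];
const sub = makeSubscription((d) => received.push(d));
sub.deliver(1); // delivered
sub.unsubscribe();
sub.deliver(2); // ignored: the flag prevents a stale update
console.log(received); // [ 1 ]
```

Note the flag must be a local `let` captured by the closure, not component state: updating state from the cleanup path would itself be the bug the flag exists to prevent.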
6-6: Verify consistent subscription implementation across components
Let's verify that the subscription pattern is consistently implemented across all components using these functions.
// Handle filtered structures subscription
useEffect(() => {
  if (filteredStructures.length === 0) return;

  setWorldLoading(true);
  setSubscriptions((prev) => ({
    ...prev,
    ...Object.fromEntries(filteredStructures.map((structure) => [structure.entity_id.toString(), true])),
  }));

  const fetch = async () => {
    try {
      await Promise.all([
        addToSubscriptionOneKeyModelbyRealmEntityId(
          dojo.network.toriiClient,
          dojo.network.contractComponents as any,
          [structureEntityId.toString()],
          [{ x: position?.x || 0, y: position?.y || 0 }],
          filteredStructures.map((structure) => structure.entity_id.toString()),
          dojo.setup.db,
        ),
        addToSubscription(dojo.network.toriiClient, dojo.network.contractComponents as any, [
          ADMIN_BANK_ENTITY_ID.toString(),
        ]),
        addToSubscription(
          dojo.network.toriiClient,
          dojo.network.contractComponents as any,
          [...filteredStructures.map((structure) => structure.entity_id.toString())],
          [...filteredStructures.map((structure) => ({ x: structure.position.x, y: structure.position.y }))],
          filteredStructures.map((structure) => structure.entity_id.toString()),
          dojo.setup.db,
          filteredStructures.map((structure) => ({ x: structure.position.x, y: structure.position.y })),
        ),
        addMarketSubscription(dojo.network.toriiClient, dojo.network.contractComponents as any),
      ]);
    } catch (error) {
      console.error("Fetch failed", error);
    } finally {
      setWorldLoading(false);
    }
  };

  fetch();
}, [filteredStructures]);
🛠️ Refactor suggestion
Consolidate subscription logic and prevent memory leaks
The filtered structures subscription has similar issues to the initial subscription and introduces additional complexity with duplicate subscription calls.
Consider consolidating the subscription logic and implementing proper cleanup:
useEffect(() => {
if (filteredStructures.length === 0) return;
+ let isSubscribed = true;
setWorldLoading(true);
- setSubscriptions((prev) => ({
- ...prev,
- ...Object.fromEntries(filteredStructures.map((structure) => [structure.entity_id.toString(), true])),
- }));
const fetch = async () => {
+ const entityIds = filteredStructures.map(structure => structure.entity_id.toString());
+ const positions = filteredStructures.map(structure => ({
+ x: structure.position.x,
+ y: structure.position.y
+ }));
try {
- await Promise.all([
- addToSubscriptionOneKeyModelbyRealmEntityId(
+ const results = await Promise.allSettled([
+ addToSubscriptionOneKeyModelbyRealmEntityId(
dojo.network.toriiClient,
dojo.network.contractComponents as any,
- filteredStructures.map((structure) => structure.entity_id.toString()),
+ entityIds,
dojo.setup.db,
),
- addToSubscription(
+ addToSubscription(
dojo.network.toriiClient,
dojo.network.contractComponents as any,
- filteredStructures.map((structure) => structure.entity_id.toString()),
+ entityIds,
dojo.setup.db,
- filteredStructures.map((structure) => ({ x: structure.position.x, y: structure.position.y })),
+ positions,
),
]);
+ if (isSubscribed) {
+ const successfulSubscriptions = {};
+ results.forEach((result, index) => {
+ if (result.status === 'fulfilled') {
+ entityIds.forEach(id => {
+ successfulSubscriptions[id] = true;
+ });
+ } else {
+ console.error(`Batch subscription ${index} failed:`, result.reason);
+ }
+ });
+
+ setSubscriptions(prev => ({
+ ...prev,
+ ...successfulSubscriptions
+ }));
+ }
} catch (error) {
console.error("Fetch failed", error);
} finally {
- setWorldLoading(false);
+ if (isSubscribed) {
+ setWorldLoading(false);
+ }
}
};
fetch();
+
+ return () => {
+ isSubscribed = false;
+ };
}, [filteredStructures]);
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Handle filtered structures subscription
useEffect(() => {
  if (filteredStructures.length === 0) return;
  let isSubscribed = true;
  setWorldLoading(true);
  const fetch = async () => {
    const entityIds = filteredStructures.map(structure => structure.entity_id.toString());
    const positions = filteredStructures.map(structure => ({
      x: structure.position.x,
      y: structure.position.y
    }));
    try {
      const results = await Promise.allSettled([
        addToSubscriptionOneKeyModelbyRealmEntityId(
          dojo.network.toriiClient,
          dojo.network.contractComponents as any,
          entityIds,
          dojo.setup.db,
        ),
        addToSubscription(
          dojo.network.toriiClient,
          dojo.network.contractComponents as any,
          entityIds,
          dojo.setup.db,
          positions,
        ),
      ]);
      if (isSubscribed) {
        const successfulSubscriptions = {};
        results.forEach((result, index) => {
          if (result.status === 'fulfilled') {
            entityIds.forEach(id => {
              successfulSubscriptions[id] = true;
            });
          } else {
            console.error(`Batch subscription ${index} failed:`, result.reason);
          }
        });

        setSubscriptions(prev => ({
          ...prev,
          ...successfulSubscriptions
        }));
      }
    } catch (error) {
      console.error("Fetch failed", error);
    } finally {
      if (isSubscribed) {
        setWorldLoading(false);
      }
    }
  };
  fetch();
  return () => {
    isSubscribed = false;
  };
}, [filteredStructures]);
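The switch from Promise.all to Promise.allSettled is the load-bearing change in both suggestions: one rejected subscription no longer discards the results of the ones that succeeded. A self-contained sketch of that result partitioning (the subscribeAll helper and its ids are illustrative, not Eternum APIs):

```typescript
// Illustrative sketch: Promise.allSettled reports every outcome, so
// failed subscriptions can be logged while successful ones are kept.
async function subscribeAll(ids: string[]): Promise<string[]> {
  const attempts = ids.map((id) =>
    id.startsWith("bad")
      ? Promise.reject(new Error(`subscription failed for ${id}`))
      : Promise.resolve(id),
  );
  // Unlike Promise.all, allSettled never rejects as a whole.
  const results = await Promise.allSettled(attempts);
  const succeeded: string[] = [];
  results.forEach((result, index) => {
    if (result.status === "fulfilled") {
      succeeded.push(result.value);
    } else {
      console.error(`Subscription ${index} failed:`, result.reason.message);
    }
  });
  return succeeded;
}

const okPromise = subscribeAll(["a", "bad-b", "c"]);
okPromise.then((ok) => console.log(ok)); // [ 'a', 'c' ]
```

One caveat: with allSettled the surrounding try/catch in the suggested diffs only fires on programming errors, not on failed subscriptions, so the per-result error logging inside the loop is the real failure path.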
Summary by CodeRabbit
Release Notes
New Features
ProgressBar component for better visual representation of progress.

Improvements
Bug Fixes