
cache #2558

Open · wants to merge 6 commits into base: next

Conversation

@ponderingdemocritus (Contributor) commented Dec 18, 2024

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced functionality for managing an IndexedDB database.
    • Added a "clear cache" button in the settings interface to clear local storage and reload the application.
    • Implemented a new ProgressBar component for better visual representation of progress.
  • Improvements

    • Enhanced subscription management by integrating database parameters across various components.
    • Improved data handling capabilities in the setup process with new database interactions.
    • Updated deployment configurations and versioning for the application.
    • Transitioned network configuration from Sepolia to Mainnet.
  • Bug Fixes

    • Removed obsolete function to streamline entity fetching logic.

vercel bot commented Dec 18, 2024

The latest updates on your projects. Learn more about Vercel for Git ↗︎

| Name | Status | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| eternum | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Dec 18, 2024 9:22am |
| eternum-docs | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Dec 18, 2024 9:22am |

1 Skipped Deployment

| Name | Status | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| eternum-landing | ⬜️ Ignored (Inspect) | Visit Preview | | Dec 18, 2024 9:22am |

coderabbitai bot (Contributor) commented Dec 18, 2024

Warning

Rate limit exceeded

@ponderingdemocritus has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 12 minutes and 25 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 2b97511 and 096fb90.

📒 Files selected for processing (1)
  • client/src/dojo/setup.ts (3 hunks)

Walkthrough

This pull request introduces comprehensive updates to the Eternum client's data management and caching infrastructure. The changes primarily focus on integrating IndexedDB functionality across the application, adding a new indexedDB.ts module to handle database operations. Multiple components have been modified to incorporate database connection parameters in subscription functions, enabling more robust data synchronization and local storage management. The updates span package dependency versions, query handling, and user interface components, with a particular emphasis on improving data retrieval and caching mechanisms.

Changes

| File | Change Summary |
| --- | --- |
| client/package.json | Updated @dojoengine/* package dependencies from 1.0.4-alpha.3.1.0 to 1.0.4-alpha.3.1.1 |
| client/src/dojo/indexedDB.ts | New file introducing IndexedDB management functions: openDatabase, syncEntitiesFromStorage, insertEntitiesInDB, and clearCache |
| client/src/dojo/queries.ts | Removed syncPosition function; updated subscription functions to take a database connection parameter |
| client/src/dojo/setup.ts | Integrated database connection and synchronization logic |
| client/src/ui/components/* | Updated subscription function calls to pass dojo.setup.db in multiple components |
| client/src/ui/modules/settings/Settings.tsx | Added "clear cache" button invoking the clearCache function |
| client/.env.preview | Updated environment variables for the Sepolia deployment from sepolia-rc-18 to sepolia-rc-19 |
| client/src/dojo/entityWorker.js | New file handling entity updates in a web worker with debouncing |
| client/src/dojo/worker.ts | Added setupWorker function for initializing the web worker with configuration |
| client/src/ui/modules/navigation/TopLeftNavigation.tsx | Introduced ProgressBar component and refactored progress display logic |
| scripts/deploy.sh | Updated deployment script to version v1.0.8 and adjusted environment variable handling |

Suggested Reviewers

  • edisontim
  • bob0005

Poem

🐰 In the realm of data's flow,
A rabbit hops through IndexedDB's glow
Caching entities with gentle care
Synchronizing worlds beyond compare
Cache cleared, performance takes flight! 🚀


mentatbot bot (Contributor) commented Dec 18, 2024

You are out of MentatBot reviews. Your usage will refresh December 23 at 08:00 AM.

Failed to generate code suggestions for PR

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 6

🧹 Nitpick comments (9)
client/package.json (1)

22-29: Consider documenting alpha dependency requirements

Since the project relies on specific alpha versions of @dojoengine packages for the caching implementation, consider:

  1. Adding a note in the README about the alpha dependency requirements
  2. Documenting any known limitations or issues
  3. Creating an upgrade plan for when stable versions become available
client/src/dojo/indexedDB.ts (2)

8-31: Consider adding defensive code or a fallback mechanism for database version conflicts.
If an older version of the database or a user with a stale browser tries to open the database, a version conflict could occur. While this implementation is acceptable, explicitly handling such scenarios (e.g., bridging from older versions to newer versions via migrations) can help prevent unexpected upgrade failures.
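
A minimal sketch of such defensive handling, assuming the "entities" store and "id" keyPath used elsewhere in this PR (the function name and database name are illustrative, not the PR's code):

```ts
// Illustrative sketch: surface version conflicts (e.g., an older tab holding
// the database open) instead of failing silently.
function openDatabaseSafely(name = "eternum", version = 1): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open(name, version);

    // Another tab still holds an older connection, so the upgrade cannot run yet.
    request.onblocked = () => reject(new Error("Database upgrade blocked by another open tab"));
    request.onerror = () => reject(request.error);

    request.onupgradeneeded = () => {
      const db = request.result;
      if (!db.objectStoreNames.contains("entities")) {
        db.createObjectStore("entities", { keyPath: "id" });
      }
    };

    request.onsuccess = () => {
      const db = request.result;
      // If a newer version is opened elsewhere, close this connection so it can proceed.
      db.onversionchange = () => db.close();
      resolve(db);
    };
  });
}
```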


99-106: Avoid forcing a page reload after cache deletion.
Reloading helps ensure a clean state, but it might feel disruptive to the user experience. You could consider a more graceful approach to clearing the UI state without a reload—e.g., re-initializing your data or re-running your subscription logic. That way, users can continue their workflow right away.
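
As a sketch of the reload-free alternative (the `resync` callback is hypothetical, standing in for whatever re-subscription entry point the app exposes):

```ts
// Sketch only: clear the "entities" store, then re-run sync logic in place
// of window.location.reload().
async function clearCacheGracefully(db: IDBDatabase, resync: () => Promise<void>): Promise<void> {
  await new Promise<void>((resolve, reject) => {
    const tx = db.transaction(["entities"], "readwrite");
    tx.objectStore("entities").clear();
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
  await resync(); // hypothetical re-initialization hook
}
```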

client/src/dojo/queries.ts (4)

11-11: Use distinct transaction keys for separate subscription calls.
Right now, the subscription call for the two-key model uses the cache key 'entity_two_key_${entityID}_query'. If many entity IDs are given, this might result in collisions or ambiguous logs. You could more thoroughly separate them, for example by joining them with a delimiter or hashing the entity list.


42-42: Document possible concurrency issues.
Multiple addToSubscriptionOneKeyModelbyRealmEntityId calls could run concurrently inserting or fetching from IndexedDB. While IndexedDB handles concurrency, documenting or illustrating your concurrency approach will help future maintainers.


64-65: Reuse or refactor repeated patterns.
The parameter structure—like dbConnection: db, timestampCacheKey: ...—is repeated across multiple subscription functions. Standardizing or reusing a helper function can reduce duplication and ensure uniform logging, caching, and error-handling patterns.


133-133: Investigate time-based and event-based caching strategies.
Setting a timestampCacheKey is a good start. Consider an event-based or more advanced time-based invalidation scheme to align with data volatility in your app. This might help ensure your store stays fresh without incurring an abundance of queries.

client/src/ui/components/resources/InventoryResources.tsx (1)

54-59: Cache invalidation approach may be overly simplistic.
Currently, you store a timestamp and skip the subscription if the data is still considered “fresh.” While this optimizes performance, it might result in stale data if an entity’s resources change quickly. Consider adding an event-based or partial invalidation approach for critical real-time features.
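
For illustration, a simple freshness gate of the kind described here (the storage location and TTL are assumptions, not taken from the PR):

```ts
// Sketch: skip re-subscribing while the cached data is younger than the TTL.
const FRESHNESS_TTL_MS = 60_000;

function isFresh(timestampCacheKey: string): boolean {
  const stored = localStorage.getItem(timestampCacheKey);
  return stored !== null && Date.now() - Number(stored) < FRESHNESS_TTL_MS;
}

function markFetched(timestampCacheKey: string): void {
  localStorage.setItem(timestampCacheKey, String(Date.now()));
}
```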

client/src/dojo/setup.ts (1)

Line range hint 122-161: Consider extracting configuration values

The timeout values and cache keys are hardcoded. Consider moving these to a configuration object for better maintainability.

+ const CONFIG = {
+   QUERY_TIMEOUT: 40_000,
+   CACHE_KEYS: {
+     CONFIG: 'config_query',
+     SINGLE_KEYED: 'single_keyed_query',
+     DOUBLE_KEYED: 'double_keyed_query'
+   }
+ };

  await getEntities(
    network.toriiClient,
    { Composite: { operator: "Or", clauses: configClauses } },
    network.contractComponents as any,
    [],
    [],
-   40_000,
+   CONFIG.QUERY_TIMEOUT,
    false,
-   { dbConnection: indexedDB, timestampCacheKey: "config_query" },
+   { dbConnection: indexedDB, timestampCacheKey: CONFIG.CACHE_KEYS.CONFIG },
  );
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3a505c8 and 6a47469.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (8)
  • client/package.json (1 hunks)
  • client/src/dojo/indexedDB.ts (1 hunks)
  • client/src/dojo/queries.ts (6 hunks)
  • client/src/dojo/setup.ts (5 hunks)
  • client/src/ui/components/resources/InventoryResources.tsx (1 hunks)
  • client/src/ui/components/trading/ResourceArrivals.tsx (1 hunks)
  • client/src/ui/layouts/World.tsx (1 hunks)
  • client/src/ui/modules/settings/Settings.tsx (3 hunks)
🔇 Additional comments (6)
client/package.json (1)

22-29: Verify changelog and compatibility for @dojoengine packages

The update from 1.0.4-alpha.3.1.0 to 1.0.4-alpha.3.1.1 across all @dojoengine packages should be verified for breaking changes, especially given their alpha status and the new IndexedDB integration.

client/src/dojo/indexedDB.ts (1)

33-61: Validate entity IDs and data before storing.
When fetching data from the store, ensure that the data structure expected by your application aligns with what’s actually in request.result. A quick validation or type-check can protect against malformed data in the "entities" store (e.g., if an entry's id property is missing or not fitting the expected format).
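
A possible type guard, assuming records are stored as `{ id, ...modelData }` as in insertEntitiesInDB (the guard itself is a sketch, not the PR's code):

```ts
// Sketch: validate records read back from the "entities" store before use.
interface StoredEntity {
  id: string;
  [model: string]: unknown;
}

function isStoredEntity(value: unknown): value is StoredEntity {
  return typeof value === "object" && value !== null && typeof (value as { id?: unknown }).id === "string";
}

// e.g., inside the read request's onsuccess handler:
// const valid = (request.result as unknown[]).filter(isStoredEntity);
```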

client/src/ui/components/trading/ResourceArrivals.tsx (1)

32-37: Ensure consistent error handling when database connection is unavailable.
By adding the dojo.setup.db parameter, there’s a new dependency on a valid IDBDatabase. If dojo.setup.db is null or otherwise undefined, the function call could fail. A fallback or a guard that checks dojo.setup.db before calling addToSubscription will improve resilience.
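
For example, a guard of the kind suggested (identifiers reused from the surrounding diff; `entityIds` and the fallback behavior are illustrative):

```ts
// Sketch: check the connection up front and surface a clear error.
if (!dojo.setup.db) {
  console.error("addToSubscription skipped: IndexedDB connection not initialized");
  return;
}
await addToSubscription(
  dojo.network.toriiClient,
  dojo.network.contractComponents as any,
  entityIds, // placeholder for the component's entity ID list
  dojo.setup.db,
);
```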

client/src/dojo/queries.ts (3)

33-34: Check for empty arrays before constructing clauses.
When entityID is empty, this may create an empty set of clauses. Handle this edge case gracefully (e.g., skip the query or log a message) to avoid unnecessary queries.


73-73: Check for user confusion with optional 'position' parameter.
Here you expand the subscription to handle position as well. Ensure that calling code is aware that position changes the query keys. You might consider splitting this out to a separate function or clarifying how it works with an in-code comment.


116-116: Provide fallback logic for db if needed.
The db argument is generally required. If this function ends up being called in a context without a valid IndexedDB instance, you might want to fail gracefully or skip local caching.

Comment on lines +63 to +97
async function insertEntitiesInDB(db: IDBDatabase, entities: Entities): Promise<void> {
  return new Promise((resolve, reject) => {
    const transaction = db.transaction(["entities"], "readwrite");
    const store = transaction.objectStore("entities");

    let error: Error | null = null;

    // Handle transaction completion
    transaction.oncomplete = () => {
      if (error) {
        reject(error);
      } else {
        resolve();
      }
    };

    transaction.onerror = () => {
      reject(transaction.error);
    };

    // Store each entity
    for (const [entityId, data] of Object.entries(entities)) {
      const entityData = {
        id: entityId,
        ...data,
      };

      const request = store.put(entityData);

      request.onerror = () => {
        error = request.error;
      };
    }
  });
}

🛠️ Refactor suggestion

Consolidate transaction-level error handling.
Here, you track write errors per entity but also rely on the top-level transaction error event. In some scenarios, partial success with multiple items can be tricky. Consider capturing partial success more explicitly (e.g., storing or returning the IDs that fail to persist). This can provide better feedback in case a subset of entities fails to be inserted.
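
One way to surface partial failures, sketched under the same store layout as the snippet above (the function name is illustrative):

```ts
// Sketch: report which entity IDs failed to persist instead of a single error.
async function insertEntitiesReportingFailures(
  db: IDBDatabase,
  entities: Record<string, unknown>,
): Promise<string[]> {
  return new Promise((resolve, reject) => {
    const tx = db.transaction(["entities"], "readwrite");
    const store = tx.objectStore("entities");
    const failedIds: string[] = [];

    for (const [id, data] of Object.entries(entities)) {
      const request = store.put({ id, ...(data as object) });
      request.onerror = (event) => {
        failedIds.push(id);
        event.preventDefault(); // keep the transaction alive for remaining puts
      };
    }

    tx.oncomplete = () => resolve(failedIds); // empty array means full success
    tx.onerror = () => reject(tx.error);
  });
}
```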

Comment on lines +106 to +107
false,
{ dbConnection: db, timestampCacheKey: `entity_${entityID}_query` },

🛠️ Refactor suggestion

Ensure stable or unique cache keys for multiple entities.
When an array of entities is passed, the interpolation 'entity_${entityID}_query' might produce unexpected collisions if entityID is itself an array. Consider a more robust approach, such as JSON-stringifying the array or hashing it.
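
A small helper along those lines (the name and delimiter are illustrative):

```ts
// Sketch: normalize string-or-array input into one unambiguous cache key.
function stableCacheKey(entityID: string | string[]): string {
  const ids = Array.isArray(entityID) ? [...entityID].sort() : [entityID];
  // Sorting makes ["a","b"] and ["b","a"] equal; the delimiter keeps
  // ["ab"] distinct from ["a","b"].
  return `entity_${ids.join("|")}_query`;
}
```

For very large ID lists, hashing the joined string would keep keys short while preserving uniqueness.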

Comment on lines +118 to +119
const indexedDB = await openDatabase();
await syncEntitiesFromStorage(indexedDB, network.contractComponents as any);

⚠️ Potential issue

Add error handling for database operations

The database initialization lacks error handling which could lead to silent failures. Consider wrapping the operations in try-catch blocks and implementing a cleanup mechanism.

-  const indexedDB = await openDatabase();
-  await syncEntitiesFromStorage(indexedDB, network.contractComponents as any);
+  let indexedDB;
+  try {
+    indexedDB = await openDatabase();
+    await syncEntitiesFromStorage(indexedDB, network.contractComponents as any);
+  } catch (error) {
+    console.error('Failed to initialize database:', error);
+    // Implement fallback mechanism or user notification
+  }

Comment on lines 219 to 221
eventSync,
db: indexedDB,
};

🛠️ Refactor suggestion

Improve type safety and encapsulation of the database instance

Directly exposing the database instance could lead to misuse. Consider creating a typed interface for database operations.

+ interface DatabaseOperations {
+   connection: IDBDatabase;
+   // Add specific database operation methods here
+ }

  return {
    network,
    components,
    systemCalls,
    sync,
    eventSync,
-   db: indexedDB,
+   db: {
+     connection: indexedDB,
+     // Implement specific database operation methods
+   } as DatabaseOperations,
  };

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +175 to +183
<Button
  onClick={() => {
    clearCache();
    setShowSettings(false);
    setBlankOverlay(true);
  }}
>
  clear cache
</Button>

🛠️ Refactor suggestion

Enhance user experience for cache clearing operation

The cache clearing operation lacks user confirmation and feedback. Consider adding these safety measures and improving the user experience.

  <Button
    onClick={() => {
+     if (!window.confirm('Are you sure you want to clear the cache? This will reload the page.')) {
+       return;
+     }
+     try {
        clearCache();
        setShowSettings(false);
        setBlankOverlay(true);
+       // Show success message before reload
+       toast.success('Cache cleared successfully');
+     } catch (error) {
+       toast.error('Failed to clear cache');
+       console.error('Cache clear failed:', error);
+     }
    }}
  >
-   clear cache
+   Clear Cache
  </Button>

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines 153 to 184
    [...filteredStructures.map((structure) => structure.entity_id.toString())],
    dojo.setup.db,
  ),
  addToSubscriptionTwoKeyModelbyRealmEntityId(
    dojo.network.toriiClient,
    dojo.network.contractComponents as any,
    [...filteredStructures.map((structure) => structure.entity_id.toString())],
    dojo.setup.db,
  ),
  addToSubscription(
    dojo.network.toriiClient,
    dojo.network.contractComponents as any,
    [structureEntityId.toString()],
    dojo.setup.db,
    [{ x: position?.x || 0, y: position?.y || 0 }],
  ),
  addToSubscription(dojo.network.toriiClient, dojo.network.contractComponents as any, [
    ADMIN_BANK_ENTITY_ID.toString(),
  ]),

  addToSubscription(
    dojo.network.toriiClient,
    dojo.network.contractComponents as any,
    [ADMIN_BANK_ENTITY_ID.toString()],
    dojo.setup.db,
  ),

  addToSubscription(
    dojo.network.toriiClient,
    dojo.network.contractComponents as any,
    [...filteredStructures.map((structure) => structure.entity_id.toString())],
    dojo.setup.db,
    [...filteredStructures.map((structure) => ({ x: structure.position.x, y: structure.position.y }))],
  ),
  addMarketSubscription(dojo.network.toriiClient, dojo.network.contractComponents as any),
  addMarketSubscription(dojo.network.toriiClient, dojo.network.contractComponents as any, dojo.setup.db),

🛠️ Refactor suggestion

Improve subscription reliability and performance

The subscription logic could benefit from retry mechanisms and better error handling. Consider implementing batching for better performance.

+ const MAX_RETRIES = 3;
+ const BATCH_SIZE = 5;

+ async function withRetry(operation: () => Promise<any>, retries = MAX_RETRIES) {
+   try {
+     return await operation();
+   } catch (error) {
+     if (retries > 0) {
+       console.warn(`Retrying operation, ${retries} attempts remaining`);
+       return withRetry(operation, retries - 1);
+     }
+     throw error;
+   }
+ }

  const fetch = async () => {
    try {
+     // Batch subscriptions for better performance
+     const subscriptionTasks = [
+       withRetry(() => addToSubscriptionOneKeyModelbyRealmEntityId(/*...*/)),
+       withRetry(() => addToSubscriptionTwoKeyModelbyRealmEntityId(/*...*/)),
+       // ... other subscription tasks
+     ];
+     
+     for (let i = 0; i < subscriptionTasks.length; i += BATCH_SIZE) {
+       await Promise.all(subscriptionTasks.slice(i, i + BATCH_SIZE));
+     }
    } catch (error) {
      console.error("Fetch failed", error);
+     // Implement proper error recovery
    }
  };

Committable suggestion skipped: line range outside the PR's diff.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 5

🧹 Nitpick comments (12)
client/src/ui/modules/navigation/TopLeftNavigation.tsx (2)

332-341: Add ARIA attributes for accessibility

While the implementation is clean, consider enhancing accessibility by adding ARIA attributes.

 const ProgressBar = memo(({ progress }: { progress: number }) => {
   return (
     <div
+      role="progressbar"
+      aria-valuenow={progress}
+      aria-valuemin={0}
+      aria-valuemax={100}
       className="absolute bottom-0 left-0 h-1 bg-gold to-transparent rounded-bl-2xl rounded-tr-2xl mx-1"
       style={{ width: `${progress}%` }}
     ></div>
   );
 });

Line range hint 343-407: Consider user preferences for sound effects

The sound effect implementation is good, but consider adding a user preference setting for audio feedback.

Consider implementing a user preference hook:

const useUserPreferences = () => {
  const [soundEnabled, setSoundEnabled] = useState(() => 
    localStorage.getItem('soundEnabled') === 'true'
  );
  
  const toggleSound = useCallback(() => {
    setSoundEnabled(prev => {
      const newValue = !prev;
      localStorage.setItem('soundEnabled', String(newValue));
      return newValue;
    });
  }, []);

  return { soundEnabled, toggleSound };
};

Then update the sound effect logic:

 useEffect(() => {
   if (lastProgressRef.current > progress) {
-    play();
+    if (soundEnabled) {
+      play();
+    }
   }
   lastProgressRef.current = progress;
- }, [progress, play]);
+ }, [progress, play, soundEnabled]);
scripts/deploy.sh (1)

Line range hint 1-71: Consider restructuring the deployment script for better maintainability

The script handles multiple deployment stages but could benefit from better organization and error handling:

  1. Consider breaking down the script into separate modules:
scripts/
  ├── deploy/
  │   ├── season_pass.sh
  │   ├── eternum.sh
  │   ├── season_resources.sh
  │   └── slot.sh
  ├── utils/
  │   ├── validation.sh
  │   └── env.sh
  └── deploy.sh
  2. Add validation for required environment variables at the start:
#!/bin/bash
required_vars=(
  "VITE_PUBLIC_MASTER_ADDRESS"
  "VITE_PUBLIC_MASTER_PRIVATE_KEY"
  "VITE_PUBLIC_NODE_URL"
  "SOZO_WORLD"
)

for var in "${required_vars[@]}"; do
  if [ -z "${!var}" ]; then
    echo "Error: Required environment variable $var is not set"
    exit 1
  fi
done
  3. Add clear documentation of prerequisites and usage:
#!/bin/bash
# Eternum Deployment Script
#
# Prerequisites:
# - Node.js 16+
# - Scarb
# - Sozo
# - Slot CLI
#
# Usage:
#   ./deploy.sh [--skip-season-pass] [--skip-resources]
#
# Environment variables:
#   VITE_PUBLIC_MASTER_ADDRESS - Master account address
#   VITE_PUBLIC_MASTER_PRIVATE_KEY - Master account private key
#   ...

These changes would improve maintainability, reduce errors, and make the deployment process more robust.

client/src/ui/layouts/World.tsx (2)

187-204: Optimize subscription management for filtered structures

The current implementation makes three parallel subscription calls for each filtered structure. Consider:

  1. Batching the subscriptions to prevent overwhelming the system
  2. Implementing a queue system for large numbers of structures
  3. Adding proper cleanup for stale subscriptions
+ const BATCH_SIZE = 10; // Adjust based on performance testing
+ 
+ async function processBatch(structures: PlayerStructure[], startIdx: number) {
+   const batchStructures = structures.slice(startIdx, startIdx + BATCH_SIZE);
+   if (batchStructures.length === 0) return;
+ 
+   await Promise.all([
+     addToSubscriptionOneKeyModelbyRealmEntityId(
+       dojo.network.toriiClient,
+       dojo.network.contractComponents as any,
+       batchStructures.map(s => s.entity_id.toString()),
+       dojo.setup.db,
+     ),
+     addToSubscriptionTwoKeyModelbyRealmEntityId(
+       dojo.network.toriiClient,
+       dojo.network.contractComponents as any,
+       batchStructures.map(s => s.entity_id.toString()),
+       dojo.setup.db,
+     ),
+     addToSubscription(
+       dojo.network.toriiClient,
+       dojo.network.contractComponents as any,
+       batchStructures.map(s => s.entity_id.toString()),
+       dojo.setup.db,
+       batchStructures.map(s => ({ x: s.position.x, y: s.position.y })),
+     ),
+   ]);
+ }
+ 
  const fetch = async () => {
    try {
-     await Promise.all([...])
+     for (let i = 0; i < filteredStructures.length; i += BATCH_SIZE) {
+       await processBatch(filteredStructures, i);
+     }
    } catch (error) {
-     console.error("Fetch failed", error);
+     console.error("Batch processing failed:", error);
+     // Implement retry logic for failed batches
    }
  };

150-155: Add TypeScript types for network and component parameters

The contractComponents parameter is being cast to any in multiple places. Consider:

  1. Adding proper TypeScript types for network and component parameters
  2. Creating type definitions for the subscription functions
  3. Removing the as any casts
+ type ContractComponents = {
+   // Add appropriate type definitions
+ };
+ 
+ type SubscriptionParams = {
+   toriiClient: any; // Replace with proper type
+   contractComponents: ContractComponents;
+   entityIds: string[];
+   db: IDBDatabase;
+   positions?: Array<{ x: number; y: number }>;
+ };

- dojo.network.contractComponents as any
+ dojo.network.contractComponents as ContractComponents

Also applies to: 187-192, 221-221

client/src/dojo/worker.ts (2)

14-14: Consider fallback or error-handling logic for Worker creation.

In case the Worker initialization fails (e.g., in unsupported environments or due to a loading error), it would be beneficial to handle or log this scenario gracefully.


31-49: Consider separate approach for large event volumes.

Subscribing to both onEntityUpdated and onEventMessageUpdated might result in high message traffic for large-scale events. Evaluate using more fine-grained subscription scopes or chunked updates to prevent performance bottlenecks.

client/src/dojo/entityWorker.js (3)

1-1: Confirm tree-shakable import from lodash.

Using named imports from "lodash" can inflate bundle size if not carefully configured. Depending on the build, importing only the debounce function from "lodash/debounce" might reduce overhead.


3-5: Consider removing unused variable toriiClient or clarifying its usage.

Currently, toriiClient is declared but not used in this file. If it’s needed for future logic, comment or plan for its actual usage to avoid confusion.


8-14: Potential memory usage concern with large entityBatch.

If large amounts of data are posted frequently, entityBatch can accumulate quite a bit of data before debouncedSendBatch triggers. You may consider a threshold-based flush in addition to a time-based flush, to avoid memory bloat in the worker.
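
A sketch of that combined approach (names mirror the worker's, but this is illustrative, not the PR's code; MAX_BATCH_SIZE is an assumed tuning value):

```ts
import debounce from "lodash/debounce";

// Illustrative: time-based flush plus a size threshold to bound worker memory.
const DEBOUNCE_DELAY = 1000;
const MAX_BATCH_SIZE = 500; // assumption; tune from profiling

let entityBatch: Record<string, object> = {};
let pendingCount = 0;

function sendBatch(): void {
  if (pendingCount === 0) return;
  self.postMessage({ updatedEntities: entityBatch }); // worker-global postMessage
  entityBatch = {};
  pendingCount = 0;
}

const debouncedSendBatch = debounce(sendBatch, DEBOUNCE_DELAY);

function handleUpdate(entityKey: string, data: object): void {
  entityBatch[entityKey] = { ...entityBatch[entityKey], ...data };
  pendingCount += 1;

  if (pendingCount >= MAX_BATCH_SIZE) {
    debouncedSendBatch.cancel(); // drop the pending timer
    sendBatch();                 // flush immediately to cap memory
  } else {
    debouncedSendBatch();
  }
}
```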

client/src/dojo/setup.ts (2)

160-160: Investigate partial sync logic for large datasets.

Repeated calls to getEntities with up to 40,000 records each may cause performance issues if the dataset is large. Consider implementing chunked queries or incremental sync strategies.

Also applies to: 177-177


180-190: Observe logging parameter for setupWorker.

The current call to setupWorker does not provide a logging parameter (defaults to false). If debugging is needed, you might want to pass logging: true. Alternatively, consider controlling logging via env variables for production.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6a47469 and 8d06296.

📒 Files selected for processing (7)
  • client/.env.preview (1 hunks)
  • client/src/dojo/entityWorker.js (1 hunks)
  • client/src/dojo/setup.ts (5 hunks)
  • client/src/dojo/worker.ts (1 hunks)
  • client/src/ui/layouts/World.tsx (2 hunks)
  • client/src/ui/modules/navigation/TopLeftNavigation.tsx (3 hunks)
  • scripts/deploy.sh (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • client/.env.preview
🔇 Additional comments (8)
client/src/ui/modules/navigation/TopLeftNavigation.tsx (1)

319-319: LGTM: Clean integration of ProgressBar component

The ProgressBar is well-positioned within the navigation selector and receives its progress prop correctly.

scripts/deploy.sh (1)

71-71: 🛠️ Refactor suggestion

Verify deployment version and improve error handling

The slot deployment command has been updated to use version v1.0.8. While the change is straightforward, there are several improvements that could be made:

  1. Let's verify if the version exists:

  2. Consider these improvements:

-slot deployments create -t epic sepolia-rc-19 torii --version v1.0.8 --world 0x3dc74e8caadbde233bb750a6608e095daba2891d9784ea0fb7fbf9988948c15 --rpc https://api.cartridge.gg/x/starknet/sepolia --indexing.pending true --config ./torii-sepolia.toml
+# Read world address from environment
+WORLD_ADDRESS="${WORLD_ADDRESS:-0x3dc74e8caadbde233bb750a6608e095daba2891d9784ea0fb7fbf9988948c15}"
+
+# Validate world address format
+if [[ ! $WORLD_ADDRESS =~ ^0x[a-fA-F0-9]+$ ]]; then
+    echo "Error: Invalid world address format"
+    exit 1
+fi
+
+# Create slot deployment with error handling
+if ! slot deployments create -t epic sepolia-rc-19 torii \
+    --version v1.0.8 \
+    --world "$WORLD_ADDRESS" \
+    --rpc https://api.cartridge.gg/x/starknet/sepolia \
+    --indexing.pending true \
+    --config ./torii-sepolia.toml; then
+    echo "Error: Failed to create slot deployment"
+    exit 1
+fi

These changes would:

  • Make the world address configurable via environment variable
  • Add validation for the world address format
  • Add proper error handling for the slot creation
  • Improve readability with multi-line formatting
client/src/ui/layouts/World.tsx (1)

217-230: 🛠️ Refactor suggestion

Add cleanup and status tracking for market subscription

The market subscription lacks cleanup logic and status tracking. Consider:

  1. Adding cleanup on component unmount
  2. Implementing subscription status tracking
  3. Adding retry logic for failed market subscriptions
  useEffect(() => {
+   let isSubscribed = true;
+   let retryCount = 0;
+   const MAX_RETRIES = 3;
+   
    const fetch = async () => {
      try {
        await addMarketSubscription(dojo.network.toriiClient, dojo.network.contractComponents as any, dojo.setup.db);
+       if (isSubscribed) {
+         setMarketLoading(false);
+       }
      } catch (error) {
        console.error("Market fetch failed", error);
+       if (isSubscribed && retryCount < MAX_RETRIES) {
+         retryCount++;
+         console.log(`Retrying market subscription (${retryCount}/${MAX_RETRIES})...`);
+         setTimeout(fetch, 1000 * retryCount);
+       }
      } finally {
-       setMarketLoading(false);
+       if (isSubscribed) {
+         setMarketLoading(false);
+       }
      }
    };

    fetch();
+   
+   return () => {
+     isSubscribed = false;
+     // Add subscription cleanup logic here
+   };
  }, [setMarketLoading]);

Likely invalid or redundant comment.

client/src/dojo/worker.ts (1)

25-29: Ensure resilience to malformed messages.

The current onmessage handler assumes that event.data contains updatedEntities in a valid format. In production, consider validating the shape of event.data for safer usage.

client/src/dojo/entityWorker.js (1)

16-23: Validate incoming message format.

The worker blindly assumes e.data to have the shape { type, entities, logging }. For better stability, add checks or fallback logic (e.g., ignoring unknown message types).

✅ Verification successful

Let me check how the worker is being used to better understand the message flow.


Add validation for message format and type

The worker is only used in one place (worker.ts) and always receives messages with the expected shape. However, it's still good practice to add validation:

self.onmessage = async (e) => {
  if (!e.data || typeof e.data !== 'object') return;

  const { type, entities, logging: logFlag } = e.data;
  if (type !== "update" || !entities?.fetchedEntities || !entities?.data) return;

  logging = logFlag;
  if (logging) console.log("Worker: Received entities update");
  handleUpdate(entities.fetchedEntities, entities.data);
};
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Checking for all usage of "postMessage" across the codebase to confirm it's always sending the expected shape:
rg -A 3 "postMessage\("

Length of output: 679


Script:

#!/bin/bash
# Check the worker initialization and usage
rg -A 5 "new Worker\(" 

# Check the full entityWorker.js file to understand the message handling
cat client/src/dojo/entityWorker.js

Length of output: 1269

client/src/dojo/setup.ts (3)

117-118: The same feedback about implementing try-catch for database operations was provided previously.


229-229: Refine typed interface for database usage.

Exposing the raw indexedDB object can lead to misuse. Consider returning a structured interface or utility methods for the calling code.


11-11: ⚠️ Potential issue

Add error handling for database operations.

In case openDatabase fails or is unavailable in the environment, add a try-catch to recover gracefully or alert the user. A fallback or user notification can mitigate silent failures.

-const indexedDB = await openDatabase();
-await syncEntitiesFromStorage(indexedDB, network.contractComponents as any);
+let indexedDB;
+try {
+  indexedDB = await openDatabase();
+  await syncEntitiesFromStorage(indexedDB, network.contractComponents as any);
+} catch (error) {
+  console.error("Failed to open/sync the database:", error);
+  // Provide fallback or user notification
+}

Likely invalid or redundant comment.

Comment on lines +150 to +155
addToSubscription(
  dojo.network.toriiClient,
  dojo.network.contractComponents as any,
  [...filteredStructures.map((structure) => structure.entity_id.toString())],
  [structureEntityId.toString()],
  dojo.setup.db,
  [{ x: position?.x || 0, y: position?.y || 0 }],

🛠️ Refactor suggestion

Enhance error handling and prevent race conditions

The subscription calls could benefit from more robust error handling and race condition prevention:

  1. Consider implementing retry logic for failed subscriptions
  2. Add cleanup logic to handle subscription cancellation on component unmount
  3. Consider using a more atomic approach to managing subscription state
  const fetch = async () => {
+   const subscriptionPromises = [];
    try {
-     await Promise.all([
-       addToSubscription(
+     subscriptionPromises.push(addToSubscription(
          dojo.network.toriiClient,
          dojo.network.contractComponents as any,
          [structureEntityId.toString()],
          dojo.setup.db,
          [{ x: position?.x || 0, y: position?.y || 0 }],
-       ),
+     ));
-       addToSubscription(
+     subscriptionPromises.push(addToSubscription(
          dojo.network.toriiClient,
          dojo.network.contractComponents as any,
          [ADMIN_BANK_ENTITY_ID.toString()],
          dojo.setup.db,
-       ),
+     ));
+     
+     const results = await Promise.allSettled(subscriptionPromises);
+     results.forEach((result, index) => {
+       if (result.status === 'rejected') {
+         console.error(`Subscription ${index} failed:`, result.reason);
+         // Optionally trigger retry logic here
+       }
+     });
    } catch (error) {
-     console.error("Fetch failed", error);
+     console.error("Subscription setup failed:", error);
+     throw error; // Re-throw to allow parent error boundary to handle
    }
  };

Also applies to: 157-162

Comment on lines +17 to +22
const toriiClient = await createClient({
  rpcUrl: config.rpcUrl,
  toriiUrl: config.toriiUrl,
  relayUrl: config.relayUrl,
  worldAddress: config.worldAddress,
});

⚠️ Potential issue

Handle potential errors from createClient.

Since createClient is an async call, it might reject on network or configuration issues. Wrapping it in a try-catch or providing error-handling will improve resilience.

-const toriiClient = await createClient({
-  rpcUrl: config.rpcUrl,
-  toriiUrl: config.toriiUrl,
-  relayUrl: config.relayUrl,
-  worldAddress: config.worldAddress,
-});
+let toriiClient;
+try {
+  toriiClient = await createClient({
+    rpcUrl: config.rpcUrl,
+    toriiUrl: config.toriiUrl,
+    relayUrl: config.relayUrl,
+    worldAddress: config.worldAddress,
+  });
+} catch (error) {
+  console.error("Failed to create torii client:", error);
+  // Possible fallback or user notification
+}

Comment on lines +51 to +56
  return {
    cancel: () => {
      worker.terminate();
    },
  };
}

💡 Codebase verification

Memory leak: Subscriptions not being cleaned up

The review comment is correct. Looking at the implementation in worker.ts, the setupWorker function creates two subscriptions using toriiClient.onEntityUpdated and toriiClient.onEventMessageUpdated, but the returned cancel method only terminates the worker without cleaning up these subscriptions. This will lead to memory leaks as the subscriptions will continue to exist even after the worker is terminated.

  • The subscriptions should be stored and cleaned up in the cancel method:
  const entitySub = await toriiClient.onEntityUpdated(...);
  const eventSub = await toriiClient.onEventMessageUpdated(...);
  
  return {
    cancel: () => {
      worker.terminate();
      entitySub.unsubscribe(); // Add this
      eventSub.unsubscribe();  // Add this
    },
  };
🔗 Analysis chain

Remember to cancel the subscriptions.

The returned object’s cancel method terminates the worker, but the toriiClient subscriptions remain active. Consider storing and canceling subscriptions as well to fully clean up resources.


Let me check the implementation of the worker setup to understand how these subscriptions are handled.


Let me try a different approach to check the worker implementation.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Verify references to onEntityUpdated/onEventMessageUpdated subscriptions to see if they are being tracked for cancellation elsewhere.
rg -A 3 "onEntityUpdated|onEventMessageUpdated"

Length of output: 1716


Script:

#!/bin/bash
# Check the full implementation of the worker setup
ast-grep --pattern 'export function setupWorker($_) {
  $$$
}'

Length of output: 64


Script:

#!/bin/bash
# Check the content of the worker file
cat client/src/dojo/worker.ts

Length of output: 1829

Comment on lines +5 to +11
export async function setupWorker(
  config: { rpcUrl: string; toriiUrl: string; relayUrl: string; worldAddress: string },
  components: Component<Schema, Metadata, undefined>[],
  entityKeyClause: EntityKeysClause[],
  historical: boolean,
  logging = false,
) {

💡 Codebase verification

Based on the code analysis, I can now provide the final response:

Add runtime validation for config object properties

The setupWorker function passes the config directly to createClient without any validation. While TypeScript provides compile-time type checking, adding runtime validation would prevent potential issues in production:

  • The function should validate that all required properties (rpcUrl, toriiUrl, relayUrl, worldAddress) are non-empty strings before creating the client
  • This is particularly important as these values are used to establish critical connections for the worker setup

Example locations needing validation:

  • client/src/dojo/worker.ts:5-11: Add validation before the createClient call
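
A minimal runtime check of that shape (a sketch; the error strategy is an assumption):

```ts
// Sketch: fail fast on missing or empty connection settings before createClient.
type WorkerConfig = { rpcUrl: string; toriiUrl: string; relayUrl: string; worldAddress: string };

function assertValidConfig(config: WorkerConfig): void {
  const fields: (keyof WorkerConfig)[] = ["rpcUrl", "toriiUrl", "relayUrl", "worldAddress"];
  for (const field of fields) {
    if (typeof config[field] !== "string" || config[field].length === 0) {
      throw new Error(`setupWorker: missing or empty config value for "${field}"`);
    }
  }
}
```
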
🔗 Analysis chain

Validate configuration object structure.

The function expects config properties: rpcUrl, toriiUrl, relayUrl, and worldAddress. Consider adding runtime checks or TypeScript type validations to ensure all properties are provided and correctly typed. This helps prevent runtime errors if any configuration field is missing or malformed.


Let me check the implementation of setupWorker to see if there are any runtime validations already in place.


Let me try a different approach to check the implementation.


Let me check how the config is used in the worker implementation.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Search for other references to `setupWorker` calls to verify that the `config` passed in meets the expected shape.
rg -A 5 "setupWorker\("

Length of output: 786


Script:

#!/bin/bash
# Check the implementation of setupWorker for any existing validation
ast-grep --pattern 'export async function setupWorker($_) {
  $$$
}'

Length of output: 70


Script:

#!/bin/bash
# Get the content of the worker.ts file to check the implementation
cat client/src/dojo/worker.ts

Length of output: 1829


Script:

#!/bin/bash
# Check the content of entityWorker.js to see how config is handled
cat client/src/dojo/entityWorker.js

Length of output: 848

Comment on lines +25 to +31
function handleUpdate(fetchedEntities, data) {
  entityBatch[fetchedEntities] = {
    ...entityBatch[fetchedEntities],
    ...data,
  };
  debouncedSendBatch();
}

🛠️ Refactor suggestion

Ensure uniqueness of dictionary key for fetchedEntities.

Using fetchedEntities directly as a property name can be risky if the object is not a string or has unexpected formatting. Consider converting it into a string or generating a unique key to avoid collisions or unexpected indexing.

-function handleUpdate(fetchedEntities, data) {
-  entityBatch[fetchedEntities] = {
-    ...entityBatch[fetchedEntities],
-    ...data,
-  };
+function handleUpdate(fetchedEntities, data) {
+  const entityKey = String(fetchedEntities);
+  entityBatch[entityKey] = {
+    ...entityBatch[entityKey],
+    ...data,
+  };

  };

  fetch();
}, [structureEntityId, subscriptions, setWorldLoading, setSubscriptions]);
Collaborator commented:
remove subscriptions from here

}, [filteredStructures, setWorldLoading, setSubscriptions]);

// Handle market subscription
useEffect(() => {
Collaborator commented:
setMarketLoading(true)

),
addToSubscriptionTwoKeyModelbyRealmEntityId(
addToSubscription(
Collaborator commented:
can you move this to the same use effect as the market, this should 100% only run once and is not dependent on structureEntityId

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

♻️ Duplicate comments (2)
client/src/dojo/setup.ts (2)

145-146: ⚠️ Potential issue

Add error handling for database operations

The database initialization lacks error handling which could lead to silent failures. Consider wrapping the operations in try-catch blocks and implementing a cleanup mechanism.


259-259: 🛠️ Refactor suggestion

Improve type safety and encapsulation of the database instance

Directly exposing the database instance could lead to misuse. Consider creating a typed interface for database operations.

🧹 Nitpick comments (3)
client/src/dojo/setup.ts (3)

20-72: Remove commented out code

Instead of keeping commented out code, consider removing it entirely. The old implementation can always be retrieved from the Git history if needed. This will improve code maintainability and readability.


155-155: Extract magic number to a named constant

The timeout value 40_000 should be extracted to a named constant for better maintainability and documentation.

+const ENTITY_QUERY_TIMEOUT = 40_000;
...
-      40_000,
+      ENTITY_QUERY_TIMEOUT,

Line range hint 74-261: Consider splitting setup into smaller, focused modules

The setup function has grown to handle multiple concerns: network setup, database initialization, entity synchronization, and worker setup. Consider splitting these into separate modules for better maintainability and testing:

  1. setupDatabase.ts - Database initialization and configuration
  2. setupSync.ts - Entity synchronization and worker setup
  3. setupNetwork.ts - Network configuration (already exists)

This would make the code more modular and easier to maintain.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8d06296 and 7a7b60d.

📒 Files selected for processing (1)
  • client/src/dojo/setup.ts (3 hunks)

Comment on lines 161 to 207
// getEntities(
//   network.toriiClient,
//   {
//     Keys: {
//       keys: [undefined],
//       pattern_matching: "FixedLen",
//       models: [
//         "s0_eternum-AddressName",
//         "s0_eternum-Realm",
//         "s0_eternum-PopulationConfig",
//         "s0_eternum-CapacityConfig",
//         "s0_eternum-ProductionConfig",
//         "s0_eternum-RealmLevelConfig",
//         "s0_eternum-BankConfig",
//         "s0_eternum-Bank",
//         "s0_eternum-Trade",
//         "s0_eternum-Army",
//         "s0_eternum-Structure",
//         "s0_eternum-Battle",
//         "s0_eternum-EntityOwner",
//       ],
//     },
//   },
//   network.contractComponents as any,
//   [],
//   [],
//   40_000,
//   false,
//   { dbConnection: indexedDB, timestampCacheKey: "single_keyed_query" },
// ),

// getEntities(
//   network.toriiClient,
//   {
//     Keys: {
//       keys: [undefined, undefined],
//       pattern_matching: "FixedLen",
//       models: ["s0_eternum-CapacityConfigCategory", "s0_eternum-ResourceCost"],
//     },
//   },
//   network.contractComponents as any,
//   [],
//   [],
//   40_000,
//   false,
//   { dbConnection: indexedDB, timestampCacheKey: "double_keyed_query" },
// )

💡 Codebase verification

Based on the evidence gathered, I can now provide a final response:

Remove commented entity queries

These queries appear to be obsolete as:

  • The queries and their associated timestamp cache keys (single_keyed_query and double_keyed_query) are not used anywhere else in the codebase
  • While the same models are referenced in active code within the same file, they are queried in a different structure, suggesting these commented queries represent an old/obsolete querying pattern
  • There's no explanatory TODO comment indicating they are temporarily disabled

The commented code should be removed to maintain code cleanliness.

🔗 Analysis chain

Verify if commented queries are still needed

There are two commented out entity queries. If these are no longer needed, they should be removed. If they are needed but temporarily disabled, add a TODO comment explaining why they are commented out and when they should be re-enabled.


Let me check if these queries are used in a different form or if there are similar patterns being used actively in the codebase.


Let me check one more thing to see if these models are being queried in a different way or if there's any recent change history around these queries.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if these queries are used elsewhere in the codebase
rg "single_keyed_query|double_keyed_query" --type ts

Length of output: 262


Script:

#!/bin/bash
# Check for similar query patterns and entity models being used
rg "getEntities.*network\.toriiClient" -A 5 --type ts

# Also check for these specific models being queried elsewhere
rg "s0_eternum-(AddressName|Realm|PopulationConfig|CapacityConfig|ProductionConfig|RealmLevelConfig|BankConfig|Bank|Trade|Army|Structure|Battle|EntityOwner)" --type ts

Length of output: 2389


Script:

#!/bin/bash
# Look for the active usage of these models in the codebase
rg "Keys.*pattern_matching.*FixedLen" -B 5 -A 5 --type ts

# Check recent git history for these files
git log -n 5 -- client/src/dojo/setup.ts

Length of output: 59

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 4

🔭 Outside diff range comments (2)
client/.env.preview (2)

Line range hint 1-2: ⚠️ Critical Security Risk: Exposed Private Keys

The master address and private key should not be committed to version control, even in preview environments. This poses a significant security risk.

Consider:

  1. Using environment-specific key management services
  2. Moving sensitive credentials to a secure secret management system
  3. Adding these lines to .gitignore
🧰 Tools
🪛 Gitleaks (8.21.2)

5-5: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


Line range hint 26-30: Verify placeholder URLs

Multiple URLs are set to placeholder values (https://abc.com). Please ensure these are updated with the correct production URLs before deploying to mainnet:

  • VITE_PUBLIC_ARK_MARKETPLACE_API
  • VITE_PUBLIC_IMAGE_CDN_URL
  • VITE_PUBLIC_IMAGE_PROXY_URL
  • VITE_PUBLIC_IPFS_GATEWAY
🧰 Tools
🪛 Gitleaks (8.21.2)

5-5: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

♻️ Duplicate comments (1)
client/src/dojo/entityWorker.js (1)

24-30: ⚠️ Potential issue

Add input validation and prevent prototype pollution.

The update handler needs several improvements:

  1. Validate input parameters
  2. Use a safer method for object property access
  3. Ensure key uniqueness (as previously mentioned)
 function handleUpdate(fetchedEntities, data) {
+  if (!fetchedEntities || !data) {
+    throw new Error('Missing required parameters');
+  }
+
+  const entityKey = String(fetchedEntities);
+
+  // Use Object.create(null) to prevent prototype pollution
+  if (!entityBatch[entityKey]) {
+    entityBatch[entityKey] = Object.create(null);
+  }
+
-  entityBatch[fetchedEntities] = {
-    ...entityBatch[fetchedEntities],
+  entityBatch[entityKey] = {
+    ...entityBatch[entityKey],
     ...data,
   };
+
   debouncedSendBatch();
 }
🧹 Nitpick comments (2)
client/src/dojo/indexedDB.ts (1)

8-31: Ensure database versioning strategy is robust.
Currently, the database is upgraded only when the DB_VERSION is incremented. If future migrations require changes to the “entities” schema, you may need more sophisticated logic within “onupgradeneeded” to handle schema evolution (e.g., splitting data into multiple stores, adding indexes). Consider adding checks or versioning strategies to allow for controlled migrations.
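
One controlled-migration sketch, keyed off event.oldVersion (store and index names are assumptions, not the PR's schema):

```ts
// Sketch: incremental upgrades so each schema change runs exactly once.
function openWithMigrations(name: string, version: number): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open(name, version);

    request.onupgradeneeded = (event) => {
      const db = request.result;
      if (event.oldVersion < 1) {
        db.createObjectStore("entities", { keyPath: "id" });
      }
      if (event.oldVersion < 2) {
        // Later migration example: add an index without recreating the store.
        request.transaction!.objectStore("entities").createIndex("byModel", "model", { unique: false });
      }
    };

    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}
```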

client/src/dojo/entityWorker.js (1)

1-5: Consider improving global state management.

The current implementation uses mutable global variables which could lead to unexpected behavior in a worker context. Consider these improvements:

  1. Make DEBOUNCE_DELAY configurable via worker initialization message
  2. Consider using a more structured state management approach
 import { debounce } from "lodash";

-let entityBatch = {};
-let logging = false;
-const DEBOUNCE_DELAY = 1000;
+class EntityWorkerState {
+  constructor(config = {}) {
+    this.entityBatch = {};
+    this.logging = false;
+    this.debounceDelay = config.debounceDelay || 1000;
+  }
+}
+
+const state = new EntityWorkerState();
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7a7b60d and 05a4b7e.

📒 Files selected for processing (3)
  • client/.env.preview (1 hunks)
  • client/src/dojo/entityWorker.js (1 hunks)
  • client/src/dojo/indexedDB.ts (1 hunks)
🔇 Additional comments (3)
client/src/dojo/indexedDB.ts (1)

70-104: Consolidate transaction-level error handling to capture partial successes.

This mirrors a past review comment: partial successes can be tricky in large batch inserts, where one record fails while others succeed. It may be beneficial to track which IDs fail to persist and return this information to upper layers, offering more transparent error reporting and improved debugging.
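For instance, a sketch of a batch write that records failing ids while letting the remaining puts commit; the record shape (an id plus arbitrary fields) is assumed from the read path:

// Sketch: per-record error handlers keep one bad record from aborting the
// whole "readwrite" transaction, and the failed ids are surfaced to callers.
function putEntities(
  db: IDBDatabase,
  entities: Array<{ id: string } & Record<string, unknown>>,
): Promise<{ failedIds: string[] }> {
  return new Promise((resolve, reject) => {
    const failedIds: string[] = [];
    const tx = db.transaction(["entities"], "readwrite");
    const store = tx.objectStore("entities");

    for (const entity of entities) {
      const req = store.put(entity);
      req.onerror = (event) => {
        failedIds.push(entity.id);
        event.preventDefault(); // don't abort the transaction for one bad record
        event.stopPropagation(); // keep the error from bubbling to tx.onerror
      };
    }

    tx.oncomplete = () => resolve({ failedIds });
    tx.onabort = () => reject(tx.error);
  });
}

Callers can then log or retry the returned failedIds instead of guessing which records were lost.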

client/.env.preview (2)

Line range hint 32-33: Verify fee recipient configuration

Since this is a mainnet configuration, please ensure the VRF provider and client fee recipient addresses are correct as these will handle real transactions.

✅ Verification successful

Let me gather more context about these addresses by checking their usage and configuration patterns.


Let me try one more search to get the surrounding context of these addresses with a different approach.


Fee recipient and VRF provider addresses appear to be correctly configured

Based on the verification results:

  • The fee recipient address (0x045c...) is consistently used as a burner address across the SDK constants and bridge configuration
  • The VRF provider address (0x051f...) is properly configured in the starknet provider context for random number generation
  • Both addresses are used in appropriate contexts with clear documentation and purpose
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for fee recipient address usage

# Look for fee recipient address references
echo "Checking fee recipient usage..."
rg "0x045c587318c9ebcf2fbe21febf288ee2e3597a21cd48676005a5770a50d433c5"

# Look for VRF provider address references
echo "Checking VRF provider usage..."
rg "0x051fea4450da9d6aee758bdeba88b2f665bcbf549d2c61421aa724e9ac0ced8f"

Length of output: 695


Script:

#!/bin/bash
# Check for any configuration files and environment files that might contain these addresses
echo "Checking configuration files..."
fd -e json -e env -e yaml -e toml -X grep -l "0x045c587318c9ebcf2fbe21febf288ee2e3597a21cd48676005a5770a50d433c5\|0x051fea4450da9d6aee758bdeba88b2f665bcbf549d2c61421aa724e9ac0ced8f"

# Check the context around these addresses in the found files
echo -e "\nChecking context in source files..."
rg -B 2 -A 2 "0x045c587318c9ebcf2fbe21febf288ee2e3597a21cd48676005a5770a50d433c5\|0x051fea4450da9d6aee758bdeba88b2f665bcbf549d2c61421aa724e9ac0ced8f"

Length of output: 495


Script:

#!/bin/bash
# Check the context around the fee recipient in the SDK constants
echo "Checking SDK constants context..."
rg -B 5 -A 5 "VELORDS_FEE_RECIPIENT" sdk/packages/eternum/src/constants/global.ts

# Check the context of the VRF provider in the starknet provider file
echo -e "\nChecking Starknet provider context..."
rg -B 5 -A 5 "0x051fea4450da9d6aee758bdeba88b2f665bcbf549d2c61421aa724e9ac0ced8f" client/src/hooks/context/starknet-provider.tsx

# Check config context
echo -e "\nChecking config context..."
rg -B 5 -A 5 "velords_fee_recipient" config/index.ts

Length of output: 2473



6-7: Verify mainnet configuration consistency

The transition to mainnet involves multiple related configuration changes. Please ensure all components are properly configured for mainnet:

  • TORII endpoint: mainnet-cache-2
  • Node URL: mainnet
  • TORII relay: mainnet-cache-2
  • Chain and slot: mainnet

Also applies to: 12-12, 17-18

✅ Verification successful

Let me try a different approach to verify the configuration consistency.


Configuration appears consistent for mainnet deployment

Based on the codebase analysis, the mainnet configuration is consistently implemented across the relevant files:

  • The TORII endpoint (mainnet-cache-2) and Node URL (mainnet) in .env.preview align with the mainnet configuration used throughout the codebase
  • Chain configuration is properly handled through environment variables and correctly switches between mainnet and other environments
  • The Starknet provider is properly configured to use mainnet chains when VITE_PUBLIC_CHAIN === "mainnet"
  • No inconsistent or conflicting sepolia configurations remain in the production paths
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify consistent mainnet configuration across the codebase

# Check for any remaining sepolia references
echo "Checking for any remaining sepolia references..."
rg -i "sepolia" --type ts --type tsx --type json

# Check for consistent mainnet-cache-2 usage
echo "Checking mainnet-cache-2 configuration consistency..."
rg -i "mainnet-cache-[0-9]+" --type ts --type tsx --type json

# Look for any environment loading patterns
echo "Checking environment variable usage..."
ast-grep --pattern 'process.env.VITE_PUBLIC_CHAIN' 
ast-grep --pattern 'process.env.VITE_PUBLIC_TORII'

Length of output: 581


Script:

#!/bin/bash
# Search for environment files and their content
echo "Checking all .env files..."
fd -e env -x cat {}

# Search for configuration related terms
echo "\nChecking configuration patterns..."
rg -i "(mainnet|sepolia|goerli|testnet)" --type=ts --type=js --type=json

# Search for URL patterns in the codebase
echo "\nChecking URL patterns..."
rg -i "(api\.cartridge\.gg|torii)" --type=ts --type=js --type=json

Length of output: 12319

Comment on lines +37 to +58
return new Promise((resolve, reject) => {
  const transaction = dbConnection.transaction(["entities"], "readonly");
  const store = transaction.objectStore("entities");
  const request = store.getAll();

  request.onsuccess = () => {
    const entities = request.result;

    const CHUNK_SIZE = 50000;

    // Process entities in chunks
    for (let i = 0; i < entities.length; i += CHUNK_SIZE) {
      const chunk = entities.slice(i, i + CHUNK_SIZE);
      const chunkMap: Entities = {};

      for (const entity of chunk) {
        const { id, ...data } = entity;
        chunkMap[id] = data;
      }

      setEntities(chunkMap, components, false);
    }

🛠️ Refactor suggestion

Consider streaming or asynchronous iteration for large datasets.
The store.getAll() call materializes every record in memory before chunking even begins, so peak memory grows with the full table size regardless of CHUNK_SIZE. If the dataset grows significantly, consider streaming or an asynchronous iteration approach (e.g., openCursor) to bound memory usage.
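A sketch of the cursor-based alternative: it flushes small chunks as it iterates, so peak memory is bounded by the chunk size rather than the table size. The chunk size and the flush callback wiring are illustrative; a caller would pass something like (chunk) => setEntities(chunk, components, false).

// Sketch: iterate with openCursor and flush bounded chunks synchronously
// (an awaited flush would let the read transaction auto-commit mid-scan).
function hydrateFromDB(db: IDBDatabase, flush: (chunk: Record<string, unknown>) => void): Promise<void> {
  const CHUNK_SIZE = 5000; // illustrative; tune against real data
  return new Promise((resolve, reject) => {
    let chunk: Record<string, unknown> = {};
    let count = 0;

    const request = db.transaction(["entities"], "readonly").objectStore("entities").openCursor();

    request.onsuccess = () => {
      const cursor = request.result;
      if (!cursor) {
        if (count > 0) flush(chunk); // final partial chunk
        return resolve();
      }
      const { id, ...data } = cursor.value;
      chunk[id] = data;
      if (++count === CHUNK_SIZE) {
        flush(chunk);
        chunk = {};
        count = 0;
      }
      cursor.continue();
    };
    request.onerror = () => reject(request.error);
  });
}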

Comment on lines +106 to +114
async function clearCache() {
  Object.keys(localStorage)
    .filter((x) => x.endsWith("_query"))
    .forEach((x) => localStorage.removeItem(x));

  indexedDB.deleteDatabase(DB_NAME);
  location.reload();
}


🛠️ Refactor suggestion

Reevaluate forcing page reload after cache removal.
Calling “location.reload()” may disrupt users if it happens unexpectedly, for instance when the cache is cleared by a background process rather than an explicit user action. Depending on your application’s workflow, consider a more graceful approach, such as notifying the user that a reload is needed, or reinitializing the UI after clearing the cache.
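One possible shape, reusing the database name from indexedDB.ts; the event name and the decision to let the UI prompt for a reload are illustrative assumptions:

// Sketch: clear caches without forcing a reload, then let the UI decide
// how and when to surface it (e.g., a "Reload to apply" toast).
async function clearCache(dbName: string): Promise<void> {
  Object.keys(localStorage)
    .filter((key) => key.endsWith("_query"))
    .forEach((key) => localStorage.removeItem(key));

  await new Promise<void>((resolve, reject) => {
    const request = indexedDB.deleteDatabase(dbName);
    request.onsuccess = () => resolve();
    request.onerror = () => reject(request.error);
    request.onblocked = () => resolve(); // deletion finishes once open connections close
  });

  window.dispatchEvent(new CustomEvent("cache-cleared")); // hypothetical event name
}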

Comment on lines +7 to +13
let debouncedSendBatch = debounce(() => {
  if (Object.keys(entityBatch).length > 0) {
    console.log("Worker: Sending batch", entityBatch);
    self.postMessage({ updatedEntities: entityBatch });
    entityBatch = {};
  }
}, DEBOUNCE_DELAY);

🛠️ Refactor suggestion

Add error handling and consistent logging.

The batch sending logic needs improvement in error handling and logging consistency:

  1. Console.log should respect the logging flag
  2. Add error handling for postMessage
  3. Consider using const for the debounced function
-let debouncedSendBatch = debounce(() => {
+const debouncedSendBatch = debounce(() => {
   if (Object.keys(entityBatch).length > 0) {
-    console.log("Worker: Sending batch", entityBatch);
+    if (logging) console.log("Worker: Sending batch", entityBatch);
+    try {
       self.postMessage({ updatedEntities: entityBatch });
       entityBatch = {};
+    } catch (error) {
+      if (logging) console.error("Worker: Failed to send batch", error);
+    }
   }
 }, DEBOUNCE_DELAY);

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +15 to +22
self.onmessage = async (e) => {
  const { type, entities, logging: logFlag } = e.data;
  if (type === "update") {
    logging = logFlag;
    if (logging) console.log("Worker: Received entities update");
    handleUpdate(entities.fetchedEntities, entities.data);
  }
};

⚠️ Potential issue

Add input validation and error handling for messages.

The message handler needs proper validation and error handling:

  1. Remove unnecessary async since there are no await operations
  2. Add validation for required message properties
  3. Add error handling for malformed messages
-self.onmessage = async (e) => {
+self.onmessage = (e) => {
+  try {
+    if (!e.data || typeof e.data !== 'object') {
+      throw new Error('Invalid message format');
+    }
+
     const { type, entities, logging: logFlag } = e.data;
+
+    if (!type || !entities) {
+      throw new Error('Missing required message properties');
+    }
+
     if (type === "update") {
       logging = logFlag;
       if (logging) console.log("Worker: Received entities update");
       handleUpdate(entities.fetchedEntities, entities.data);
     }
+  } catch (error) {
+    if (logging) console.error("Worker: Message processing error", error);
+    self.postMessage({ error: error.message });
+  }
 };

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
client/src/ui/layouts/World.tsx (1)

206-220: 🛠️ Refactor suggestion

Add cleanup and improve market subscription handling

The market subscription needs similar improvements for proper cleanup and error handling.

Consider this improved implementation:

  useEffect(() => {
+   let isSubscribed = true;
+   setMarketLoading(true);  // Moved outside async function as per previous review

    const fetch = async () => {
-     setMarketLoading(true);
      try {
-       await addMarketSubscription(dojo.network.toriiClient, dojo.network.contractComponents as any, dojo.setup.db);
+       const result = await addMarketSubscription(
+         dojo.network.toriiClient,
+         dojo.network.contractComponents as any,
+         dojo.setup.db
+       );
+       if (!isSubscribed) return;
      } catch (error) {
        console.error("Market fetch failed", error);
      } finally {
-       setMarketLoading(false);
+       if (isSubscribed) {
+         setMarketLoading(false);
+       }
      }
    };

    fetch();
+   
+   return () => {
+     isSubscribed = false;
+   };
  }, []);
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 05a4b7e and 2b97511.

📒 Files selected for processing (1)
  • client/src/ui/layouts/World.tsx (3 hunks)
🔇 Additional comments (2)
client/src/ui/layouts/World.tsx (2)

Line range hint 119-167: Enhance subscription management and error handling

The subscription management could be improved in several ways:

  1. Missing cleanup function to handle component unmount
  2. Basic error handling without retry mechanism
  3. Potential race condition with multiple setState calls

Consider implementing this improved version:

  useEffect(() => {
    if (
      !structureEntityId ||
      subscriptions[structureEntityId.toString()] ||
      subscriptions[ADMIN_BANK_ENTITY_ID.toString()] ||
      structureEntityId === 999999999
    ) {
      return;
    }

+   let isSubscribed = true;
    const position = getComponentValue(
      dojo.setup.components.Position,
      getEntityIdFromKeys([BigInt(structureEntityId)]),
    );

    setWorldLoading(true);
-   setSubscriptions((prev) => ({
-     ...prev,
-     [structureEntityId.toString()]: true,
-     [ADMIN_BANK_ENTITY_ID.toString()]: true,
-   }));

    const fetch = async () => {
+     const subscriptionPromises = [];
      try {
-       await Promise.all([
-         addToSubscription(
+       subscriptionPromises.push(addToSubscription(
          dojo.network.toriiClient,
          dojo.network.contractComponents as any,
          [structureEntityId.toString()],
          dojo.setup.db,
          [{ x: position?.x || 0, y: position?.y || 0 }],
-         ),
+       ));

-         addToSubscription(
+       subscriptionPromises.push(addToSubscription(
          dojo.network.toriiClient,
          dojo.network.contractComponents as any,
          [ADMIN_BANK_ENTITY_ID.toString()],
          dojo.setup.db,
-         ),
+       ));

+       const results = await Promise.allSettled(subscriptionPromises);
+       
+       if (isSubscribed) {
+         const successfulSubscriptions = {};
+         results.forEach((result, index) => {
+           if (result.status === 'fulfilled') {
+             const entityId = index === 0 ? structureEntityId.toString() : ADMIN_BANK_ENTITY_ID.toString();
+             successfulSubscriptions[entityId] = true;
+           } else {
+             console.error(`Subscription ${index} failed:`, result.reason);
+           }
+         });
+         
+         setSubscriptions(prev => ({
+           ...prev,
+           ...successfulSubscriptions
+         }));
+       }
      } catch (error) {
        console.error("Fetch failed", error);
      } finally {
-       setWorldLoading(false);
+       if (isSubscribed) {
+         setWorldLoading(false);
+       }
      }
    };

    fetch();
+   
+   return () => {
+     isSubscribed = false;
+   };
  }, [structureEntityId, subscriptions]);

6-6: Verify consistent subscription implementation across components

Let's verify that the subscription pattern is consistently implemented across all components using these functions.

Comment on lines +169 to +204
// Handle filtered structures subscription
useEffect(() => {
  if (filteredStructures.length === 0) return;

  setWorldLoading(true);
  setSubscriptions((prev) => ({
    ...prev,
    ...Object.fromEntries(filteredStructures.map((structure) => [structure.entity_id.toString(), true])),
  }));

  const fetch = async () => {
    try {
      await Promise.all([
        addToSubscriptionOneKeyModelbyRealmEntityId(
          dojo.network.toriiClient,
          dojo.network.contractComponents as any,
          [structureEntityId.toString()],
          [{ x: position?.x || 0, y: position?.y || 0 }],
          filteredStructures.map((structure) => structure.entity_id.toString()),
          dojo.setup.db,
        ),
        addToSubscription(dojo.network.toriiClient, dojo.network.contractComponents as any, [
          ADMIN_BANK_ENTITY_ID.toString(),
        ]),
        addToSubscription(
          dojo.network.toriiClient,
          dojo.network.contractComponents as any,
          [...filteredStructures.map((structure) => structure.entity_id.toString())],
          [...filteredStructures.map((structure) => ({ x: structure.position.x, y: structure.position.y }))],
          filteredStructures.map((structure) => structure.entity_id.toString()),
          dojo.setup.db,
          filteredStructures.map((structure) => ({ x: structure.position.x, y: structure.position.y })),
        ),
        addMarketSubscription(dojo.network.toriiClient, dojo.network.contractComponents as any),
      ]);
    } catch (error) {
      console.error("Fetch failed", error);
    } finally {
      setWorldLoading(false);
    }
  };

  fetch();
}, [filteredStructures]);

🛠️ Refactor suggestion

Consolidate subscription logic and prevent memory leaks

The filtered structures subscription has similar issues to the initial subscription and introduces additional complexity with duplicate subscription calls.

Consider consolidating the subscription logic and implementing proper cleanup:

  useEffect(() => {
    if (filteredStructures.length === 0) return;

+   let isSubscribed = true;
    setWorldLoading(true);
-   setSubscriptions((prev) => ({
-     ...prev,
-     ...Object.fromEntries(filteredStructures.map((structure) => [structure.entity_id.toString(), true])),
-   }));

    const fetch = async () => {
+     const entityIds = filteredStructures.map(structure => structure.entity_id.toString());
+     const positions = filteredStructures.map(structure => ({ 
+       x: structure.position.x, 
+       y: structure.position.y 
+     }));

      try {
-       await Promise.all([
-         addToSubscriptionOneKeyModelbyRealmEntityId(
+       const results = await Promise.allSettled([
+         addToSubscriptionOneKeyModelbyRealmEntityId(
            dojo.network.toriiClient,
            dojo.network.contractComponents as any,
-           filteredStructures.map((structure) => structure.entity_id.toString()),
+           entityIds,
            dojo.setup.db,
          ),
-         addToSubscription(
+         addToSubscription(
            dojo.network.toriiClient,
            dojo.network.contractComponents as any,
-           filteredStructures.map((structure) => structure.entity_id.toString()),
+           entityIds,
            dojo.setup.db,
-           filteredStructures.map((structure) => ({ x: structure.position.x, y: structure.position.y })),
+           positions,
          ),
        ]);

+       if (isSubscribed) {
+         const successfulSubscriptions = {};
+         results.forEach((result, index) => {
+           if (result.status === 'fulfilled') {
+             entityIds.forEach(id => {
+               successfulSubscriptions[id] = true;
+             });
+           } else {
+             console.error(`Batch subscription ${index} failed:`, result.reason);
+           }
+         });
+         
+         setSubscriptions(prev => ({
+           ...prev,
+           ...successfulSubscriptions
+         }));
+       }
      } catch (error) {
        console.error("Fetch failed", error);
      } finally {
-       setWorldLoading(false);
+       if (isSubscribed) {
+         setWorldLoading(false);
+       }
      }
    };

    fetch();
+   
+   return () => {
+     isSubscribed = false;
+   };
  }, [filteredStructures]);
