feat: abci state sync #2413
base: v1.8-dev
Conversation
Walkthrough

The pull request introduces comprehensive state synchronization capabilities to the Dash Drive blockchain platform. It adds new configuration structures, error handling mechanisms, and application traits to support snapshot management. The changes span multiple packages.
Sequence Diagram

sequenceDiagram
    participant Client
    participant StateSourceApp
    participant SnapshotManager
    participant GroveDB
    Client->>StateSourceApp: list_snapshots()
    StateSourceApp->>SnapshotManager: get_snapshots()
    SnapshotManager->>GroveDB: Retrieve snapshots
    GroveDB-->>SnapshotManager: Return snapshot list
    SnapshotManager-->>StateSourceApp: Return snapshots
    StateSourceApp-->>Client: Respond with snapshots
    Client->>StateSourceApp: offer_snapshot()
    StateSourceApp->>SnapshotManager: Create new snapshot session
    SnapshotManager->>GroveDB: Prepare for state sync
    Client->>StateSourceApp: apply_snapshot_chunk()
    StateSourceApp->>SnapshotManager: Apply chunk
    SnapshotManager->>GroveDB: Update state
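To make the flow above concrete, here is a minimal, self-contained Rust sketch of a syncing client driving the same three calls. The Snapshot struct, StateSourceLike trait, and sync_from_peer function are illustrative stand-ins, not the actual tenderdash-abci or Drive interfaces.

#[derive(Clone, Debug)]
struct Snapshot {
    height: u64,
}

trait StateSourceLike {
    fn list_snapshots(&self) -> Vec<Snapshot>;
    fn offer_snapshot(&mut self, snapshot: &Snapshot) -> Result<(), String>;
    fn apply_snapshot_chunk(&mut self, chunk: &[u8]) -> Result<(), String>;
}

fn sync_from_peer<A: StateSourceLike>(app: &mut A, chunks: &[Vec<u8>]) -> Result<(), String> {
    // Pick the newest snapshot advertised by the peer.
    let snapshot = app
        .list_snapshots()
        .into_iter()
        .max_by_key(|s| s.height)
        .ok_or_else(|| "peer has no snapshots".to_string())?;
    // Accept the offer, then stream and apply chunks until the state is complete.
    app.offer_snapshot(&snapshot)?;
    for chunk in chunks {
        app.apply_snapshot_chunk(chunk)?;
    }
    Ok(())
}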
Actionable comments posted: 12
🧹 Nitpick comments (19)
packages/rs-drive-abci/src/abci/handler/load_snapshot_chunk.rs (2)
27-30: Include detailed error information when snapshot retrieval fails

Currently, the error handling when retrieving the snapshot at the specified height discards the original error information. Including the underlying error message can provide more context for debugging. Apply this diff to include the original error details:

- .map_err(|_| AbciError::StateSyncInternalError("load_snapshot_chunk failed".to_string()))?
+ .map_err(|e| AbciError::StateSyncInternalError(format!("load_snapshot_chunk failed: {}", e)))?
29-30: Differentiate between not found and other errors

When the snapshot is not found, the code returns a generic StateSyncInternalError. It would be more appropriate to return an error indicating that the snapshot was not found, to distinguish it from other internal errors. Apply this diff to specify a not-found error:

 .ok_or_else(|| {
-    AbciError::StateSyncInternalError("load_snapshot_chunk failed".to_string())
+    AbciError::StateSyncBadRequest("Snapshot not found at the given height".to_string())
 })?;

packages/rs-drive-abci/src/abci/app/state_source.rs (1)
86-89: Include error details in gRPC Status for better debugging

When mapping errors to tonic::Status, consider including more detailed error messages to aid in debugging. Apply this diff to include the error source:

.map_err(|e| tonic::Status::internal(format!("list_snapshots failed: {:?}", e)))

Repeat this pattern for other error mappings in the file.
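If the same mapping ends up repeated across handlers, a small helper can keep the messages consistent. A minimal sketch, assuming the crate already depends on tonic; the internal_status name is made up for illustration:

/// Builds a closure that maps any debuggable error into an internal gRPC status
/// with a fixed context prefix, e.g. `.map_err(internal_status("list_snapshots"))`.
fn internal_status<E: std::fmt::Debug>(context: &'static str) -> impl Fn(E) -> tonic::Status {
    move |e| tonic::Status::internal(format!("{context} failed: {e:?}"))
}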
packages/rs-drive-abci/src/abci/handler/offer_snapshot.rs (3)
16-18: Avoid potential panic when converting app_hash

The try_into() method can fail if the app_hash does not have the expected length. Currently, the error handling uses a generic error message. Include specific details about the expected and actual lengths to improve error clarity. Apply this diff to provide detailed error information (note that app_hash has already been moved into try_into(), so the returned error value e is used to report the actual length):

 let request_app_hash: [u8; 32] = request.app_hash.try_into().map_err(|e| {
-    AbciError::StateSyncBadRequest("offer_snapshot invalid app_hash length".to_string())
+    AbciError::StateSyncBadRequest(format!(
+        "offer_snapshot invalid app_hash length: expected 32 bytes, got {} bytes",
+        e.len()
+    ))
 })?;
75-79: Consolidate duplicate error handling logic

The error handling code for wiping the GroveDB is duplicated in both branches of the conditional. Consider refactoring this into a separate function or closure to reduce code duplication and improve maintainability. Extract the duplicated code into a helper function:

fn wipe_grovedb(app: &A) -> Result<(), AbciError> {
    app.platform().drive.grove.wipe().map_err(|e| {
        AbciError::StateSyncInternalError(format!(
            "offer_snapshot unable to wipe grovedb: {}",
            e
        ))
    })
}

Then, replace the duplicated calls with:

wipe_grovedb(app)?;
69-72: Clarify error message when a newer snapshot is already being synced

The error message "offer_snapshot already syncing newest height" might be unclear to users. Consider rephrasing it to more clearly indicate that the offered snapshot is not newer than the current one. Apply this diff to improve the error message:

 return Err(Error::Abci(AbciError::StateSyncBadRequest(
-    "offer_snapshot already syncing newest height".to_string(),
+    "Offered snapshot height is not newer than the current syncing snapshot".to_string(),
 )));

packages/rs-drive-abci/src/abci/handler/apply_snapshot_chunk.rs (3)
76-76: Replace println! with proper logging macros

Using println! for logging is not recommended in production code. Consider using the tracing macros for consistent and configurable logging. Apply these diffs to replace println! with tracing macros:

For line 76:

-println!("[state_sync] state sync completed. verifying");
+tracing::info!("[state_sync] state sync completed. verifying");

For lines 97-100:

-println!(
-    "[state_sync] incorrect hash in prefix:{:?}",
-    hex::encode(prefix)
-);
+tracing::error!(
+    "[state_sync] incorrect hash in prefix:{:?}",
+    hex::encode(prefix)
+);

Also applies to: 97-100
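As a possible refinement beyond the diff above (an assumption about preferred style, not part of the original suggestion), tracing also supports structured fields, which keeps the prefix queryable in log aggregation; the log_incorrect_hash helper below is only illustrative:

fn log_incorrect_hash(prefix: &[u8]) {
    // Record the prefix as a structured field instead of formatting it into the message.
    tracing::error!(prefix = %hex::encode(prefix), "[state_sync] incorrect hash in prefix");
}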
87-91: Include underlying error details when wrapping errors

Currently, the error message does not include details from the underlying error, which can make debugging difficult. Including the error message provides more context. Apply this diff to include the underlying error:

-    AbciError::StateSyncInternalError(
-        "apply_snapshot_chunk unable to verify grovedb".to_string(),
-    )
+    AbciError::StateSyncInternalError(format!(
+        "apply_snapshot_chunk unable to verify grovedb: {}",
+        e
+    ))
71-75: Include underlying error details when wrapping errors

Similar to the previous comment, include the error message from commit_session to aid in debugging. Apply this diff to include the underlying error:

-    AbciError::StateSyncInternalError(
-        "apply_snapshot_chunk unable to commit session".to_string(),
-    )
+    AbciError::StateSyncInternalError(format!(
+        "apply_snapshot_chunk unable to commit session: {}",
+        e
+    ))

packages/rs-drive-abci/src/abci/app/consensus.rs (1)
40-50: Refactor shared initialization code to avoid duplication

The snapshot_manager initialization code is duplicated in both ConsensusAbciApplication and FullAbciApplication. Refactoring this into a common helper function promotes code reuse and maintainability. Consider creating a method on the Platform struct, or a separate helper function, to initialize the SnapshotManager:

impl<C> Platform<C> {
    fn create_snapshot_manager(&self) -> Result<SnapshotManager, Error> {
        let checkpoints_path = self
            .config
            .state_sync_config
            .checkpoints_path
            .to_str()
            .ok_or_else(|| {
                Error::InitializationError(
                    "Invalid checkpoints path: non-UTF8 characters present".to_string(),
                )
            })?
            .to_string();
        Ok(SnapshotManager::new(
            checkpoints_path,
            self.config.state_sync_config.max_num_snapshots,
            self.config.state_sync_config.snapshots_frequency,
        ))
    }
}

Then, in ConsensusAbciApplication::new:

let snapshot_manager = platform.create_snapshot_manager()?;

Adjust error handling as needed.
packages/rs-drive-abci/src/abci/app/full.rs (1)
40-50: Refactor shared initialization code to avoid duplication

The initialization code for snapshot_manager is duplicated across multiple files. Refactoring this code into a shared function improves maintainability. Refer to the suggestion made for ConsensusAbciApplication and apply the same refactoring here.

packages/rs-drive-abci/tests/strategy_tests/state_sync.rs (1)
255-271: Fix the elapsed time measurement for per-chunk processing

The elapsed variable is calculated using start_time.elapsed(), which measures the total time since the beginning of the test, not the time taken to process the current chunk. To accurately measure the time per chunk, initialize a new Instant at the start of each loop iteration. Apply this diff to correct the elapsed time measurement:

 while let Some(chunk_id) = chunk_queue.pop_front() {
+    let start_time_chunk = Instant::now();
     // ... existing code ...
-    let elapsed = start_time.elapsed();
+    let elapsed = start_time_chunk.elapsed();
     // ... existing code ...
 }

packages/rs-drive-abci/src/abci/app/mod.rs (2)
28-32: Add documentation for the SnapshotManagerApplication trait.

The trait would benefit from more detailed documentation explaining:

- Its purpose and responsibilities
- When it should be implemented
- How it relates to state synchronization

 /// Platform-based ABCI application
 pub trait SnapshotManagerApplication {
-    /// Returns Platform
+    /// Returns a reference to the SnapshotManager instance
+    ///
+    /// This trait provides snapshot management capabilities to ABCI applications,
+    /// enabling state synchronization features like creating, listing, and managing snapshots.
     fn snapshot_manager(&self) -> &SnapshotManager;
 }
52-59: Add documentation for the SnapshotFetchingApplication trait.

The trait would benefit from more detailed documentation explaining:

- Its purpose and responsibilities
- The relationship between snapshot fetching sessions and platform
- Generic parameter constraints

-/// Application that can maintain state sync
+/// Application that manages state synchronization through snapshot fetching
+///
+/// This trait provides capabilities for maintaining state synchronization by:
+/// - Managing snapshot fetching sessions
+/// - Accessing the platform instance for state operations
+///
+/// Type Parameters:
+/// - 'p: Lifetime of the Platform reference
+/// - C: Type implementing core RPC functionality
 pub trait SnapshotFetchingApplication<'p, C> {
     /// Returns the current snapshot fetching session
     fn snapshot_fetching_session(&self) -> &RwLock<Option<SnapshotFetchingSession<'p>>>;

packages/rs-drive-abci/src/abci/error.rs (1)
57-64: LGTM! Consider adding documentation examples.

The new error variants for state sync are well-structured and appropriately separated between client and server errors. Consider adding documentation examples to illustrate typical error scenarios. Add documentation examples:

/// Client State sync bad request
///
/// # Examples
///
/// // Example of a bad request when chunk height is invalid
/// StateSyncBadRequest("Invalid chunk height: expected 100, got 50".to_string())
#[error("bad request state sync: {0}")]
StateSyncBadRequest(String),

/// Server State sync internal error
///
/// # Examples
///
/// // Example of an internal error when snapshot creation fails
/// StateSyncInternalError("Failed to create snapshot: IO error".to_string())
#[error("internal error state sync: {0}")]
StateSyncInternalError(String),

packages/rs-drive-abci/src/abci/handler/finalize_block.rs (2)
103-103: Remove unnecessary clone for primitive type.

The clone() call is unnecessary for block_height, as it's likely a primitive type (u64/i64) which implements Copy.

- .store(block_height.clone(), Ordering::Relaxed);
+ .store(block_height, Ordering::Relaxed);
105-114: Improve snapshot creation error handling and conditional check.

The snapshot creation logic can be improved in terms of error handling and Rust idioms.

- if (app.platform().config.state_sync_config.snapshots_enabled) {
+ if app.platform().config.state_sync_config.snapshots_enabled {
     app.snapshot_manager()
         .create_snapshot(&app.platform().drive.grove, block_height as i64)
         .map_err(|e| {
-            Error::Execution(ExecutionError::CorruptedDriveResponse(format!(
-                "Unable to create snapshot:{}",
-                e
-            )))
+            Error::Execution(ExecutionError::CorruptedDriveResponse(
+                format!("Unable to create snapshot: {e}")
+            ))
         })?;
 }

packages/rs-drive/src/error/drive.rs (1)
191-194: Enhance snapshot error variant with more specific details.

The current Snapshot error variant could be more informative by including specific error types or codes. Consider replacing it with a more detailed variant:

- /// Error todo
- #[error("snapshot error")]
- Snapshot(String),
+ /// Snapshot operation errors
+ #[error("snapshot error: {kind}: {message}")]
+ Snapshot {
+     /// Type of snapshot error
+     kind: SnapshotErrorKind,
+     /// Detailed error message
+     message: String,
+ },

Add the error kind enum:

/// Types of snapshot errors
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum SnapshotErrorKind {
    /// Error creating snapshot
    Creation,
    /// Error loading snapshot
    Loading,
    /// Error applying snapshot chunk
    ChunkApplication,
}
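Note that the {kind} placeholder in the suggested #[error(...)] attribute requires SnapshotErrorKind to implement Display; a minimal sketch of such an impl (an addition to the suggestion above, not existing code) could look like:

use std::fmt;

impl fmt::Display for SnapshotErrorKind {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            SnapshotErrorKind::Creation => write!(f, "creation"),
            SnapshotErrorKind::Loading => write!(f, "loading"),
            SnapshotErrorKind::ChunkApplication => write!(f, "chunk application"),
        }
    }
}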
packages/rs-drive-abci/.env.local (1)

16-16: Add a descriptive comment for CHECKPOINTS_PATH.

Consider adding a descriptive comment explaining the purpose of this variable, similar to how GROVEDB_LATEST_FILE is documented.

+# Path to store ABCI state sync checkpoints
 CHECKPOINTS_PATH=${DB_PATH}/checkpoints
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)

- Cargo.lock is excluded by !**/*.lock
- packages/dashmate/templates/platform/drive/tenderdash/config.toml.dot is excluded by !**/*.dot
📒 Files selected for processing (22)
- packages/rs-drive-abci/.env.local (1 hunks)
- packages/rs-drive-abci/src/abci/app/consensus.rs (4 hunks)
- packages/rs-drive-abci/src/abci/app/full.rs (4 hunks)
- packages/rs-drive-abci/src/abci/app/mod.rs (2 hunks)
- packages/rs-drive-abci/src/abci/app/state_source.rs (1 hunks)
- packages/rs-drive-abci/src/abci/config.rs (2 hunks)
- packages/rs-drive-abci/src/abci/error.rs (1 hunks)
- packages/rs-drive-abci/src/abci/handler/apply_snapshot_chunk.rs (1 hunks)
- packages/rs-drive-abci/src/abci/handler/finalize_block.rs (3 hunks)
- packages/rs-drive-abci/src/abci/handler/list_snapshots.rs (1 hunks)
- packages/rs-drive-abci/src/abci/handler/load_snapshot_chunk.rs (1 hunks)
- packages/rs-drive-abci/src/abci/handler/mod.rs (1 hunks)
- packages/rs-drive-abci/src/abci/handler/offer_snapshot.rs (1 hunks)
- packages/rs-drive-abci/src/config.rs (7 hunks)
- packages/rs-drive-abci/src/platform_types/mod.rs (1 hunks)
- packages/rs-drive-abci/src/platform_types/snapshot/mod.rs (1 hunks)
- packages/rs-drive-abci/src/server.rs (2 hunks)
- packages/rs-drive-abci/tests/strategy_tests/main.rs (1 hunks)
- packages/rs-drive-abci/tests/strategy_tests/state_sync.rs (1 hunks)
- packages/rs-drive/Cargo.toml (1 hunks)
- packages/rs-drive/src/error/drive.rs (1 hunks)
- packages/rs-platform-version/Cargo.toml (1 hunks)
✅ Files skipped from review due to trivial changes (2)
- packages/rs-drive-abci/src/platform_types/mod.rs
- packages/rs-drive-abci/tests/strategy_tests/main.rs
⏰ Context from checks skipped due to timeout of 90000ms (17)
- GitHub Check: Rust packages (dpp) / Unused dependencies
- GitHub Check: Rust packages (dpp) / Tests
- GitHub Check: Rust packages (dpp) / Linting
- GitHub Check: Rust packages (dpp) / Check each feature
- GitHub Check: Rust packages (dash-sdk) / Unused dependencies
- GitHub Check: Rust packages (dash-sdk) / Tests
- GitHub Check: Rust packages (dash-sdk) / Linting
- GitHub Check: Rust packages (dash-sdk) / Check each feature
- GitHub Check: Rust packages (dapi-grpc) / Unused dependencies
- GitHub Check: Rust packages (dapi-grpc) / Tests
- GitHub Check: Rust packages (dapi-grpc) / Linting
- GitHub Check: Rust packages (dapi-grpc) / Check each feature
- GitHub Check: Build Docker images (Drive, drive, drive-abci) / Build Drive image
- GitHub Check: Build Docker images (DAPI, dapi, dapi) / Build DAPI image
- GitHub Check: Build Docker images (Dashmate helper, dashmate-helper, dashmate-helper) / Build Dashmate helper image
- GitHub Check: Build JS packages / Build JS
- GitHub Check: Rust crates security audit
🔇 Additional comments (8)
packages/rs-platform-version/Cargo.toml (1)
14-14: Consider using a fixed version or commit hash instead of the develop branch.

Using a git dependency pointing to the develop branch can lead to build reproducibility issues and unexpected breaking changes. Consider one of these alternatives:

- Use a specific commit hash for better reproducibility:

grovedb-version = { git = "https://github.com/dashpay/grovedb", rev = "<commit-hash>" }

- Wait for a stable release and use version constraints:

grovedb-version = { version = "^2.2.0" }

Let's verify the current state of the grovedb repository:
packages/rs-drive/Cargo.toml (1)
Line range hint 91-116: Verify feature flags required for ABCI state sync.

The PR implements ABCI state sync, but it's unclear which feature flags are required for this functionality. Please:
- Document which feature flags are necessary for state sync
- Consider creating a dedicated feature flag group for state sync functionality
- Ensure all required grovedb features are enabled
Let's analyze the feature usage:
packages/rs-drive-abci/src/abci/handler/load_snapshot_chunk.rs (1)
39-40: Use consistent GroveDB versioning

The code uses PlatformVersion::latest().drive.grove_version to specify the GroveDB version. Ensure that the version used here aligns with the versions used elsewhere in the application to maintain compatibility.

packages/rs-drive-abci/tests/strategy_tests/state_sync.rs (1)
71-303: Test function run_state_sync is well-structured and effective

The run_state_sync function sets up the environment, runs the chain with the defined strategy, and tests the state synchronization process comprehensively. The use of temporary directories, configuration settings, and performance measurements demonstrates good practices in test implementation.

packages/rs-drive-abci/src/abci/app/mod.rs (1)
35-43: Verify the impact of lifetime parameter change.

The lifetime parameter change from 'a to 'p in the TransactionalApplication trait could affect existing implementations.

✅ Verification successful

Lifetime parameter change is safe

The change from 'a to 'p in the TransactionalApplication trait is a non-breaking change, as it only renames the generic lifetime parameter. Existing implementations in ConsensusAbciApplication and FullAbciApplication already use matching lifetimes, and all trait bounds in handlers are correctly aligned with their function signatures.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

#!/bin/bash
# Search for implementations of TransactionalApplication to verify the impact
ast-grep --pattern 'impl$_ TransactionalApplication<$_> for $_'

Length of output: 65

Script:

#!/bin/bash
# Search for any mentions of TransactionalApplication
rg "TransactionalApplication" -A 5

# Also try a more flexible ast-grep pattern
ast-grep --pattern 'impl TransactionalApplication'

# And check the specific file content
cat "packages/rs-drive-abci/src/abci/app/mod.rs"

Length of output: 10868
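For readers unfamiliar with why this is non-breaking: lifetime parameter names are local to each declaration, so renaming one in a trait definition does not change its public contract. A toy illustration (not the actual Drive traits):

trait TransactionalApp<'p> {
    fn label(&self) -> &'p str;
}

struct App<'p> {
    name: &'p str,
}

// An implementation written with a differently named lifetime (`'a`) still
// compiles, because only the positions of the parameters matter.
impl<'a> TransactionalApp<'a> for App<'a> {
    fn label(&self) -> &'a str {
        self.name
    }
}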
packages/rs-drive-abci/src/abci/handler/mod.rs (1)
38-48: LGTM! Well-structured module organization.

The new snapshot-related modules follow the existing pattern and are properly organized and exported.
Also applies to: 53-62
packages/rs-drive-abci/src/server.rs (1)
35-47: Consider service registration order impact.

While the implementation is correct, consider if the order of service registration matters for the ABCI protocol. Currently, we have:
- CheckTx service
- StateSource service
Verify if this order aligns with the ABCI protocol requirements or if it needs to be documented.
✅ Verification successful
Service registration order appears non-critical
After thorough verification of the codebase, there's no evidence suggesting that the ABCI service registration order is critical. The services appear to operate independently, handling different types of requests. The current implementation is correct as is.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

#!/bin/bash
# Search for any documentation or comments about ABCI service order requirements
rg -i "service.*order" --type md
rg -i "abci.*service.*order" --type md

Length of output: 284

Script:

#!/bin/bash
# Search for ABCI service implementations and registration patterns
ast-grep --pattern 'Server::builder().$$$add_service($$$)'

# Look for comments about ABCI services and configuration
rg -i "abci.*service" -A 3 -B 3 --type rust --type ts

# Search for any configuration or setup related comments
rg -i "(abci|service).*configuration" --type rust

Length of output: 530
packages/rs-drive-abci/src/abci/config.rs (1)
90-106: Review environment-specific configurations.

The snapshots_frequency and max_num_snapshots values are identical for the local and testnet environments. Consider whether these should differ (a sketch of per-network defaults follows the list below):
- Local environment might benefit from more frequent snapshots for development
- Testnet might need different retention policies
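A minimal sketch of what diverging per-network defaults might look like, reusing the StateSyncAbciConfig fields shown later in this review; the concrete values and constructor names here are hypothetical, not the ones used by Drive:

use std::path::PathBuf;

impl StateSyncAbciConfig {
    /// Hypothetical local defaults: snapshot often so developers can exercise state sync quickly.
    pub fn default_local() -> Self {
        Self {
            snapshots_enabled: true,
            checkpoints_path: PathBuf::from("/tmp/db/checkpoints"),
            snapshots_frequency: 10,
            max_num_snapshots: 3,
        }
    }

    /// Hypothetical testnet defaults: snapshot less often and retain more history.
    pub fn default_testnet() -> Self {
        Self {
            snapshots_enabled: true,
            checkpoints_path: PathBuf::from("/var/lib/dash-platform/data/checkpoints"),
            snapshots_frequency: 1000,
            max_num_snapshots: 10,
        }
    }
}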
grovedb = { git = "https://github.com/dashpay/grovedb", branch = "develop", optional = true, default-features = false }
grovedb-costs = { git = "https://github.com/dashpay/grovedb", branch = "develop", optional = true }
grovedb-path = { git = "https://github.com/dashpay/grovedb", branch = "develop" }
grovedb-storage = { git = "https://github.com/dashpay/grovedb", branch = "develop", optional = true }
grovedb-version = { git = "https://github.com/dashpay/grovedb", branch = "develop" }
grovedb-epoch-based-storage-flags = { git = "https://github.com/dashpay/grovedb", branch = "develop" }
💡 Codebase verification
Pin grovedb dependencies to specific commit for reproducible builds
All grovedb dependencies are using the develop branch, which can lead to inconsistent versions between builds. To ensure reproducibility and prevent potential compatibility issues:
- Replace all grovedb git dependencies with a specific commit hash:
grovedb = { git = "https://github.com/dashpay/grovedb", rev = "<commit-hash>", optional = true, default-features = false }
grovedb-costs = { git = "https://github.com/dashpay/grovedb", rev = "<commit-hash>", optional = true }
# ... apply same pattern to other grovedb dependencies
- Use the same commit hash across all packages that depend on grovedb to maintain consistency.
🔗 Analysis chain
Ensure consistent versioning across all grovedb dependencies.
All grovedb dependencies are pointing to the develop branch, which could lead to version incompatibilities if different components evolve at different rates. Consider:
- Using the same commit hash across all dependencies:
[dependencies]
grovedb = { git = "https://github.com/dashpay/grovedb", rev = "<commit-hash>", optional = true, default-features = false }
grovedb-costs = { git = "https://github.com/dashpay/grovedb", rev = "<commit-hash>", optional = true }
# ... apply same pattern to other grovedb dependencies
- Using workspace dependencies to ensure consistency:
[workspace.dependencies]
grovedb-common = { git = "https://github.com/dashpay/grovedb", rev = "<commit-hash>" }
[dependencies]
grovedb = { workspace = true, optional = true, default-features = false }
# ... reference workspace for other grovedb dependencies
Let's verify the dependency tree for potential conflicts:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if all grovedb dependencies resolve to the same commit
cargo tree | grep -A 1 'grovedb'
# Look for any dependency conflicts
cargo tree -d | grep 'grovedb'
Length of output: 255
Script:
#!/bin/bash
# Check for Cargo.lock and workspace configurations
fd "Cargo\.(lock|toml)" --type f --exec cat {}
# Look for any workspace configurations
rg "^\[workspace\]" -A 5
# Search for grovedb references in all Cargo.toml files
rg "grovedb.*git.*" --type toml
Length of output: 65647
Script:
#!/bin/bash
# Search for all Cargo.toml files and their content
fd "Cargo.toml" --type f --exec echo "=== {} ===" \; --exec grep -A 10 "grovedb" {} \;
# Look for any workspace dependency declarations
fd "Cargo.toml" --type f --exec grep -l "workspace.dependencies" {} \;
# Search for any commit hashes in git dependencies
fd "Cargo.toml" --type f --exec grep -A 1 "git.*grovedb.*rev\s*=" {} \;
Length of output: 4173
.checkpoints_path
.to_str()
.unwrap()
.to_string(),
🛠️ Refactor suggestion
Handle potential None value from to_str() to avoid panic

Using unwrap() on to_str() may cause a panic if the path contains invalid Unicode. To prevent this, handle the Option returned by to_str() and provide an appropriate error message if the conversion fails.

Apply this diff to safely handle the conversion:
let snapshot_manager = SnapshotManager::new(
platform
.config
.state_sync_config
.checkpoints_path
- .to_str()
- .unwrap()
- .to_string(),
+ .to_str()
+ .ok_or_else(|| {
+ std::io::Error::new(
+ std::io::ErrorKind::InvalidData,
+ "Invalid checkpoints_path Unicode",
+ )
+ })?
+ .to_string(),
platform.config.state_sync_config.max_num_snapshots,
platform.config.state_sync_config.snapshots_frequency,
);
Also, update the function signature to return Result<Self, std::io::Error>:

- pub fn new(platform: Arc<Platform<C>>) -> Self {
+ pub fn new(platform: Arc<Platform<C>>) -> Result<Self, std::io::Error> {

And handle the result accordingly when constructing StateSourceAbciApplication.
Committable suggestion skipped: line range outside the PR's diff.
for element in incorrect_hashes.keys() {
    let subtree_path: Vec<&[u8]> = element.iter().map(|vec| vec.as_slice()).collect();
    let path: &[&[u8]] = &subtree_path;
    let prefix = RocksDbStorage::build_prefix(path.into()).unwrap();
Handle potential error from build_prefix() instead of using unwrap()

Using unwrap() on build_prefix() can lead to a panic if an error occurs. It's safer to handle the Result and return an appropriate error.

Apply this diff to handle the error:
-let prefix = RocksDbStorage::build_prefix(path.into()).unwrap();
+let prefix = RocksDbStorage::build_prefix(path.into()).map_err(|e| {
+ AbciError::StateSyncInternalError(format!(
+ "apply_snapshot_chunk failed to build prefix: {}",
+ e
+ ))
+})?;
.to_str()
.unwrap()
Handle potential None from to_str() instead of using unwrap()

Using unwrap() on to_str() may cause a panic if the path contains invalid UTF-8 characters. Consider handling the Option to prevent a possible panic.

Apply this diff to handle the potential None value:
-let snapshot_manager = SnapshotManager::new(
- platform
- .config
- .state_sync_config
- .checkpoints_path
- .to_str()
- .unwrap()
- .to_string(),
+let checkpoints_path_str = platform
+ .config
+ .state_sync_config
+ .checkpoints_path
+ .to_str()
+ .ok_or_else(|| {
+ Error::InitializationError("Invalid checkpoints path: non-UTF8 characters present".to_string())
+ })?
+ .to_string();
+
+let snapshot_manager = SnapshotManager::new(
+ checkpoints_path_str,
platform.config.state_sync_config.max_num_snapshots,
platform.config.state_sync_config.snapshots_frequency,
);
Committable suggestion skipped: line range outside the PR's diff.
.unwrap()
.to_string(),
Handle potential None from to_str() instead of using unwrap()

Similar to the previous comment, using unwrap() on to_str() can lead to a panic if the path is invalid UTF-8. Handle the Option to ensure robustness.

Apply this diff to handle the potential None value:
-let snapshot_manager = SnapshotManager::new(
- platform
- .config
- .state_sync_config
- .checkpoints_path
- .to_str()
- .unwrap()
- .to_string(),
+let checkpoints_path_str = platform
+ .config
+ .state_sync_config
+ .checkpoints_path
+ .to_str()
+ .ok_or_else(|| {
+ Error::InitializationError("Invalid checkpoints path: non-UTF8 characters present".to_string())
+ })?
+ .to_string();
+
+let snapshot_manager = SnapshotManager::new(
+ checkpoints_path_str,
platform.config.state_sync_config.max_num_snapshots,
platform.config.state_sync_config.snapshots_frequency,
);
Committable suggestion skipped: line range outside the PR's diff.
let checkpoint_exists = |s: &Snapshot| -> bool {
    match GroveDb::open(&s.path) {
        Ok(_) => true,
        Err(_) => false,
    }
};
Prevent resource leaks by avoiding unnecessary GroveDb::open calls

Opening a GroveDb instance without proper closure can lead to resource leaks. Since you only need to check whether the checkpoint exists, consider checking if the directory exists instead of opening the database.
Apply this diff to avoid resource leaks:
+use std::path::Path;
let checkpoint_exists = |s: &Snapshot| -> bool {
- match GroveDb::open(&s.path) {
- Ok(_) => true,
- Err(_) => false,
- }
+ Path::new(&s.path).exists()
};
/// Path to checkpoints
pub checkpoints_path: PathBuf,
🛠️ Refactor suggestion
Improve path configuration handling.
The checkpoints path is hardcoded for each environment. Consider:
- Making it configurable through environment variables
- Using platform-agnostic path handling
- Ensuring the path exists or is created during initialization
pub struct StateSyncAbciConfig {
// ...
- pub checkpoints_path: PathBuf,
+ #[serde(default = "default_checkpoints_path")]
+ pub checkpoints_path: PathBuf,
}
+impl StateSyncAbciConfig {
+ fn default_checkpoints_path() -> PathBuf {
+ std::env::var("DASH_PLATFORM_CHECKPOINTS_PATH")
+ .map(PathBuf::from)
+ .unwrap_or_else(|_| PathBuf::from("/var/lib/dash-platform/data/checkpoints"))
+ }
+}
Also applies to: 92-94, 101-103, 110-112
/// Configuration for StateSync feature
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct StateSyncAbciConfig {
    /// Enable snapshot
    pub snapshots_enabled: bool,

    /// Path to checkpoints
    pub checkpoints_path: PathBuf,

    /// Frequency of snapshot creation (in blocks)
    pub snapshots_frequency: i64,

    /// Maximum number of snapshots to keep
    pub max_num_snapshots: usize,
}
🛠️ Refactor suggestion
Add validation for configuration values.
The struct should validate that:

- snapshots_frequency is positive
- max_num_snapshots is greater than zero
+impl StateSyncAbciConfig {
+ pub fn validate(&self) -> Result<(), &'static str> {
+ if self.snapshots_frequency <= 0 {
+ return Err("snapshots_frequency must be positive");
+ }
+ if self.max_num_snapshots == 0 {
+ return Err("max_num_snapshots must be greater than zero");
+ }
+ Ok(())
+ }
+}
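A hypothetical call site for the validate() helper suggested above, for example during configuration loading (the load_state_sync_config name is illustrative, not existing Drive code):

fn load_state_sync_config(config: StateSyncAbciConfig) -> Result<StateSyncAbciConfig, String> {
    // Reject configurations that would silently disable or break snapshot creation.
    config
        .validate()
        .map_err(|e| format!("invalid state sync config: {e}"))?;
    Ok(config)
}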
/// State sync configuration
pub state_sync_config: StateSyncAbciConfig,
Include state_sync_config in the deserialization intermediate struct.

The state_sync_config field is not included in the PlatformConfigIntermediate struct, which means it won't be deserialized from environment variables or config files.
struct PlatformConfigIntermediate {
// ... other fields ...
+ #[serde(default)]
+ pub state_sync_config: StateSyncAbciConfig,
}
impl<'de> Deserialize<'de> for PlatformConfig {
fn deserialize<D>(deserializer: D) -> Result<PlatformConfig, D::Error>
where
D: Deserializer<'de>,
{
let config = PlatformConfigIntermediate::deserialize(deserializer)?;
Ok(PlatformConfig {
// ... other fields ...
- state_sync_config: StateSyncAbciConfig::default(),
+ state_sync_config: config.state_sync_config,
})
}
}
Also applies to: 281-281
@@ -12,6 +12,12 @@ ABCI_LOG_STDOUT_FORMAT=pretty
 ABCI_LOG_STDOUT_COLOR=true

 DB_PATH=/tmp/db

 CHECKPOINTS_PATH=${DB_PATH}/checkpoints
💡 Codebase verification
The codebase has a proper default production path (/var/lib/dash-platform/data), but it's being overridden by environment files to use temporary storage:

- .env.mainnet: DB_PATH=/tmp/db
- .env.testnet: DB_PATH=/tmp/db

This configuration will lead to data loss on system reboot. Update the environment files to use persistent storage paths instead of /tmp/db.
🔗 Analysis chain
Verify database paths for production environments.
Both CHECKPOINTS_PATH and GROVEDB_LATEST_FILE inherit from DB_PATH, which is set to /tmp/db. While this is fine for local development, using /tmp in production could lead to data loss as it's typically cleared on system reboot.

Run this script to check if there are any production environment files with similar configuration:

Also applies to: 19-19
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for production environment files and their DB_PATH configuration
# to ensure they don't use temporary directories.
echo "Searching for environment files:"
fd -t f "^\.env\.(production|prod)" packages/rs-drive-abci/
echo -e "\nChecking DB_PATH in found files:"
rg "^DB_PATH=" packages/rs-drive-abci/.env.*
Length of output: 702
Script:
#!/bin/bash
# Check for deployment documentation and configuration
echo "Searching for deployment-related files and documentation:"
fd -t f "^(deploy|docker|k8s|kubernetes|README)" packages/rs-drive-abci/
echo -e "\nChecking content of found files for database path configuration:"
rg -i "(\bdb[_\s-]path|database[_\s-]path|data[_\s-]dir)" packages/rs-drive-abci/
Length of output: 2638
Actionable comments posted: 1
♻️ Duplicate comments (1)
packages/rs-drive-abci/src/abci/app/consensus.rs (1)
41-51: ⚠️ Potential issue — Handle potential UTF-8 conversion failure

The to_str() call on the checkpoints path could fail if the path contains invalid UTF-8 characters.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- packages/rs-drive-abci/src/abci/app/consensus.rs (3 hunks)
- packages/rs-drive-abci/src/server.rs (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Build Docker images (Drive, drive, drive-abci) / Build Drive image
- GitHub Check: Build Docker images (DAPI, dapi, dapi) / Build DAPI image
- GitHub Check: Build Docker images (Dashmate helper, dashmate-helper, dashmate-helper) / Build Dashmate helper image
- GitHub Check: Build JS packages / Build JS
🔇 Additional comments (3)
packages/rs-drive-abci/src/abci/app/consensus.rs (3)
24-35: Good use of lifetime documentation and field organization

The struct is well-organized with clear documentation for the lifetime parameter. The new fields for snapshot management are logically grouped.

68-82: Well-structured trait implementations

The new trait implementations for SnapshotManagerApplication and SnapshotFetchingApplication are clean and follow good separation of concerns.

192-218: Verify error handling in snapshot operations

The snapshot-related ABCI methods are properly integrated, but we should verify the error handling in the corresponding handlers.
Let's check the error handling implementation:
✅ Verification successful
Error handling in snapshot operations is robust and comprehensive
All snapshot handlers implement proper error handling with:
- Detailed error messages for debugging
- Proper error propagation and mapping
- Comprehensive coverage of error cases
- Consistent cleanup on errors
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

#!/bin/bash
# Description: Check error handling in snapshot handlers
# Expected: Each handler should have proper error handling and mapping

# Look for error handling patterns in snapshot handlers
rg -A 5 "fn (offer_snapshot|apply_snapshot_chunk|list_snapshots|load_snapshot_chunk)" packages/rs-drive-abci/src/abci/handler/

Length of output: 2494

Script:

#!/bin/bash
# Check error handling implementation in snapshot handlers
# Look for error types and handling patterns

# First, check for error type definitions
rg "pub.*enum.*Error" packages/rs-drive-abci/src/abci/

# Then check actual implementations in handlers
for file in offer_snapshot apply_snapshot_chunk list_snapshots load_snapshot_chunk; do
  echo "=== $file implementation ==="
  cat "packages/rs-drive-abci/src/abci/handler/${file}.rs"
done

Length of output: 12836
@@ -31,6 +32,7 @@ pub fn start(

     let check_tx_service =
         CheckTxAbciApplication::new(Arc::clone(&platform), Arc::new(check_tx_core_rpc));
+    let state_source_service = StateSourceAbciApplication::new(Arc::clone(&platform));
Add state source service to the gRPC server
The state_source_service is instantiated but not added to the gRPC server. This will prevent the state source functionality from being accessible.
Add the service to the gRPC server by modifying the builder chain:
 let grpc_server = dapi_grpc::tonic::transport::Server::builder()
     .add_service(dapi_grpc::platform::v0::platform_server::PlatformServer::new(query_service))
     .add_service(
         tenderdash_abci::proto::abci::abci_application_server::AbciApplicationServer::new(
             check_tx_service,
         ),
-    );
+    )
+    .add_service(
+        tenderdash_abci::proto::abci::abci_application_server::AbciApplicationServer::new(
+            state_source_service,
+        ),
+    );
Also applies to: 37-43
Issue being fixed or feature implemented
Implementation of state sync abci applications.
What was done?
- SnapshotManager and its related configuration in PlatformConfig
- StateSourceAbciApplication for all the replication reading operations (for the source nodes)
- ConsensusAbciApplication
- FullAbciApplication for tests

How Has This Been Tested?
New strategy test. Comparing root hash, verifying grovedb after state sync is complete
Breaking Changes
to be determined
Checklist:
For repository code-owners and collaborators only
Summary by CodeRabbit
Based on the comprehensive changes, here are the release notes:
New Features

- StateSourceAbciApplication service to handle additional state source requests.

Configuration

Performance

Compatibility

Testing