Implement SnapSync for Juno #1828

Open
8 of 16 tasks
stranger80 opened this issue Apr 16, 2024 · 3 comments

stranger80 commented Apr 16, 2024

Tasks:

  • Leaf proof and verification

    • ...including the iterator (Rian)
  • Leaf range (Pawel & Rian): Implement the leaf range extraction and verification #1871

  • Refactor the original PoC code to align with the new range query interface (Ashraf)

  • Offline copy state via range (Ashraf)

    • Build the trie based on range query responses (see the sketch after this task list)
  • (somewhere at this point) the ability to prove any state (or the n most recent states)

  • 1. [4 days + 2 days?] (Actual: 12 days) Integrate with p2p (Ashraf & Pawel)

    • handle p2p messages, refactor snap_server - 2 days
    • add startup flags & start the snap syncer at node start - 1 day
    • prepare a synced Sepolia node and run a fresh snap sync to check that p2p is working - 1 day
    • fix storage range issues - 2 days?
  • 2. [2 days + 2 days?] (Actual: 8 days) Perform static sync on a cloud VM that contains:

    • a feeder node with a Sepolia snapshot and syncing disabled
    • a fresh node syncing from the above
  • 3. [1 day] (Actual: 2 days) Implement a hidden CLI flag in Juno to explicitly launch it in SnapSync mode
    (in review: Add sync mode p2p CLI flag #2186)

    • The flag has no effect on which p2p handlers are launched - Juno launches all p2p handlers (including the SnapSync-specific ones) in any case
    • The flag determines the node's behaviour at launch (the scenario of a node launching in p2p mode is explained here: (link required))
    • Merge SnapSync codebase into main

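As a reference for the "offline copy state via range" task above, here is a minimal, purely illustrative sketch of the core loop: request consecutive key ranges, verify each chunk's Patricia range proof against the target state root, and collect the verified leaves from which the local trie is then rebuilt. All names below (Felt, RangeChunk, verifyRangeProof, copyStateViaRanges) are placeholders invented for this sketch, not Juno's actual API, and the proof check is stubbed out.

// Purely illustrative sketch; the types and functions here are placeholders,
// not Juno's actual API.
package snapsketch

import "errors"

type Felt string // placeholder standing in for *felt.Felt

// RangeChunk is one response to a range query: a run of consecutive leaves
// plus a Patricia range proof covering them.
type RangeChunk struct {
	Keys   []Felt
	Values []Felt
	Proof  [][]byte
	Done   bool // true when no keys exist beyond the last one returned
}

// verifyRangeProof stands in for the real proof verification against the state root.
func verifyRangeProof(root Felt, chunk *RangeChunk) bool { return true }

// copyStateViaRanges rebuilds the leaf set of the trie committed to by root by
// repeatedly fetching the next chunk, resuming after the last verified key.
// The local trie is then built from the returned leaves.
func copyStateViaRanges(root Felt, fetch func(startAfter Felt) (*RangeChunk, error)) (map[Felt]Felt, error) {
	leaves := make(map[Felt]Felt)
	var start Felt
	for {
		chunk, err := fetch(start)
		if err != nil {
			return nil, err
		}
		if !verifyRangeProof(root, chunk) {
			return nil, errors.New("range proof verification failed")
		}
		for i, k := range chunk.Keys {
			leaves[k] = chunk.Values[i]
		}
		if chunk.Done || len(chunk.Keys) == 0 {
			return leaves, nil
		}
		start = chunk.Keys[len(chunk.Keys)-1]
	}
}
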
Status on 7th Oct:

  • All of the above have been completed and yielded additional improvement work (see point 4 below)

  • The final PR for task 2 requires merging (Snapsync with p2p #2120)

  • Actual elapsed time was 22 days against an original estimate of 11 days, i.e. roughly a 2x overrun.

  • We extrapolate this ratio to all remaining estimates.

  • 4. [2 days] Implement robustness improvements in P2P handlers

    • Blanket retry mechanism on the P2P layer (see the retry sketch after this list)
    • ...anything else?
  • 5. [] Snapshot Layer

  • 6. [3 days + 3 days] Implement healing, i.e. remember the start block and the last block jumped to, and apply the state diffs

    • this needs to be tested on a progressing Sepolia network?
  • 7. [???] Ensure that the node operates properly after the SnapSync phase completes:

    • Once a node has finished the SnapSync process (state is up to date but blocks are out of sync), it must continue with the "standard" syncing process to catch up with the blockchain.
    • It needs to be ensured that the "handover" from SnapSync to standard syncing is seamless and does not corrupt the Juno database (see the handover sketch after this list).
  • 8. [1 day + 1 day] Implement verification of a remote node's capabilities (i.e. does it support SnapSync?) when connecting to fetch SnapSync-specific data

  • 9. [5 days + 5 days] Sync Sepolia via SnapSync (2 nodes) & fix all issues in the process

    • this is tested on a progressing Sepolia network (not a static one)
  • 10. [2 days + 2 days] Open the PR with the snap-sync-p2p branch & code review

    • add more tests to push code coverage a bit higher - 2 days
  • 11. [5 days] Present and share the SnapSync implementation with other client teams

    • Raise all P2P spec updates with SW
    • Discuss the SnapSync impl details with other teams
    • Support other teams in their SnapSync implementations
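
The blanket retry mechanism in point 4 could be a thin generic wrapper around individual P2P requests. The following is a hypothetical sketch; the function name, attempt budget, and backoff values are made up for illustration and are not Juno's actual code.

// Hypothetical sketch of a blanket retry wrapper for P2P requests.
package p2pretry

import (
	"context"
	"time"
)

// withRetry retries fn with capped exponential backoff until it succeeds,
// the attempt budget is exhausted, or the context is cancelled.
func withRetry[T any](ctx context.Context, attempts int, fn func(context.Context) (T, error)) (T, error) {
	var zero T
	var err error
	backoff := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		var res T
		if res, err = fn(ctx); err == nil {
			return res, nil
		}
		select {
		case <-ctx.Done():
			return zero, ctx.Err()
		case <-time.After(backoff):
		}
		if backoff < 5*time.Second {
			backoff *= 2 // stop growing the delay once it reaches a few seconds
		}
	}
	return zero, err
}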
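
The handover requirement in point 7 amounts to: once the snap-synced state corresponds to some block, hand control to the standard block-by-block sync from that block onward, without re-applying or skipping anything already written. A purely hypothetical sketch of that control flow (SnapSyncer and BlockSyncer are illustrative interfaces, not Juno types):

// Hypothetical sketch of the SnapSync -> standard sync handover.
package handover

import "context"

type SnapSyncer interface {
	// Run snap-syncs the state and returns the block number that state corresponds to.
	Run(ctx context.Context) (syncedBlock uint64, err error)
}

type BlockSyncer interface {
	// RunFrom continues standard block-by-block syncing, starting after the given block.
	RunFrom(ctx context.Context, startAfter uint64) error
}

// syncWithSnap runs the snap phase first, then hands over to the standard
// syncer from the block the snap-synced state corresponds to.
func syncWithSnap(ctx context.Context, snap SnapSyncer, blocks BlockSyncer) error {
	syncedBlock, err := snap.Run(ctx)
	if err != nil {
		return err
	}
	return blocks.RunFrom(ctx, syncedBlock)
}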

asdacap commented May 20, 2024

// ContractRangeStreamingResult is one chunk of a contract range response,
// together with the range proof needed to verify it against the given roots.
type ContractRangeStreamingResult struct {
    ContractsRoot *felt.Felt
    ClassesRoot   *felt.Felt
    Range         []*spec.ContractState
    RangeProof    *spec.PatriciaRangeProof
}

type StorageRangeRequest struct {
    StateRoot     *felt.Felt
    ChunkPerProof uint64 // Missing in spec
    Queries       []*spec.StorageRangeQuery
}

type StorageRangeStreamingResult struct {
    ContractsRoot *felt.Felt
    ClassesRoot   *felt.Felt
    Range         []*spec.ContractStoredValue
    RangeProof    *spec.PatriciaRangeProof
}

type ClassRangeStreamingResult struct {
    ContractsRoot *felt.Felt
    ClassesRoot   *felt.Felt
    Range         []*spec.Class
    RangeProof    *spec.PatriciaRangeProof
}

// SnapServer serves snap sync range queries; each Get* call streams its
// results in chunks over the returned channel.
type SnapServer interface {
    GetContractRange(request *spec.ContractRangeRequest) (chan *ContractRangeStreamingResult, error)
    GetStorageRange(request *StorageRangeRequest) (chan *StorageRangeStreamingResult, error)
    GetClassRange(request *spec.ClassRangeRequest) (chan *ClassRangeStreamingResult, error)
}
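
For illustration, this is roughly how a caller might consume the channel-based interface above. It assumes (not stated in the snippet) that the server closes the returned channel once the requested range is exhausted or the request fails; package and import lines are omitted, as in the snippet itself.

// Hypothetical consumer of the channel-based SnapServer above.
func fetchContractRange(server SnapServer, req *spec.ContractRangeRequest) error {
    results, err := server.GetContractRange(req)
    if err != nil {
        return err
    }
    for res := range results {
        // res.Range carries this chunk's contract states; res.RangeProof
        // would be verified against res.ContractsRoot / res.ClassesRoot here.
        _ = res
    }
    return nil
}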


asdacap commented Jun 13, 2024

Changed the snap server interface.

type ContractRangeStreamingResult struct {
	ContractsRoot *felt.Felt
	ClassesRoot   *felt.Felt
	Range         []*spec.ContractState
	RangeProof    *spec.PatriciaRangeProof
}

type StorageRangeRequest struct {
	StateRoot     *felt.Felt
	ChunkPerProof uint64 // Missing in spec
	Queries       []*spec.StorageRangeQuery
}

type StorageRangeStreamingResult struct {
	ContractsRoot *felt.Felt
	ClassesRoot   *felt.Felt
	Range         []*spec.ContractStoredValue
	RangeProof    *spec.PatriciaRangeProof
}

type ClassRangeStreamingResult struct {
	ContractsRoot *felt.Felt
	ClassesRoot   *felt.Felt
	Range         *spec.Classes
	RangeProof    *spec.PatriciaRangeProof
}

// SnapServer serves snap sync range queries; each Get* call returns an
// iterator (iter.Seq2) yielding result chunks or an error, and GetClasses
// fetches specific classes by hash.
type SnapServer interface {
	GetContractRange(ctx context.Context, request *spec.ContractRangeRequest) iter.Seq2[*ContractRangeStreamingResult, error]
	GetStorageRange(ctx context.Context, request *StorageRangeRequest) iter.Seq2[*StorageRangeStreamingResult, error]
	GetClassRange(ctx context.Context, request *spec.ClassRangeRequest) iter.Seq2[*ClassRangeStreamingResult, error]
	GetClasses(ctx context.Context, classHashes []*felt.Felt) ([]*spec.Class, error)
}
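
With the iterator-based version, a caller ranges over the returned iter.Seq2 directly (Go 1.23+ range-over-func), and per-chunk errors arrive in-band. A rough consumption sketch, again with package and import lines omitted:

// Hypothetical consumer of the iterator-based SnapServer above.
func fetchStorageRanges(ctx context.Context, server SnapServer, req *StorageRangeRequest) error {
	for res, err := range server.GetStorageRange(ctx, req) {
		if err != nil {
			return err
		}
		// res.Range carries this chunk's stored values; res.RangeProof
		// would be verified against the relevant trie root here.
		_ = res
	}
	return nil
}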
