Bootstrap process #47
Closed
Labels: scope/required (Feature required to match go-libp2p-kad-dht)

Comments

guillaumemichel added the scope/required label on Jun 30, 2023
Added as a subtask of #2
iand added a commit that referenced this issue on Aug 16, 2023:
This adds a state machine for running the bootstrap query described in #45. The state machine is very simple (it runs a query that attempts to find the self node), but it drives the design of the coordinator in a number of ways:

- The coordinator now manages two state machines: bootstrap and user queries. It enforces the constraint that no user queries can be progressed while the bootstrap is running. This establishes the pattern for managing a set of state machines.
- Priority is simple: the coordinator first attempts to advance the bootstrap state machine, and only if it is idle, indicating the bootstrap has no further work, will it proceed to advance the query pool state machine.
- This changes the design of the state machine. Previously the state machine reacted to an incoming event passed to the `Advance` method. However, this causes complications in the presence of multiple state machines. What should happen if the bootstrap is waiting for a message response but the caller attempts to start a user query? The coordinator needs to remember the "add query" event until the bootstrap is complete, so that event would need to remain on the coordinator's queue. But the coordinator needs to read from the queue to detect whether an incoming message response is available for the bootstrap, without removing the "add query" event. Thus we need separate queues for the two state machines. Rather than manage those in the coordinator, we give each state machine its own event queue. External callers enqueue events and the state machine dequeues the next one each time it attempts to advance state.
- ~~The above change leads to a simple interface for state machines: an Enqueue method for notifying a new event and an Advance method that returns the next state.~~
- **update 2023-08-15**: The above change leads to a simple interface for state machines: an Advance method that accepts an event and returns the next state.
- Coordinator methods like StartQuery and StopQuery now enqueue an event for the query pool.
- A new Bootstrap method enqueues an event for the bootstrap state machine.
- **update 2023-08-15**: the queues for the state machines are managed by the coordinator, which allows state machines to be more cleanly composed into hierarchies (for example, the state machine managing the routing table include queue will use a query pool state machine, and this change eliminates the need to manage event queues of child state machines).

There are still some ugly parts which I may be able to fix within the scope of this PR:

- the coordinator implements a number of unused methods to conform to the scheduler.Scheduler interface. All that is needed is the RunOne method.
- ~~the name of the bootstrap query needs to be factored into a constant or remembered by the coordinator~~ the coordinator now uses a separate callback to deal with the bootstrap query instead of checking the query id
- ~~events are action.Action interfaces so they can use the standard queue interface. The Run method is unused. The queue could simply be a channel, or we could modify the queue interface to be parameterised by type, allowing us to have a queue of BootstrapEvents~~ (**removed 2023-08-15**)
- currently the bootstrap method expects a function that generates a FindNode request for the given node. FindNode is such a fundamental DHT operation that I think it should be provided as a method by the Endpoint

Fixes #47
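The pattern described in the PR, a single `Advance` method per state machine and a coordinator that only advances the query pool once the bootstrap reports idle, can be sketched as below. All type and function names here are illustrative stand-ins, not the actual go-libp2p-kad-dht API.

```go
package main

import "fmt"

// Event and State are placeholder types; the real code uses richer
// per-machine event and state hierarchies.
type Event int
type State int

const EvPoll Event = iota // "no external input": just make progress

const (
	StIdle    State = iota // machine has no further work
	StWaiting              // machine is busy (e.g. awaiting a response)
)

// StateMachine is the simple interface the PR settles on: an Advance
// method that accepts an event and returns the next state.
type StateMachine interface {
	Advance(ev Event) State
}

// bootstrap pretends it needs two advances before becoming idle.
type bootstrap struct{ stepsLeft int }

func (b *bootstrap) Advance(ev Event) State {
	if b.stepsLeft > 0 {
		b.stepsLeft--
		return StWaiting
	}
	return StIdle
}

// queryPool always has work to do in this sketch.
type queryPool struct{}

func (q *queryPool) Advance(ev Event) State { return StWaiting }

// runOne applies the priority rule from the PR: advance the bootstrap
// first, and only if it is idle advance the query pool.
func runOne(b, q StateMachine) string {
	if b.Advance(EvPoll) != StIdle {
		return "bootstrap"
	}
	q.Advance(EvPoll)
	return "querypool"
}

func main() {
	b := &bootstrap{stepsLeft: 2}
	q := &queryPool{}
	for i := 0; i < 4; i++ {
		fmt.Println(runOne(b, q)) // bootstrap, bootstrap, then querypool
	}
}
```

The key point is that events are delivered to `Advance` directly (or queued by the coordinator), so a pending user query never blocks the coordinator from feeding responses to the bootstrap machine.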
Once connected to a first peer in the DHT network, the DHT node must look up its own key and fill its routing table's buckets.
One way to implement this is to run the routing table refresh. Alternatively, the node can look up its own key and a random peer ID in each of its buckets in order to populate the buckets.
cc: @iand
Related to #2 and #45