Dynamic Honey Badger sender queue #226
In the case of the DHB sender queue, the extra complexity arises from managing the embedded HB instance. In particular, messages sent by HB should be post-processed by the containing DHB. Post-processing allows HB messages to be queued beyond a restart of the embedded HB instance. Two kinds of post-processing are required.
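A minimal sketch of one plausible shape of such post-processing, assuming hypothetical names (`SenderQueue`, `TaggedMessage`, `HbMessage`) rather than the actual hbbft API: outgoing HB messages are tagged with their HB epoch, messages a recipient cannot accept yet are held back, and the queue is owned by the containing DHB so it outlives restarts of the embedded instance.

```rust
use std::collections::VecDeque;

/// Placeholder for an outgoing message of the embedded HB instance.
struct HbMessage;

/// An outgoing HB message tagged with the epoch it belongs to.
struct TaggedMessage {
    hb_epoch: u64,
    message: HbMessage,
}

/// Holds back messages that a recipient cannot accept yet. Owned by the
/// containing DHB, so it survives restarts of the embedded HB instance.
#[derive(Default)]
struct SenderQueue {
    queue: VecDeque<TaggedMessage>,
}

impl SenderQueue {
    /// Post-processes an outgoing HB message: a message for an epoch the
    /// recipient has not reached yet is queued instead of sent.
    fn post_process(&mut self, msg: TaggedMessage, peer_epoch: u64) -> Option<TaggedMessage> {
        if msg.hb_epoch <= peer_epoch {
            Some(msg) // The recipient can accept this now.
        } else {
            self.queue.push_back(msg); // Keep it for later delivery.
            None
        }
    }

    /// Once the peer reports having reached `peer_epoch`, releases
    /// everything it can accept now.
    fn release(&mut self, peer_epoch: u64) -> Vec<TaggedMessage> {
        let (ready, rest): (Vec<_>, Vec<_>) =
            self.queue.drain(..).partition(|m| m.hb_epoch <= peer_epoch);
        self.queue = rest.into();
        ready
    }
}
```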
I don't think they should drop it: If I receive
I agree. In fact, in the code
The initial design changes accordingly. The epoch jump is a great optimisation. Or is it more than an optimisation in your view? I didn't add any epoch skips in this design because I thought it would be conceptually simpler to go through all queued messages. On the other hand, skipping the epochs in which there was no HB output makes perfect sense.
Sure, I'd just use an enum with variants.
I agree with using an enum.
Agreed! By an epoch skip I meant starting in an HB epoch other than 0.
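A hedged sketch of what such an enum could look like; the variant names and payload types here are illustrative assumptions, not the actual hbbft definitions. The point is that only the embedded HB messages carry an epoch tag and are subject to the sender queue.

```rust
/// Content of a DHB message, distinguishing the embedded HB messages
/// from the messages DHB produces itself.
enum MessageContent {
    /// A message of the embedded HB instance, tagged with the HB epoch
    /// it belongs to, so it can be queued across restarts.
    HoneyBadger { hb_epoch: u64, msg: Vec<u8> },
    /// A distributed key generation message for the current era.
    KeyGen(Vec<u8>),
    /// A signed vote to change the validator set.
    SignedVote(Vec<u8>),
}

/// Only `HoneyBadger` messages are held back for lagging recipients;
/// the other variants are handled by DHB directly.
fn needs_queueing(content: &MessageContent, peer_epoch: u64) -> bool {
    match content {
        MessageContent::HoneyBadger { hb_epoch, .. } => *hb_epoch > peer_epoch,
        MessageContent::KeyGen(_) | MessageContent::SignedVote(_) => false,
    }
}
```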
A problem still remains: What if the observer node is not accepting the epoch of the
To sum up on the observer problem. Letting
After some discussion I now think we do need to keep a map of observers in DHB itself.

I also realized that regarding observers we already make it the user's responsibility to keep an outgoing queue: the API requires that the user makes sure the observer is sent all messages starting at a specific epoch, regardless of when they actually manage to establish a connection. That's inconsistent: either the user should have all the responsibility for managing the queues, or none.

Option 1: Observers need to register; the user doesn't need to queue.

This proposal would remove the need to even queue outgoing messages for observers. (Or would it? What is the user supposed to do if a peer disconnects?) Let's add a separate map with public keys to
We would probably even get rid of

With this proposal, we could later implement the optimization of replacing a whole batch of messages queued for a lagging node with a single message containing the proof of that epoch's outcome.

Option 2: Make it the user's responsibility to do all of the above.

Expose all information about a message's epoch (both DHB and HB) and an instance's current epoch(s) to the user, so they can implement the queue themselves. The Calling
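A rough sketch of the registration map from Option 1, with all type names assumed for illustration: observers register with a public key and a starting epoch, DHB queues messages for them while they lag, and queued messages are flushed once the observer acknowledges an epoch.

```rust
use std::collections::BTreeMap;

/// Placeholder for a real public key type.
type PublicKey = [u8; 32];

/// State kept per registered observer.
struct Observer {
    /// The first epoch the observer asked to receive.
    start_epoch: u64,
    /// Messages queued while the observer lags, as
    /// (epoch, serialized message) pairs.
    pending: Vec<(u64, Vec<u8>)>,
}

/// The registration map from Option 1: DHB itself tracks observers and
/// queues their messages, so the user no longer has to.
#[derive(Default)]
struct ObserverMap {
    observers: BTreeMap<PublicKey, Observer>,
}

impl ObserverMap {
    /// Registers an observer that wants all messages from `epoch` on.
    fn register(&mut self, key: PublicKey, epoch: u64) {
        self.observers.insert(key, Observer { start_epoch: epoch, pending: Vec::new() });
    }

    /// Queues a message for every observer that asked for it.
    fn queue(&mut self, epoch: u64, msg: &[u8]) {
        for obs in self.observers.values_mut() {
            if epoch >= obs.start_epoch {
                obs.pending.push((epoch, msg.to_vec()));
            }
        }
    }

    /// Returns the queued messages an observer can receive now that it
    /// has acknowledged `acked_epoch`.
    fn flush(&mut self, key: &PublicKey, acked_epoch: u64) -> Vec<(u64, Vec<u8>)> {
        let obs = match self.observers.get_mut(key) {
            Some(obs) => obs,
            None => return Vec::new(),
        };
        let (ready, rest) = obs.pending.drain(..).partition(|&(e, _)| e <= acked_epoch);
        obs.pending = rest;
        ready
    }
}
```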
It turns out that observers are not essential for making changes to the set of validator nodes. DHB can manage an internal set of nodes for which there is an ongoing vote for
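A small sketch of that internal bookkeeping, under assumed names: DHB tracks the nodes with an ongoing vote to be added and treats them as additional message recipients, so they are up to date if and when the vote succeeds, without any external observer concept.

```rust
use std::collections::BTreeSet;

/// Placeholder for the real node identifier type.
type NodeId = u64;

/// Nodes with an ongoing vote to be added; they receive messages like
/// validators until the vote completes or is withdrawn.
#[derive(Default)]
struct PendingNodes {
    voted_in: BTreeSet<NodeId>,
}

impl PendingNodes {
    /// Called when a vote to add `node` starts.
    fn start_vote(&mut self, node: NodeId) {
        self.voted_in.insert(node);
    }

    /// Called when the vote for `node` completes or is withdrawn.
    fn end_vote(&mut self, node: &NodeId) {
        self.voted_in.remove(node);
    }

    /// Whether outgoing messages should also be sent to `node`.
    fn is_recipient(&self, node: &NodeId) -> bool {
        self.voted_in.contains(node)
    }
}
```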
I'm not entirely sure, but I'd lean towards deprecating the concept of observers as well, leaving it to a higher layer. We should probably implement the functionality described in Option 1 in a separate module in either hbbft or hydrabadger, structured in such a way that it can be integrated into Parity, or at least be used as a reference implementation.
Actually, if it could be added to hydrabadger experimentally for now, that would be great. Keeping it separate might also help clarify the required API.
Issue #43 needs an extra bit of design for queueing and dequeueing Honey Badger (HB) messages. An instance of Dynamic Honey Badger (DHB) contains an embedded instance of HB. This embedded HB instance is restarted on DKG change or completion events. When restarting it, any queued messages should be stored in the DHB instance for later delivery.
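As a rough illustration of that requirement (all names here are hypothetical, not the hbbft API): the outgoing queue is owned by the DHB instance rather than the embedded HB instance, so replacing the latter on a DKG event leaves queued messages intact.

```rust
use std::collections::VecDeque;

/// Placeholder for the embedded HB instance; `era` counts restarts.
struct HoneyBadger {
    era: u64,
}

/// A queued outgoing message, tagged with the era it was produced in.
struct QueuedMessage {
    era: u64,
    payload: Vec<u8>,
}

/// The containing DHB instance owns the queue, so restarting `hb` does
/// not lose queued messages.
struct DynamicHoneyBadger {
    hb: HoneyBadger,
    queued: VecDeque<QueuedMessage>,
}

impl DynamicHoneyBadger {
    /// Queues a message produced by the embedded instance.
    fn queue(&mut self, payload: Vec<u8>) {
        self.queued.push_back(QueuedMessage { era: self.hb.era, payload });
    }

    /// Restarts the embedded HB instance after a DKG change or
    /// completion event. The queue lives in DHB and survives; whether
    /// messages from earlier eras are still delivered or eventually
    /// pruned is a policy decision left open here.
    fn restart(&mut self) {
        self.hb = HoneyBadger { era: self.hb.era + 1 };
    }
}
```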