Reduce risk of losing data due to garbage collection or restarts #2252
Draft
jbearer wants to merge 8 commits into main from jb/gc
Conversation
Previously, with each new decide we would delete all data from consensus storage up to the decided view. If we miss a decide event, this can cause us to delete the associated data (e.g. DA and VID proposals) for a view that we never actually moved to archival storage. This in turn makes it much harder to rely on any guarantees from consensus about certain nodes possessing certain data after a decide, because we are garbage collecting that very data. One particularly bad case involves DA nodes: consensus guarantees that some honest DA nodes will see every DA proposal, but if those nodes all restart at the same time shortly after seeing a DA proposal, they may miss the corresponding quorum proposal and decide. When they restart, they may garbage collect that DA proposal before anyone realizes it was decided, and then no one has the data.

The new technique is more conservative. We only garbage collect specific views, or ranges of views, for which we know we have successfully processed all decide events. Other data, whether for views that never decided or for views where we missed a decide (we cannot immediately tell the difference), is retained indefinitely. This ensures we never lose data before it is archived, and allows us to manually rebuild an incomplete archive once the gap is discovered. It also enables a catchup data source that pulls from this store of undecided, un-garbage-collected data, so that the archive can automatically rebuild itself.

Of course, indefinitely retaining data which may not even have been decided is undesirable. The next commit will add pruning, so that all data is deleted after a certain number of views, even if we never archived it. The guarantee will then be: we can always recover a complete archive, based on the guarantees of consensus, as long as recovery completes within a certain (configurable) time period.
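As a rough sketch of the idea (not the actual storage API in this repo), a tracker like the one below only offers views for collection once their decide events are known to be archived; `DecidedRanges`, `mark_archived`, and `can_collect` are hypothetical names.

```rust
use std::ops::RangeInclusive;

type ViewNumber = u64;

/// Hypothetical tracker: remember which view ranges had every decide event
/// archived successfully, and only offer those views for garbage collection.
#[derive(Default)]
struct DecidedRanges {
    processed: Vec<RangeInclusive<ViewNumber>>,
}

impl DecidedRanges {
    /// Called once a decide covering `range` has been fully moved to archival storage.
    fn mark_archived(&mut self, range: RangeInclusive<ViewNumber>) {
        self.processed.push(range);
    }

    /// A view may be collected only if it falls inside a fully processed range.
    /// Views with missed decides (indistinguishable from never-decided views)
    /// stay in consensus storage so the archive can be rebuilt from them later.
    fn can_collect(&self, view: ViewNumber) -> bool {
        self.processed.iter().any(|r| r.contains(&view))
    }
}

fn main() {
    let mut gc = DecidedRanges::default();
    gc.mark_archived(0..=41);
    assert!(gc.can_collect(40));
    assert!(!gc.can_collect(42)); // decide for view 42 not processed yet: keep its data
}
```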
Add target and minimum retentions and a target usage, like the archival pruner. This allows us to take full advantage of the storage space when we have it, keeping data around for longer, while still ensuring we keep it around long *enough* even when we are low on space.
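A minimal sketch of how such retention parameters might interact is below; `PruningConfig` and its fields are illustrative assumptions, not the crate's real configuration.

```rust
use std::time::Duration;

/// Hypothetical pruning parameters mirroring the description above.
struct PruningConfig {
    /// Never prune data younger than this, even under storage pressure.
    minimum_retention: Duration,
    /// Prune data older than this regardless of available space.
    target_retention: Duration,
    /// Total space budget, in bytes.
    storage_limit: u64,
    /// Fraction of `storage_limit` (e.g. 0.9) above which early pruning kicks in.
    target_usage: f64,
}

impl PruningConfig {
    /// Decide whether data of a given age may be pruned at the current usage.
    fn may_prune(&self, age: Duration, usage: u64) -> bool {
        if age >= self.target_retention {
            true // past the target retention: always prunable
        } else if age >= self.minimum_retention {
            // between minimum and target retention: prune only when low on space
            usage as f64 > self.target_usage * self.storage_limit as f64
        } else {
            false // within the minimum retention: never prune
        }
    }
}

fn main() {
    let cfg = PruningConfig {
        minimum_retention: Duration::from_secs(60 * 60),     // 1 hour
        target_retention: Duration::from_secs(24 * 60 * 60), // 1 day
        storage_limit: 10 * 1024 * 1024 * 1024,              // 10 GiB
        target_usage: 0.9,
    };
    // Data 2 hours old is only pruned if usage exceeds 90% of the limit.
    assert!(!cfg.may_prune(Duration::from_secs(2 * 60 * 60), 5 * 1024 * 1024 * 1024));
}
```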
Partially addresses an issue we saw on water, where all DA nodes restarting at the same time can cause loss of data. The problematic sequence of events is:

1. Some honest DA nodes see a DA proposal (consensus guarantees this).
2. Those nodes all restart at the same time, shortly after seeing the proposal, and miss the corresponding quorum proposal and decide.
3. After restarting, they garbage collect the DA proposal before anyone realizes it was decided, and now no one has the data.

This change addresses this potential problem in two ways.
This PR:

- Garbage collects consensus storage more conservatively: only views, or ranges of views, for which every decide event was fully processed are deleted; everything else is retained so an incomplete archive can be rebuilt from it.
- Adds pruning of that retained data, with target and minimum retentions and a target usage (like the archival pruner), so data that never made it to the archive is still deleted eventually.
This PR does not:

- Fully solve the restart issue. Specifically: the protocol guarantees that at least one DA node will store every DA proposal, and this change ensures every stored DA proposal will eventually make it to archival storage. We do not have the same guarantee for quorum proposals and certificates, because DA nodes are not required to form a QC. Thus, we can still have a situation where every DA node misses a quorum proposal or certificate; furthermore, the query service needs the quorum proposal for the block header before it can even try to fetch a missing DA proposal. That problem is solved by having non-DA nodes run a version of the availability service which stores leaves but not payloads, which will be facilitated by SQLite support (a rough sketch of such a mode follows below).
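As a very rough illustration of that follow-up, a leaf-only mode could be a simple storage toggle; `AvailabilityMode` and `should_store_payload` are hypothetical names, not part of the existing availability service.

```rust
/// Hypothetical storage mode for non-DA nodes: store leaves (and thus block
/// headers) but skip payloads, so every node can still serve the header needed
/// to fetch a missing DA proposal later.
enum AvailabilityMode {
    /// Store both leaves and payloads (DA / archival nodes).
    Full,
    /// Store leaves only; payloads are fetched on demand from DA nodes.
    LeafOnly,
}

fn should_store_payload(mode: &AvailabilityMode) -> bool {
    matches!(mode, AvailabilityMode::Full)
}

fn main() {
    assert!(should_store_payload(&AvailabilityMode::Full));
    assert!(!should_store_payload(&AvailabilityMode::LeafOnly));
}
```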