release: create v0.3.3 rc2 release branch #782
Merged
With this commit we allow the universe RPC courier to short-circuit the iterative proof retrieval process if we arrive at a proof in the chain that is already present in the local archive. Since the local archive only stores full chains of proofs, once we get one proof from it, the full retrieval process is complete.
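The short-circuit can be sketched as follows. This is a minimal illustration with hypothetical types and maps standing in for the remote universe and the local archive; the actual courier in the codebase uses different signatures.

```go
package main

import (
	"errors"
	"fmt"
)

// proof is a stand-in for a single proof in a chain; prevID is empty for
// the genesis proof.
type proof struct {
	id     string
	prevID string
}

// fetchChain walks a proof chain backwards from the latest proof, querying
// the remote universe one proof at a time. As soon as the local archive
// already holds a proof in the chain, we stop: the local archive only
// stores full chains, so one hit gives us everything back to genesis.
func fetchChain(start string, remote map[string]proof,
	local map[string][]proof) ([]proof, error) {

	var chain []proof
	for id := start; id != ""; {
		// Short circuit: a local hit implies the full remaining chain.
		if full, ok := local[id]; ok {
			return append(chain, full...), nil
		}

		p, ok := remote[id]
		if !ok {
			return nil, errors.New("proof not found: " + id)
		}
		chain = append(chain, p)
		id = p.prevID
	}
	return chain, nil
}

func main() {
	remote := map[string]proof{
		"c": {id: "c", prevID: "b"},
		"b": {id: "b", prevID: "a"},
		"a": {id: "a"},
	}
	// The local archive already has the full chain ending at "b", so
	// only "c" needs to be fetched remotely.
	local := map[string][]proof{
		"b": {{id: "b", prevID: "a"}, {id: "a"}},
	}
	chain, _ := fetchChain("c", remote, local)
	fmt.Println(len(chain)) // full chain: c, b, a
}
```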
With this commit we make sure that proofs in the database don't collide if they use the same script key but different anchor transactions. This situation can occur when a TAP address receives multiple transfers. We also make sure we can query for the correct proof by additionally specifying the outpoint in the query.
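The reason the outpoint must be part of the uniqueness key can be shown with a small sketch (the key struct and values here are purely illustrative, not the actual schema):

```go
package main

import "fmt"

func main() {
	// Two transfers to the same TAP address reuse the script key but
	// anchor in different transactions. Keying by script key alone
	// would let the second proof overwrite the first; keying by
	// (scriptKey, outPoint) keeps both.
	type proofKey struct {
		scriptKey string
		outPoint  string
	}
	store := map[proofKey][]byte{}

	store[proofKey{scriptKey: "key", outPoint: "txid1:0"}] = []byte("proof1")
	store[proofKey{scriptKey: "key", outPoint: "txid2:1"}] = []byte("proof2")

	fmt.Println(len(store)) // both proofs survive: 2
}
```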
Since UpsertAssetProof now contains a JOIN with managed_utxos, we cannot use it to update proofs that haven't been re-anchored yet. And because we re-anchor passive assets only _after_ updating the proofs, this would previously result in no rows being updated. We could change the order of operations instead, but mixing a WHERE clause for a specific database ID with optional value-based queries wasn't particularly clean in the first place. So we opt for a more explicit upsert for database-ID-based proof identification.
This is a simple code move commit that removes a parsing function from the integration test and uses a commonly available one instead.
To make sure we don't accidentally overwrite a proof file when we receive to a TAP address multiple times, we also include the first couple of bytes of the outpoint TXID and the output index in the file name. We don't use the full outpoint, because on some operating systems the full path of a file (path and file name combined) must not exceed 256 characters. And since we already use 130 characters for the hex-encoded asset ID and script key, we need to shorten the outpoint somewhat. We will add a migration that renames existing files on disk in the next commit.
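A naming scheme along these lines could look like the sketch below. The prefix length, separator, and the exact way the outpoint is shortened are illustrative assumptions, not the scheme tapd actually uses:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"encoding/hex"
	"fmt"
)

// proofFileName builds a file name from the hex-encoded asset ID and script
// key (130 hex characters together) plus a short digest of the outpoint, so
// multiple receives to the same TAP address (same script key, different
// anchor transactions) get distinct files without blowing the ~256-char
// path limit some operating systems impose.
func proofFileName(assetID [32]byte, scriptKey [33]byte,
	txid [32]byte, outIndex uint32) string {

	// Hash TXID and index together, then keep only the first bytes:
	// enough to disambiguate, short enough to keep the path legal.
	var idx [4]byte
	binary.BigEndian.PutUint32(idx[:], outIndex)
	digest := sha256.Sum256(append(txid[:], idx[:]...))

	return fmt.Sprintf("%s-%s-%s.proof",
		hex.EncodeToString(assetID[:]),
		hex.EncodeToString(scriptKey[:]),
		hex.EncodeToString(digest[:8]))
}

func main() {
	var assetID [32]byte
	var scriptKey [33]byte
	var txid [32]byte
	txid[0] = 1

	name := proofFileName(assetID, scriptKey, txid, 0)
	fmt.Println(len(name)) // 64 + 1 + 66 + 1 + 16 + 6 = 154
}
```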
With this commit we make sure that whenever we start the file archive we migrate any proof files that use the old name to the new naming scheme.
Since we now require the proof outpoint to be specified in order to fetch the correct proof, we make sure we supply that part of the proof locator in all situations. We also make sure we specify the outpoint when storing proofs.
To get a unique lookup key, we misused a proof locator's hash as the key to look up passive asset proofs quickly, without needing to scan a slice. Because we don't have the same data available when creating the map as when accessing it further down in the asset database (mainly because we lack the previous outpoint of the passive assets being re-anchored), the hashes didn't match and proofs couldn't be found. As a compromise, we now map the proofs by asset ID and keep a slice of proofs in case there are multiple passive assets with the same asset ID. This is better than a completely flat slice of proofs (we don't have to scan through all of them), but it still requires _some_ iterating.
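The compromise structure is a map from asset ID to a small slice, along these lines (types and names are illustrative):

```go
package main

import "fmt"

// passiveProof is a stand-in for a passive asset proof awaiting
// re-anchoring.
type passiveProof struct {
	assetID string
	blob    []byte
}

// indexByAssetID groups proofs by asset ID. A lookup then narrows the
// search to the (usually very short) slice for one asset ID instead of
// scanning every proof, while still tolerating multiple passive assets
// that share an asset ID.
func indexByAssetID(proofs []passiveProof) map[string][]passiveProof {
	m := make(map[string][]passiveProof)
	for _, p := range proofs {
		m[p.assetID] = append(m[p.assetID], p)
	}
	return m
}

func main() {
	idx := indexByAssetID([]passiveProof{
		{assetID: "asset1"},
		{assetID: "asset2"},
		{assetID: "asset1"},
	})
	fmt.Println(len(idx["asset1"]), len(idx["asset2"]))
}
```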
This commit allows the ExportProof and ProveAssetOwnership RPC methods to be called with an outpoint as well to disambiguate in case of multiple proofs with the same script key (e.g. multiple receives to the same TAP address).
We can now end up importing proofs that we already have (for example, when sending to our own TAP address using a universe courier, we pull the proof from the local universe and import it into our store again). Before turning this into an upsert, we would end up with an asset that had two identical entries in its previous witness list. To make sure the order of multiple witnesses is kept, we also add a witness_index field that we can use for sorting. Unfortunately, that breaks the data migration demo test, because the FetchAllAssets query also loads the witnesses, and that query uses the witness_index field, which doesn't exist at that point in the migration history. Since the current test is only a demo, we simply change it to fetch the (unchanged) managed UTXOs instead.
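Restoring the original order from the stored rows is then a plain sort on that index; a minimal sketch with a hypothetical row type:

```go
package main

import (
	"fmt"
	"sort"
)

// witnessRow is a hypothetical shape of a stored witness row, with the new
// witness_index column that preserves ordering across upserts.
type witnessRow struct {
	witnessIndex int
	stack        string
}

// sortWitnesses restores the on-chain witness order from the rows,
// regardless of the order the database returned them in.
func sortWitnesses(rows []witnessRow) {
	sort.Slice(rows, func(i, j int) bool {
		return rows[i].witnessIndex < rows[j].witnessIndex
	})
}

func main() {
	rows := []witnessRow{{2, "c"}, {0, "a"}, {1, "b"}}
	sortWitnesses(rows)
	for _, r := range rows {
		fmt.Print(r.stack)
	}
	fmt.Println()
}
```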
When we get an error while starting the custodian, we actually want to inspect that error. Otherwise we just get a timeout when listening for the subscription signals, which doesn't tell us what's wrong. This helped us debug an issue with a unit test in the previous commit.
This fixes a couple of instances where the porter used the wrong outpoint in proof locators when creating new proofs for a send package. This previously went unnoticed because the outpoint in the locator was ignored by both the file-based and the database archive. With the outpoint now being mandatory, this led to failures in the integration tests.
This fixes a bug reported by a user running v0.3.3-rc1. Although the situation can only happen if the daemon is shut down at exactly the wrong moment (or, more likely, due to an otherwise inconsistent database state), it is possible that the multiverse reports a proof as available while it wasn't yet imported into the local archive. To make sure we can rely on the proof being in the asset DB when completing a receive event, we double-check and re-import the proof if necessary.