chore: extend v1.7.2 transfer location migration #1858
Conversation
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

@@                 Coverage Diff                           @@
##    polkadot-v1.7.2-transfer-location    #1858    +/-   ##
=============================================================
- Coverage                       46.74%   46.55%    -0.20%
=============================================================
  Files                             166      167        +1
  Lines                           13044    13076       +32
=============================================================
- Hits                             6097     6087       -10
- Misses                           6947     6989       +42

☔ View full report in Codecov by Sentry.
But how can we know that we need to migrate or re-do them if we remove all of them? Just because the transfer filter is not filtering as expected?
Great improvement over the base migration @wischli! All cases are well tied 💯
    Self::migrate_location_key(transfer_address.clone(), hash),
)),
(None, old::RestrictedTransferLocation::Xcm(_)) => {
    log::warn!("{LOG_PREFIX} Account {account_id:?} missing in AccountMap despite old XCM location storage");
Great check!
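For readers skimming the diff, the decision logic above can be mocked in a few lines of std-only Rust. Everything here is a simplified stand-in (shortened account ids, a fake XCM payload), not the actual `RestrictedTransferLocation` types or pallet code — the point is only the shape of the match: entries with a mapped account are rewritten, XCM entries without a mapping are logged and dropped.

```rust
#[derive(Debug, PartialEq)]
enum OldLocation {
    Address([u8; 4]), // mock of an old address-based location
    Xcm(u32),         // mock stand-in for the old XCM multilocation
}

#[derive(Debug, PartialEq)]
enum NewLocation {
    Address([u8; 4]),
}

/// Mock of the per-entry migration decision: returning `None` means the
/// storage entry is removed instead of rewritten.
fn migrate_entry(mapped: Option<[u8; 4]>, old: OldLocation) -> Option<NewLocation> {
    match (mapped, old) {
        // Address entries migrate regardless of the account map.
        (_, OldLocation::Address(a)) => Some(NewLocation::Address(a)),
        // XCM entries with a mapping get rewritten to the mapped account.
        (Some(new_id), OldLocation::Xcm(_)) => Some(NewLocation::Address(new_id)),
        // No mapping for an XCM entry: warn and remove, as in the diff.
        (None, OldLocation::Xcm(_)) => {
            eprintln!("account missing in AccountMap despite old XCM location storage");
            None
        }
    }
}

fn main() {
    assert_eq!(
        migrate_entry(Some([1, 2, 3, 4]), OldLocation::Xcm(7)),
        Some(NewLocation::Address([1, 2, 3, 4]))
    );
    assert_eq!(migrate_entry(None, OldLocation::Xcm(7)), None);
}
```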
    currency_id,
    old_restricted_location,
// Leads to storage entry removal
// TODO: Discuss whether we are fine with removing such storage entries or whether we want to keep the old undecodable ones or maybe just use same account id per default instead of removing?
1. removing such storage entries
2. keep the old undecodable ones
3. use same account id per default instead of removing

I would vote for 1. and 2.
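For context on why the closure in the diff "leads to storage entry removal": with a `translate`-style storage pass, an entry is dropped whenever the closure returns `None`. A std-only mock of that semantics (a plain `HashMap` instead of FRAME storage, and a length check standing in for SCALE decoding — assumptions, not the pallet's actual code):

```rust
use std::collections::HashMap;

/// Mock of translate-style migration: entries for which `f` returns
/// `None` simply vanish from the result, which is how undecodable
/// values get removed.
fn translate_values<K, Old, New>(
    old: HashMap<K, Old>,
    f: impl Fn(Old) -> Option<New>,
) -> HashMap<K, New>
where
    K: Eq + std::hash::Hash,
{
    old.into_iter()
        .filter_map(|(k, v)| f(v).map(|nv| (k, nv)))
        .collect()
}

fn main() {
    let mut raw = HashMap::new();
    raw.insert("alice", vec![0u8; 32]); // "decodable": expected length
    raw.insert("bob", vec![0u8; 7]);    // "undecodable": wrong length

    // Mock decode: only 32-byte values count as valid locations.
    let migrated = translate_values(raw, |v: Vec<u8>| {
        if v.len() == 32 { Some(v) } else { None }
    });

    assert!(migrated.contains_key("alice"));
    assert!(!migrated.contains_key("bob")); // removed, matching option 1
}
```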
// 4dTeMxuPJCK7zQGhFcgCivSJqBs9Wo2SuMSQeYCCuVJ9xrE2 --> 5Fc9NzKzJZwZvgjQBmSKtvZmJ5oP6B49DFC5dXZhTETjrSzo
pub AccountMap: Vec<(AccountId, AccountId)> = vec![
    (
        AccountId::new(hex_literal::hex!("5dbb2cec05b6bda775f7945827b887b0e7b5245eae8b4ef266c60820c9377185")),
This account is 4dTeMxuPJCK7zQGhFcgCivSJqBs9Wo2SuMSQeYCCuVJ9xrE2 in hex, right?
Yes
❯ subkey inspect 4dTeMxuPJCK7zQGhFcgCivSJqBs9Wo2SuMSQeYCCuVJ9xrE2
Public Key URI `4dTeMxuPJCK7zQGhFcgCivSJqBs9Wo2SuMSQeYCCuVJ9xrE2` is account:
Network ID/Version: centrifuge
Public key (hex): 0x5dbb2cec05b6bda775f7945827b887b0e7b5245eae8b4ef266c60820c9377185
Account ID: 0x5dbb2cec05b6bda775f7945827b887b0e7b5245eae8b4ef266c60820c9377185
Public key (SS58): 4dTeMxuPJCK7zQGhFcgCivSJqBs9Wo2SuMSQeYCCuVJ9xrE2
I would say by looking at the logs at the time of the migration, which I always do for runtime upgrades. We also have access to historic storage by querying it at an older block hash in which the entry still existed. The only upside of keeping it is that it is easier to reconstruct and check against at the runtime level.
Ok, then we can remove them. I think the probability of having new entries is low.
You can merge it when you want 💯
Merged f457f22 into polkadot-v1.7.2-transfer-location
* migration for restricted transfer location
* add correct location for centrifuge
* minor changes
* fix benchmarks
* fix IT compilation
* chore: extend v1.7.2 transfer location migration (#1858)
* feat: add pre + post-upgrade
* feat: add nuking migration for dev and demo
* fix: remove account list for demo
* refactor: Use `VersionedLocation` in `RestrictedTransferLocation` (#1861)
* feat: add pre + post-upgrade
* feat: add nuking migration for dev and demo
* fix: remove account list for demo
* refactor: use VersionedLocation
* fix: transfer allowlist benchmarks + tests
* fix: clippy
* fix: benchmark cli
* fix: revert tight coupling
* fmt
* fix IT compilation

---------

Co-authored-by: William Freudenberger <[email protected]>
Description
Extends #1853 by
To be discussed
This PR is more aggressive with storage removal than the base one: it removes any old storage entry which cannot be decoded, whereas the base migration removes an entry only if it can be decoded and a new v4 location can be created.
My argument is that undecodable storage keys are always a pain to clean up. If new storage entries pop up between proposing the RU and executing it, we can either push a hotfix PR afterwards that migrates those keys, or communicate that the affected users need to re-do their transfer allowlist call manually.
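The "hotfix migrating those keys afterwards" path boils down to retrying the decode per key at a later point. A hypothetical std-only sketch of such a per-key retry (mock storage, a fake fixed-width decode, and invented names like `migrate_one` — none of this is the pallet's actual API):

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum MigrateError {
    UnknownKey,
    StillUndecodable,
}

/// Try to re-decode a single raw entry into the new (mock) 4-byte format
/// and rewrite it in place; leave it untouched and report otherwise.
fn migrate_one(
    storage: &mut HashMap<String, Vec<u8>>,
    key: &str,
) -> Result<[u8; 4], MigrateError> {
    let raw = storage.get(key).ok_or(MigrateError::UnknownKey)?;
    let decoded: [u8; 4] = raw
        .as_slice()
        .try_into()
        .map_err(|_| MigrateError::StillUndecodable)?;
    // Rewrite in the (mock) new encoding; here it is byte-identical.
    storage.insert(key.to_string(), decoded.to_vec());
    Ok(decoded)
}

fn main() {
    let mut storage = HashMap::new();
    storage.insert("good".to_string(), vec![1, 2, 3, 4]);
    storage.insert("bad".to_string(), vec![1, 2, 3]); // wrong length

    assert_eq!(migrate_one(&mut storage, "good"), Ok([1, 2, 3, 4]));
    assert_eq!(migrate_one(&mut storage, "bad"), Err(MigrateError::StillUndecodable));
    assert_eq!(migrate_one(&mut storage, "gone"), Err(MigrateError::UnknownKey));
    // The undecodable entry is kept for a later retry, unlike the
    // removal-based approach this PR takes.
    assert!(storage.contains_key("bad"));
}
```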
Another option would be exposing a `migrate` extrinsic in the `pallet-transfer-allowlist` for undecodable keys and keeping them in storage.

Checklist: