Double mnemonic #432
Conversation
Codecov Report
Attention: Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           develop     #432      +/-   ##
===========================================
+ Coverage    94.15%   94.20%   +0.04%
===========================================
  Files           58       59       +1
  Lines         7189     7264      +75
===========================================
+ Hits          6769     6843      +74
- Misses         420      421       +1
```

☔ View full report in Codecov by Sentry.
Before we proceed with this PR, I would like to see more detailed context, including a clear definition of the feature and of the term 'double mnemonic', as well as the goals and requirements being addressed.
Pr 432 bip39accumulator
…ouble-mnemonic-318
Finished this PR for review!
Pr 432 tests and fixes from @jdlcdl !
…ouble-mnemonic-318
As of the latest optimization, uncontrolled testing (using an Amigo, running double-mnemonic generation again and again while watching the USB console) shows that a try takes just under 6 ms. My testing over many variations of this PR found 77 double mnemonics, which indicates a double was found roughly once in 246 tries. Is my math correct that the expected probability is 1/256? In Tadeu's implementation here, a try is done already knowing the first and second valid 12-word mnemonics, so putting them together into a 24-word mnemonic will have a valid last word 1 in 256 times, just as any other 23 words plus one more word have the same probability of being valid.

Nice job on implementation, documentation, and automated tests!
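Rough sanity check on that 1/256 figure: the last word of a 24-word BIP39 mnemonic carries 3 entropy bits plus an 8-bit checksum, so once two valid 12-word halves are fixed, their concatenation passes the 24-word checksum with probability 2^-8 = 1/256. Finding 77 doubles at roughly one per 246 tries is consistent with that.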
Answering my second question above with brute-force -- might add "and ignorance" ;)

```python
import random

from embit import bip39

tries = 0
found = 0
while tries < 2**24:
    tries += 1
    # draw 24 random words and check all three BIP39 checksums:
    # first 12 words, last 12 words, and the full 24 words
    words24 = random.choices(bip39.WORDLIST, k=24)
    if (
        bip39.mnemonic_is_valid(" ".join(words24[:12]))
        and bip39.mnemonic_is_valid(" ".join(words24[12:]))
        and bip39.mnemonic_is_valid(" ".join(words24))
    ):
        print(f"double: \"{' '.join(words24)}\" found on try: {tries}")
        found += 1

if found:
    print(f"found {found} / tried {tries}: 1/{tries/found}")
else:
    print(f"found none, tried {tries}")
```

Which reported:
...while I was expecting to find 254; it's "in the ballpark" to me.
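For reference, a rough expectation for that run, assuming the three checksum events are approximately independent: each 12-word half passes its 4-bit checksum with probability 1/16, and the full 24 words pass their 8-bit checksum with probability 1/256, so a uniformly random 24-word draw is a double mnemonic about once in 2^16 tries.

```python
# Expected number of doubles over the 2**24 random draws in the script above,
# treating the three checksum events as independent.
p_half = 1 / 2**4   # 4-bit checksum on each 12-word half
p_full = 1 / 2**8   # 8-bit checksum on the full 24-word mnemonic
print(2**24 * p_half * p_half * p_full)  # 256.0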
Can't believe you've used the embit version to do this brute-force 🤣
src/krux/input.py
Outdated
Please let "validate_position" default to True; this is the default behavior.
Great work!
Thank you!
Storing notes about my findings here.

I had originally considered submitting a PR to embit with these changes (even though I find it arguably scandalous for an anonymous contributor to rewrite such a well-reviewed, tested-in-production, stable function from a well-respected developer), but I needed testing/measuring of the performance differences on at least a small set of hardware devices.

There are two major differences in the implementation of krux.bip39.mnemonic_to_bytes() compared to embit's. The rest of the changes in the krux.bip39.mnemonic_to_bytes() implementation might gain an extra 2-8% increase in performance, but not more than that on any of the devices I tested. Because embit is geared towards limited-resource micro-controllers, I plan only to submit a "safe" pull request to embit which addresses the first performance boost above.

But I have good news: both mnemonic_is_valid() and mnemonic_to_bytes() accept a wordlist argument, so a faster word lookup can be passed in without changing embit at all:

```python
from embit import bip39


# this dict subclass is more efficient than calling list.index(word):
# a dict lookup is O(1), while list.index() scans the 2048-word list
class WordIndex(dict):
    def index(self, word):
        return self[word]


# if the device has enough memory and is brute-forcing, use a WordIndex object
wordindex = WordIndex({word: i for i, word in enumerate(bip39.WORDLIST)})

mnemonic = " ".join(["abandon"] * 11 + ["about"])  # any valid BIP39 mnemonic
bip39.mnemonic_is_valid(mnemonic, wordlist=wordindex)
bip39.mnemonic_to_bytes(mnemonic, wordlist=wordindex)
```

Notes related to my testing are included below:
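As a rough illustration (a sketch on desktop Python, not the testing notes mentioned above) of why the dict-backed index helps: list.index() scans up to all 2048 words on each call, while a dict lookup is a single hash probe.

```python
import timeit

from embit import bip39

# O(1) dict lookup vs. O(n) linear scan over the 2048-word list
wordindex = {word: i for i, word in enumerate(bip39.WORDLIST)}
word = bip39.WORDLIST[-1]  # last word: worst case for the linear scan

list_time = timeit.timeit(lambda: bip39.WORDLIST.index(word), number=10_000)
dict_time = timeit.timeit(lambda: wordindex[word], number=10_000)
print(f"list.index: {list_time:.4f}s  dict lookup: {dict_time:.4f}s")
```

Absolute numbers on the embedded devices will differ, but the ratio is what makes a WordIndex worthwhile when brute-forcing.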
Since I still believe the rewrite is scandalous and heresy, I'll plan instead to PR the "safe" change.
Most of the touch-handling code requires a valid position. Failing to provide one can result in unpredictable behavior, which is why I consider position validation to be the default behavior.
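A minimal sketch of that default, using a made-up touch handler rather than krux's actual input code: validation is on by default, and callers must opt out explicitly.

```python
# Hypothetical names for illustration only; not krux's actual API.
SCREEN_WIDTH, SCREEN_HEIGHT = 480, 320


def read_raw_touch():
    """Placeholder for the real touch-driver read."""
    return 100, 200


def current_touch_point(validate_position=True):
    """Return an (x, y) touch point, rejecting out-of-bounds coordinates by default."""
    x, y = read_raw_touch()
    if validate_position and not (0 <= x < SCREEN_WIDTH and 0 <= y < SCREEN_HEIGHT):
        return None  # an invalid position is treated as "no touch"
    return x, y
```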
Restored the previous behavior in input.
Thank you!
PR to fulfill this issue: #318
What is the purpose of this pull request?