
NUT-13 specify keyset ID integer size: 32 bits #189

Open · vnprc wants to merge 1 commit into main
Conversation

@vnprc commented Nov 12, 2024

Is the intention of this integer representation of the keyset ID to fit it into 32 bits? I ran into this implementation in cdk that produces a u64 output, which seems wrong to me.

https://github.com/cashubtc/cdk/blob/main/crates/cdk/src/nuts/nut02.rs#L117

```rust
impl TryFrom<Id> for u64 {
    type Error = Error;
    fn try_from(value: Id) -> Result<Self, Self::Error> {
        let hex_bytes: [u8; 8] = value.to_bytes().try_into().map_err(|_| Error::Length)?;

        let int = u64::from_be_bytes(hex_bytes);

        Ok(int % (2_u64.pow(31) - 1))
    }
}
```

I am opening this PR to get some clarity.
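For contrast, a minimal sketch of the same reduction with the 32-bit bound made explicit in the return type (the function name, the bare byte-array input, and the example bytes are illustrative, not cdk's actual API):

```rust
/// Illustrative only: compute keyset_id_int from the 8 keyset-ID bytes.
/// The value is reduced modulo 2^31 - 1, so it always fits in a u32.
fn keyset_id_int(keyset_id_bytes: [u8; 8]) -> u32 {
    let int = u64::from_be_bytes(keyset_id_bytes);
    (int % ((1u64 << 31) - 1)) as u32
}

fn main() {
    let id_bytes = [0x00, 0xAB, 0xCD, 0xEF, 0x12, 0x34, 0x56, 0x78]; // made-up keyset ID
    println!("keyset_id_int = {}", keyset_id_int(id_bytes));
}
```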

```diff
@@ -42,7 +42,7 @@ The wallet starts with `counter_k := 0` upon encountering a new keyset and incre
 
 #### Keyset ID
 
-The integer representation `keyset_id_int` of a keyset is calculated from its [hexadecimal ID][02] which has a length of 8 bytes or 16 hex characters. First, we convert the hex string to a big-endian sequence of bytes. This value is then modulo reduced by `2^31 - 1` to arrive at an integer that is a unique identifier `keyset_id_int`.
+The 32 bit integer representation `keyset_id_int` of a keyset is calculated from its [hexadecimal ID][02] which has a length of 8 bytes or 16 hex characters. First, we convert the hex string to a big-endian sequence of bytes. This value is then modulo reduced by `2^31 - 1` to arrive at an integer that is a unique identifier `keyset_id_int`.
```
A contributor commented on the diff:
I am noticing something else here that is not per se an issue, but it's awkward: why reduce by 2^31-1 instead of 2^31? I think I know why:

q % 2^31 == q & (2^31-1). Normally, when reducing modulo a power of 2, you can skip the division and just use a mask to get the desired bits. Whoever wrote this first must have confused the two or made a typo.

We can't change this back without breaking the protocol, but I thought it was funny.
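A quick sketch of the distinction (illustrative, not from the thread):

```rust
fn main() {
    let q: u64 = 1 << 31; // 2^31
    // Reducing modulo a power of two is just a bit mask:
    assert_eq!(q % (1u64 << 31), q & ((1u64 << 31) - 1)); // both are 0
    // Reducing modulo 2^31 - 1 (what the spec actually says) is not:
    assert_eq!(q % ((1u64 << 31) - 1), 1); // 2^31 = (2^31 - 1) + 1
}
```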

@callebtc (Contributor) commented Dec 3, 2024

> Each extended key has 2^31 normal child keys, and 2^31 hardened child keys. Each of these child keys has an index. The normal child keys use indices 0 through 2^31-1. The hardened child keys use indices 2^31 through 2^32-1.

https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki
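In other words, the reduction modulo 2^31 - 1 keeps the result inside the normal (non-hardened) index range. A minimal sketch of that invariant, with illustrative names:

```rust
/// BIP32 normal (non-hardened) child indices run from 0 through 2^31 - 1.
fn is_normal_child_index(index: u64) -> bool {
    index < (1u64 << 31)
}

fn main() {
    // Even the largest possible 8-byte keyset ID reduces into range:
    let keyset_id_int = u64::MAX % ((1u64 << 31) - 1);
    assert!(is_normal_child_index(keyset_id_int));
}
```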

@prusnak (Collaborator) commented Nov 12, 2024

NACK. This is implementation-specific (some languages do not have a 32-bit int), and the intention is obvious from the provided examples.

@vnprc (Author) commented Nov 12, 2024

I disagree that it's obvious. I opened this PR because I found the language confusing. It seems to assume the term 'integer' means a 32-bit number. The Python example is clear to me, but the JavaScript example uses BigInt() twice to arrive at a number that fits into a regular int.

I think the language does a good job explaining that the input is 8 bytes or 16 hex chars, but it does not explicitly state the size of the output. The size of the container of the function output is an implementation-specific detail, but it would be helpful to explain that the output of this function fits into 32 bits, or 4 bytes, or 8 hex chars.

@prusnak would this language be better?

> The integer representation keyset_id_int of a keyset is calculated from its hexadecimal ID which has a length of 8 bytes or 16 hex characters. First, we convert the hex string to a big-endian sequence of bytes. This value is then modulo reduced by 2^31 - 1 to arrive at a unique identifier keyset_id_int that can be stored in 4 bytes or 8 hex chars.
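A sketch of that end-to-end calculation; the helper name is made up and error handling is kept minimal:

```rust
/// Illustrative only: derive keyset_id_int from the 16-hex-char keyset ID.
fn keyset_id_to_int(hex_id: &str) -> Option<u64> {
    if hex_id.len() != 16 {
        return None; // keyset IDs are exactly 8 bytes / 16 hex characters
    }
    // Parsing the hex string as a number matches the big-endian byte order.
    let int = u64::from_str_radix(hex_id, 16).ok()?;
    // The reduced value is < 2^31 - 1, so it fits in 4 bytes / 8 hex chars.
    Some(int % ((1u64 << 31) - 1))
}

fn main() {
    // Made-up 16-hex-char ID, not a real keyset:
    assert!(keyset_id_to_int("00abcdef12345678").is_some());
    assert!(keyset_id_to_int("too_short").is_none());
}
```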

@clarkmoody commented
I like the idea of specifying that the keyset ID should be able to fit into a 32-bit integer. Maybe the spec should just constrain the range of valid values?

@prusnak (Collaborator) commented Nov 12, 2024

> ... that can be stored in 4 bytes or 8 hex chars.

The value is not stored anywhere; it is just used as an input to the BIP32 child key derivation (CKD) function. Therefore I find it irrelevant whether the value is stored in a 32-bit int, a 64-bit int, 4 bytes, etc., because all that matters is the data type that CKD expects (and yeah, in statically typed languages it is usually uint32, but it can be literally any int type).
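To illustrate, a sketch with a hypothetical CKD signature (real BIP32 libraries differ; the stub only stands in for the index-typing question):

```rust
/// Hypothetical stand-in for a BIP32 child-key-derivation function;
/// statically typed libraries usually take the index as a u32.
fn ckd(_parent_key: &[u8], index: u32) -> u32 {
    index // stub: a real implementation would derive a child key here
}

fn main() {
    // However the intermediate value is held (u64 here), the reduction
    // modulo 2^31 - 1 makes the cast to u32 lossless:
    let keyset_id_int: u64 = u64::MAX % ((1u64 << 31) - 1);
    let index = u32::try_from(keyset_id_int).expect("always fits after reduction");
    let _child = ckd(b"parent", index);
}
```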

@vnprc (Author) commented Nov 12, 2024

Ok, sure. My goal in suggesting this change is to make it clearer to devs implementing this spec.

@callebtc (Contributor) commented Dec 3, 2024

I agree with most of what has been said here, so I'm undecided. Feel free to chime in; if there are more ACKs than NACKs, let's merge this.

@callebtc callebtc changed the title nut13 specify keyset ID integer size: 32 bits NUT-13 specify keyset ID integer size: 32 bits Dec 3, 2024
@callebtc callebtc added the needs discussion Needs more discussion label Dec 3, 2024
Labels: needs discussion

5 participants