Block transform stores more data than needed #11
Ah, no, it seems that upon decoding, the very last block has to be kept around until the stream finishes, so that unpadding can be applied. We can only decide whether a block is the last one when either more data arrives or the stream is finished, and we must not unpad blocks before the end. Still, just `const remain = (data.length % blockSize) || blockSize;` should suffice, i.e. without the extra `blockSize +` term.
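To make that concrete, here is a minimal sketch of such a decrypting block transform (hypothetical code, not cryptian's actual implementation; the `decrypter.transform(buffer)` and `padder.unpad(buffer)` methods are assumptions for illustration):

```js
const {Transform} = require('node:stream');

// A decrypting block transform must hold the most recent block in _tail,
// because only _flush() knows that it was the final block and therefore
// the one that needs unpadding.
class DecryptBlock extends Transform {
    constructor(decrypter, padder, blockSize) {
        super();
        this._decrypter = decrypter; // assumed API: transform(buffer) -> buffer
        this._padder = padder;       // assumed API: unpad(buffer) -> buffer
        this._blockSize = blockSize;
        this._tail = Buffer.alloc(0);
    }

    _transform(data, encoding, callback) {
        data = Buffer.concat([this._tail, data]);
        // Keep exactly one block (or the unaligned remainder): if the data
        // is block-aligned, the last block might end the stream and must
        // not be emitted before unpadding.
        const remain = (data.length % this._blockSize) || this._blockSize;
        const align = data.length > remain ? data.length - remain : 0;
        this._tail = data.slice(align);
        if (align > 0) {
            this.push(this._decrypter.transform(data.slice(0, align)));
        }
        callback();
    }

    _flush(callback) {
        // Only now is _tail known to be the last block: decrypt and unpad it.
        callback(null, this._padder.unpad(this._decrypter.transform(this._tail)));
    }
}
```

With the `blockSize +` variant, `_tail` would instead hold between one and two blocks' worth of data, which is where the extra latency comes from.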
Padding is required in block encryption, even when the data perfectly fits the block size, because padding schemes ensure that decryption can distinguish between data and padding. Without padding, there would be ambiguity during decryption, especially if the data length matches the block size exactly. The padding bytes indicate how much padding to remove, making the decryption process reliable. If you still have concerns about this issue, please write a test case to reproduce and verify the condition. The test should simulate the specific state or condition and confirm whether the issue occurs as described. This will help determine if the error is legitimate and whether further action is needed.
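As an illustration, here is a hand-rolled sketch of the ISO 7816-4 scheme (a 0x80 marker byte followed by zero bytes), which is presumably what `padding.Iso7816` in the examples below implements; this is not cryptian's code:

```js
const BLOCK = 8;

function pad(data) {
    // Always pad, even when data.length is a multiple of BLOCK; otherwise
    // a plaintext that happens to end in 0x80 0x00... would be truncated
    // by unpadding.
    const padding = Buffer.alloc(BLOCK - (data.length % BLOCK));
    padding[0] = 0x80;
    return Buffer.concat([data, padding]);
}

function unpad(data) {
    // Strip trailing zeros, then the 0x80 marker.
    let end = data.length - 1;
    while (end >= 0 && data[end] === 0x00) end--;
    if (end < 0 || data[end] !== 0x80) throw new Error('bad padding');
    return data.slice(0, end);
}

// A block-aligned plaintext gains one full padding block:
console.log(pad(Buffer.alloc(8, 1)).length);         // 16
console.log(unpad(pad(Buffer.alloc(8, 1))).length);  // 8
```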
EDIT: Fixed a typo in the first version of the script.

Sorry, it appears I wasn't quite clear about what my concern is. It's not about correctness per se, but about efficiency in a certain sense. So it's not an error. Let me try to give a small example:

```js
const {Transform} = require('node:stream');
const {default: {algorithm, mode}, padding, createDecryptStream, createEncryptStream} = require('cryptian');

const des = new algorithm.Des();
des.setKey(Buffer.alloc(8));

const encrypter = new mode.cbc.Cipher(des, Buffer.alloc(8));
const decrypter = new mode.cbc.Decipher(des, Buffer.alloc(8));

const encryptStream = createEncryptStream(encrypter, padding.Iso7816);
const decryptStream = createDecryptStream(decrypter, padding.Iso7816);

encryptStream.pipe(
    new Transform({
        transform(chunk, encoding, callback) {
            console.log('encrypted: ', chunk);
            callback(null, chunk);
        },
    }),
).pipe(decryptStream);

decryptStream.on('data', (chunk) => {
    console.log('decrypted: ', chunk);
});

decryptStream.on('finish', () => {
    console.log('finished');
});

console.log('Encrypting buffer 1');
encryptStream.write(Buffer.alloc(8, 0));
console.log('Encrypting buffer 2');
encryptStream.write(Buffer.alloc(8, 1));
console.log('Encrypting buffer 3');
encryptStream.write(Buffer.alloc(8, 2));
console.log('Encrypting buffer 4');
encryptStream.write(Buffer.alloc(8, 3));
console.log('Encrypting buffer 5');
encryptStream.write(Buffer.alloc(8, 4));

console.log('\nEnding encryption');
encryptStream.end();
```

This gives the following output:
Now with:

```diff
--- a/src/transform/block.ts
+++ b/src/transform/block.ts
@@ -30,7 +30,7 @@ export class Block extends Transform {
         data = Buffer.concat([this._tail, data]);
 
-        const remain = blockSize + ((data.length % blockSize) || blockSize);
+        const remain = (data.length % blockSize) || blockSize;
 
         const align = data.length > remain ? data.length - remain : 0;
 
         this._tail = data.slice(align);
```

the output becomes:
Note that the output data is exactly the same, but it arrives earlier, in some sense of the word. So IMHO, with that patch, the amount of data in flight is reduced while maintaining correctness, at least as far as I can tell. In fact, I think one could use `const remain = n*blockSize + ((data.length % blockSize) || blockSize);` for any non-negative integer `n`.
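A quick way to see the difference (a hedged sketch using only the two formulas above, with `blockSize = 8`): compute, for a few accumulated lengths, how many bytes each variant processes now (`align`) versus holds back (`remain`):

```js
const blockSize = 8;

const remainOld = (len) => blockSize + ((len % blockSize) || blockSize); // current code
const remainNew = (len) => (len % blockSize) || blockSize;               // proposed patch
// The generalization mentioned above: n = 1 gives remainOld, n = 0 remainNew.
const remainGen = (n) => (len) => n * blockSize + ((len % blockSize) || blockSize);

for (const len of [8, 16, 20, 40]) {
    const align = (remain) => (len > remain ? len - remain : 0);
    console.log(len, align(remainOld(len)), align(remainNew(len)));
}
// len  alignOld  alignNew
//  8       0        0     (a lone block must be held back either way)
// 16       0        8     (the patch emits one block earlier)
// 20       8       16
// 40      24       32
```

In every case `align` stays a multiple of `blockSize` and at least one block is retained, so correctness is preserved; any `n > 0` only adds a fixed extra block of latency.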
I think the problem is solved by the referenced branch. Block shifting is only necessary on the decrypt stream, and I separated the implementation of block shifting.

I may not fully trust padding algorithms when the padding input is an empty buffer, but they work fine in this case, because I wrote unit tests to cover this condition.
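A test along these lines could pin down the empty-buffer behaviour (a sketch against the hand-rolled ISO 7816-4 helpers from earlier in this thread, not against cryptian's actual padding API):

```js
const assert = require('node:assert');
const BLOCK = 8;

// Illustrative ISO 7816-4 pad/unpad, same scheme as the sketch above.
function pad(data) {
    const padding = Buffer.alloc(BLOCK - (data.length % BLOCK));
    padding[0] = 0x80;
    return Buffer.concat([data, padding]);
}
function unpad(data) {
    let end = data.length - 1;
    while (end >= 0 && data[end] === 0x00) end--;
    assert(end >= 0 && data[end] === 0x80, 'bad padding');
    return data.slice(0, end);
}

const empty = Buffer.alloc(0);
// Padding an empty buffer must yield exactly one full padding block...
assert.strictEqual(pad(empty).length, BLOCK);
assert.strictEqual(pad(empty)[0], 0x80);
// ...and unpadding must round-trip back to the empty buffer.
assert.deepStrictEqual(unpad(pad(empty)), empty);
console.log('empty-buffer padding round-trips');
```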
Yes, that's exactly what I had been thinking of! 👍
That's a valid reason indeed 😄

Thinking about this some more: for the null and space padding, an empty last buffer could result either in padding with a full block of nulls (or spaces), or in no padding at all. If I read the code correctly, you're doing the former. Intuitively, I would have expected the latter, but there may be different trade-offs involved. So I'm not necessarily suggesting to change this; I just meant to point out that this case might be worth thinking about. And anyway, it's related but orthogonal to this issue.

Similarly, for null and space padding, output could be pushed more aggressively when decrypting, as non-null (or non-space) data cannot, by definition, be part of the padding. So it would not be necessary to hold on to the last block if the corresponding plaintext doesn't end in a null (or space) byte. But I'm not sure it's worth it, as that sounds like it would need some larger changes for very limited profit.
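The early-push heuristic could be as simple as this (a rough sketch of the idea, not something the library does): a decrypted block whose final byte is not the pad byte cannot contain any trailing padding, so it could be emitted immediately instead of being held until the next chunk or the final flush:

```js
// Assumed: 0x00 is the pad byte (null padding); use 0x20 for space padding.
function canFlushEarly(plainBlock) {
    // Padding consists only of trailing pad bytes, so a block ending in a
    // non-pad byte cannot hold padding that would need stripping later.
    return plainBlock.length > 0 && plainBlock[plainBlock.length - 1] !== 0x00;
}

console.log(canFlushEarly(Buffer.from('abcdefgh')));     // true:  safe to push now
console.log(canFlushEarly(Buffer.from('abcd\0\0\0\0'))); // false: hold until stream end
```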
I merged the branch and published to NPM. Your advice could be implemented as an option on the spaces and nulls padding modules. Completely skipping the padding for an empty buffer is not a good idea, because some implementations may expect the stream to always contain at least one null or space byte, and could consider the data broken if there is none. The best approach may be written down in some RFC, but I couldn't find any information in a quick search.
Consider cryptian/src/transform/block.ts, lines 33 to 34 at 1393bc5:

```ts
const remain = blockSize + ((data.length % blockSize) || blockSize);
const align = data.length > remain ? data.length - remain : 0;
```

Here `remain` is the amount of data to store and combine with the next chunk once it arrives, and `align` is the amount of data to process in this `_transform` call. I don't see why it should ever be necessary to store more than one `blockSize` for later. Doesn't it suffice to do `const remain = (data.length % blockSize) || blockSize;`? Not that the current code is wrong - after the final flush, everything sorts out - but it adds unnecessary latency, which may prevent this from working in a duplex communication scenario. (I.e., I'm waiting to receive an answer to my last request before sending the next request, but my last request or the last answer is "stuck" in the transform.)