Node from stream backrefs optimisation #532
base: main
Conversation
Pull Request Test Coverage Report for Build 13076959208 (Coveralls)
it looks correct, as far as I can tell. I think we need tests for all interesting cases, to make sure it works. I'm also interested in seeing a benchmark. Does this make a difference? I would expect it to at least use less memory, which typically means faster on small machines (like Raspberry PI)
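For the benchmark, a minimal criterion sketch along these lines could work (the file name, benchmark name, and corpus path here are assumptions, not part of this PR):

```rust
// hypothetical benches/deserialize_backrefs.rs
use clvmr::serde::node_from_bytes_backrefs;
use clvmr::Allocator;
use criterion::{criterion_group, criterion_main, Criterion};

fn deserialize_backrefs(c: &mut Criterion) {
    // assumed input: any serialized CLVM program that uses back references
    let bytes = std::fs::read("fuzz/corpus/deserialize_br/sample.bin").expect("corpus file");
    c.bench_function("node_from_bytes_backrefs", |b| {
        b.iter(|| {
            let mut allocator = Allocator::new();
            node_from_bytes_backrefs(&mut allocator, &bytes).expect("valid serialization")
        })
    });
}

criterion_group!(benches, deserialize_backrefs);
criterion_main!(benches);
```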
src/traverse_path.rs
Outdated
@@ -72,6 +72,83 @@ pub fn traverse_path(allocator: &Allocator, node_index: &[u8], args: NodePtr) ->
    Ok(Reduction(cost, arg_list))
}

pub fn traverse_path_with_vec(
it would be good to have unit tests for this function
src/serde/de_br.rs
Outdated
@@ -22,7 +22,7 @@ pub fn node_from_stream_backrefs(
    f: &mut Cursor<&[u8]>,
    mut backref_callback: impl FnMut(NodePtr),
) -> io::Result<NodePtr> {
-   let mut values = allocator.nil();
+   let mut values = Vec::<NodePtr>::new();
one idea I had was that you could make this Vec<(NodePtr, Option<NodePtr>)>, where the optional NodePtr is a cache of nodes you've created for this stack "link", in case there are multiple references to the same one.
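A rough sketch of that idea (names and error handling here are illustrative, not the PR's actual code): each stack entry carries the parsed node plus a lazily materialized pair for "this node consed onto the rest of the stack", so repeated back references to the same link reuse one allocation:

```rust
use clvmr::{Allocator, NodePtr};
use std::io;

// parsed node, plus a cached pair for "this entry . rest-of-stack"
type StackEntry = (NodePtr, Option<NodePtr>);

// Materialize the stack from position `i` downward as a proper CLVM list
// (the top of the stack is the head of the list), reusing cached links.
fn stack_link(
    allocator: &mut Allocator,
    stack: &mut [StackEntry],
    i: usize,
) -> io::Result<NodePtr> {
    if let Some(cached) = stack[i].1 {
        return Ok(cached);
    }
    let rest = if i == 0 {
        NodePtr::NIL
    } else {
        stack_link(allocator, stack, i - 1)?
    };
    let pair = allocator
        .new_pair(stack[i].0, rest)
        .map_err(|_| io::Error::new(io::ErrorKind::Other, "allocator limit reached"))?;
    stack[i].1 = Some(pair);
    Ok(pair)
}
```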
I think this is required to avoid consensus problems. Consider the worst case scenario where you have a tree that's then referenced thousands of times. With today's deserializer, they will point to the same tree, but in your version, you'll create the same pairs over and over again. In fact, I would expect the fuzzer to fail if you let it run long enough because of this.
src/traverse_path.rs
Outdated
// find first non-zero byte
let first_bit_byte_index = first_non_zero(node_index);

let mut cost: Cost = TRAVERSE_BASE_COST
this version doesn't need to track cost, I don't think. In fact, I think this is sufficiently different (and specialized) that it makes sense to move it into the de_br.rs file.
Done
src/traverse_path.rs
Outdated
let mut bitmask = 0x01;

// if we move from parsing the Vec stack to parsing the SExp stack use the following variables
let mut parsing_sexp = false;
it might be simpler to have a separate loop in the beginning that just reads 1-bits (we're still on the Vec-stack), until it hits a 0-bit (we select a stack item), and then moves into the next loop that only considers nodes in the allocator.
Given the change I had to make for parsing in the case of the stack being empty, I don't believe we should make this change
src/traverse_path.rs
Outdated
) -> Response {
    // the vec is a stack so a ChiaLisp list of (3 . (2 . (1 . NIL))) would be [1, 2, 3]
    // however entries in this vec may be ChiaLisp SExps so it may look more like [1, (2 . NIL), 3]
    let mut arg_list: Vec<NodePtr> = args.to_owned();
it would be more efficient to just keep an index into args, rather than cloning it
Done
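Roughly, the index-based version looks like this (a simplified sketch; bit handling and names are illustrative):

```rust
use clvmr::NodePtr;

// Instead of cloning the stack (args.to_owned()), keep an index that starts one
// past the top and walks down as "rest" bits are consumed. A "first" bit then
// selects args[idx - 1]. `false` = first, `true` = rest, in the order the path
// bits are consumed.
fn select_from_stack(args: &[NodePtr], path_bits: &[bool]) -> Option<NodePtr> {
    let mut idx = args.len();
    for &bit in path_bits {
        if bit {
            // "rest": conceptually pop the top of the stack
            idx = idx.checked_sub(1)?;
        } else {
            // "first": the top of what remains of the stack; the rest of the
            // path then continues into this node via the allocator
            return args.get(idx.checked_sub(1)?).copied();
        }
    }
    // path ended while still pointing at the stack itself; the real function
    // has to materialize the remaining stack as pairs in this case
    None
}
```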
I think these things are still needed:
- preserve the existing function, partly to control when we switch over to the new one, and also to be able to test that both behave the same
- ensure the new function produces the same result as the old one, e.g. with a fuzzer (see the sketch after this list)
- ensure the new function behaves the same with regards to limits on the number of pairs created by Allocator. It can be tested in a fuzzer by building with the counters build feature
- benchmark to demonstrate that this is an improvement (this should probably be done early, as we might want to scrap this idea if it doesn't carry its weight)
- survey the mainnet and testnet blockchains to see if back references into the parse-stack ever exist in the wild
- unit tests for all edge cases
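For the equivalence bullet, a sketch of the differential check (the cross-allocator comparison helper here is hand-rolled and hypothetical; the shared fuzz helper node_eq mentioned below may be preferable):

```rust
use clvmr::{Allocator, NodePtr, SExp};

// Structural equality across two allocators (iterative, to cope with deep trees).
fn trees_equal(a1: &Allocator, n1: NodePtr, a2: &Allocator, n2: NodePtr) -> bool {
    let mut stack = vec![(n1, n2)];
    while let Some((x, y)) = stack.pop() {
        match (a1.sexp(x), a2.sexp(y)) {
            (SExp::Atom, SExp::Atom) => {
                if a1.atom(x).as_ref() != a2.atom(y).as_ref() {
                    return false;
                }
            }
            (SExp::Pair(xf, xr), SExp::Pair(yf, yr)) => {
                stack.push((xf, yf));
                stack.push((xr, yr));
            }
            _ => return false,
        }
    }
    true
}

// In the fuzz target, assuming the old implementation is kept around as
// node_from_bytes_backrefs_old (as in the fuzz target discussed below):
//
// let mut a_new = Allocator::new();
// let mut a_old = Allocator::new();
// match (
//     node_from_bytes_backrefs(&mut a_new, data),
//     node_from_bytes_backrefs_old(&mut a_old, data),
// ) {
//     (Ok(n), Ok(o)) => assert!(trees_equal(&a_new, n, &a_old, o)),
//     (Err(_), Err(_)) => {}
//     _ => panic!("old and new deserializers disagree"),
// }
```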
@@ -83,6 +129,86 @@ pub fn node_from_bytes_backrefs_record(
    Ok((ret, backrefs))
}

pub fn traverse_path_with_vec(
this is a pretty big function for not having a unit test.
Added a unit test
.github/workflows/build-test.yml
Outdated
@@ -212,7 +212,7 @@ jobs:
          cd tools
          cargo run --bin generate-fuzz-corpus
      - name: build
-       run: cargo fuzz list | xargs -I "%" sh -c "cargo fuzz run % -- -max_total_time=30 || exit 255"
+       run: cargo fuzz list | xargs -I "%" sh -c "cargo fuzz run % -- -max_total_time=30 || exit 255 --all-features"
If you make the fuzzer depend on clvmr/counters I don't think you would need this
fuzz/fuzz_targets/deserialize_br.rs
Outdated
    Ok(r) => r,
};

let b1 = node_to_bytes_backrefs(&allocator, program).unwrap();
I don't think you need this step. Also, you may lose interesting cases by "filtering" the tree when you re-encode it. It won't necessarily round-trip. Is there a good reason to do this rather than using data directly?
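i.e. a minimal sketch of feeding the fuzz input straight in, without the re-encode step (illustrative; the real target does more than this):

```rust
// fuzz/fuzz_targets/deserialize_br.rs (sketch)
#![no_main]
use clvmr::serde::node_from_bytes_backrefs;
use clvmr::Allocator;
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    let mut allocator = Allocator::new();
    // invalid serializations are expected from the fuzzer and simply rejected
    let _ = node_from_bytes_backrefs(&mut allocator, data);
});
```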
fuzz/fuzz_targets/deserialize_br.rs
Outdated
let mut allocator = Allocator::new();
let mut allocator_old = Allocator::new();

let mut allocator = Allocator::new();
this line looks like a mistake
fixed
fuzz/fuzz_targets/deserialize_br.rs
Outdated
let mut allocator = Allocator::new();
let program = node_from_bytes_backrefs(&mut allocator, &b1).unwrap();
let program_old = node_from_bytes_backrefs_old(&mut allocator_old, &b1).unwrap();
assert!(allocator.pair_count() <= allocator_old.pair_count());
Another important test (that I believe would fail right now) is to deserialize bytes again, using both old and new functions and an allocator that has a pair-limit of allocator_old.pair_count() - 1, i.e. it should run out of memory, and so should the new function. Otherwise consensus rules will have changed, and we have a potential for a chain split.
I believe the simplest way to implement this is to have a function on Allocator to create a "fake" pair. It just needs to decrement its limit. It also means checkpoint() and restore_checkpoint() would need to track and update this limit.
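A sketch of that check (heavily hedged: the pair-limited allocator and the old-function placeholder below are hypothetical, since the limiting mechanism, whether a constructor parameter or the "fake pair" helper suggested above, does not exist yet):

```rust
use clvmr::serde::node_from_bytes_backrefs;
use clvmr::{Allocator, NodePtr};
use std::io;

// Placeholder for whatever mechanism Allocator ends up exposing to cap the
// number of pairs (e.g. the "fake pair" helper suggested above).
fn allocator_with_pair_limit(_pair_limit: usize) -> Allocator {
    unimplemented!("not part of the current Allocator API")
}

// Placeholder for the preserved old implementation (node_from_bytes_backrefs_old
// in the fuzz target above); where it actually lives is up to the PR.
fn node_from_bytes_backrefs_old(allocator: &mut Allocator, bytes: &[u8]) -> io::Result<NodePtr> {
    node_from_bytes_backrefs(allocator, bytes)
}

// `pairs_used_by_old` would be allocator_old.pair_count() from a successful
// deserialization with the old implementation (requires the counters feature).
fn check_pair_limit_parity(bytes: &[u8], pairs_used_by_old: usize) {
    let mut a_old = allocator_with_pair_limit(pairs_used_by_old - 1);
    let mut a_new = allocator_with_pair_limit(pairs_used_by_old - 1);
    // with one pair fewer than the old implementation needed, both the old and
    // the new deserializer must run out of pairs, or consensus could diverge
    assert!(node_from_bytes_backrefs_old(&mut a_old, bytes).is_err());
    assert!(node_from_bytes_backrefs(&mut a_new, bytes).is_err());
}
```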
I think it would be great to also ensure that the deserialization yields the same tree. You can use node_eq (see other fuzzers). This precludes using a single Allocator though, as I suggested earlier.
src/serde/de_br.rs
Outdated
if parsing_sexp {
    match allocator.sexp(sexp_to_parse) {
        SExp::Atom => {
            return Err(EvalErr(sexp_to_parse, "path into atom".into()).into());
we could make this error message better now I think. We know it's a backref and that the serialization is invalid.
Changed error message to: "invalid backreference during deserialisation"
// if we move from parsing the Vec stack to parsing the SExp stack use the following variables
let mut sexp_to_parse = NodePtr::NIL;

while byte_idx > first_bit_byte_index || bitmask < last_bitmask {
did you explore making this two loops, one for the Vec stack and one for traversing the SExp tree? I think it would be easier to follow, partly because you wouldn't need two loop bodies separated by the parsing_sexp check. It would also place the tail of this function in between those loops. If we exit after having pointed to a stack node…
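For illustration, the rough shape of the two-loop structure being suggested (simplified; bit extraction, cost, and the stack-materialization tail are left out, and names are not the PR's):

```rust
use clvmr::{Allocator, NodePtr, SExp};
use std::io;

// Loop 1 consumes path bits while we are still on the Vec parse stack: a "rest"
// bit steps down the stack, a "first" bit selects the node on top of what
// remains and hands over to loop 2, which walks that SExp tree.
fn traverse_stack_then_sexp(
    allocator: &Allocator,
    stack: &[NodePtr], // the head of the conceptual list is the last element
    bits: &[bool],     // path bits in traversal order; true = rest, false = first
) -> io::Result<NodePtr> {
    let invalid = || {
        io::Error::new(
            io::ErrorKind::InvalidData,
            "invalid backreference during deserialisation",
        )
    };
    let mut idx = stack.len();
    let mut bits = bits.iter().copied();

    // loop 1: still pointing into the Vec stack
    let mut node = loop {
        match bits.next() {
            // simplification: if the path ends here, the real function has to
            // materialize the remaining stack as pairs (see the caching idea above)
            None => return Err(invalid()),
            Some(true) => idx = idx.checked_sub(1).ok_or_else(invalid)?,
            Some(false) => {
                let top = idx.checked_sub(1).ok_or_else(invalid)?;
                break *stack.get(top).ok_or_else(invalid)?;
            }
        }
    };

    // loop 2: walking a real SExp tree in the allocator
    for bit in bits {
        match allocator.sexp(node) {
            SExp::Atom => return Err(invalid()),
            SExp::Pair(first, rest) => node = if bit { rest } else { first },
        }
    }
    Ok(node)
}
```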
Co-authored-by: Arvid Norberg <[email protected]>
Co-authored-by: Arvid Norberg <[email protected]>
Use a Vec<NodePtr> stack instead of NodePtr/SExps in node_from_stream_backrefs, and add a new traverse_path_with_vec() function to handle backrefs.