
Node from stream backrefs optimisation #532

Open · wants to merge 61 commits into base: main
Conversation

matt-o-how
Copy link

Use a Vec<NodePtr> stack instead of NodePtr / SExps in node_from_stream_backrefs and add a new traverse_path_with_vec() function to handle backrefs
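As a rough illustration of the representation change (a sketch only: `u32` stands in for `NodePtr`, an owned enum stands in for allocator-backed pairs, and `ConsStack`/`to_vec` are invented names, not clvmr API):

```rust
// Stand-in for NodePtr; the real deserializer works with clvmr's Allocator.
type Node = u32;

// Old representation: the parse stack was itself a cons-list, so every push
// allocated a pair in the Allocator.
enum ConsStack {
    Nil,
    Pair(Node, Box<ConsStack>),
}

// New representation: a plain Vec<Node>. The cons-list (3 . (2 . (1 . NIL)))
// becomes [1, 2, 3], with the head of the list on top of the stack.
fn to_vec(stack: &ConsStack) -> Vec<Node> {
    let mut out = Vec::new();
    let mut cur = stack;
    while let ConsStack::Pair(v, rest) = cur {
        out.push(*v);
        cur = rest;
    }
    out.reverse(); // head of the cons-list ends up last, i.e. on top
    out
}
```

The point of the change is that pushes and pops on the `Vec` cost nothing in the CLVM allocator.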

@matt-o-how matt-o-how requested a review from arvidn January 13, 2025 16:18

coveralls-official bot commented Jan 13, 2025

Pull Request Test Coverage Report for Build 13076959208

Details

  • 147 of 147 (100.0%) changed or added relevant lines in 2 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage increased (+0.2%) to 94.061%

Totals Coverage Status
Change from base Build 12933937517: 0.2%
Covered Lines: 6208
Relevant Lines: 6600

💛 - Coveralls

Contributor

@arvidn arvidn left a comment


it looks correct, as far as I can tell. I think we need tests for all the interesting cases, to make sure it works. I'm also interested in seeing a benchmark: does this make a difference? I would expect it to at least use less memory, which typically means faster on small machines (like a Raspberry Pi)

@@ -72,6 +72,83 @@ pub fn traverse_path(allocator: &Allocator, node_index: &[u8], args: NodePtr) ->
Ok(Reduction(cost, arg_list))
}

pub fn traverse_path_with_vec(
Contributor


it would be good to have unit tests for this function

@@ -22,7 +22,7 @@ pub fn node_from_stream_backrefs(
f: &mut Cursor<&[u8]>,
mut backref_callback: impl FnMut(NodePtr),
) -> io::Result<NodePtr> {
- let mut values = allocator.nil();
+ let mut values = Vec::<NodePtr>::new();
Contributor


one idea I had was that you could make this Vec<(NodePtr, Option<NodePtr>)>, where the optional NodePtr is a cache of nodes you've created for this stack "link", in case there are multiple references to the same one.
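A sketch of that caching idea (all names invented for illustration; `usize` stands in for `NodePtr` and a counter stands in for the allocator's pair count): the second element of each stack tuple caches the node built the first time a back-reference needed this slot, so repeated references reuse it instead of allocating new pairs.

```rust
// Stand-in for the Allocator: only counts how many "pairs" were created.
struct FakeAllocator {
    pairs_created: usize,
}

impl FakeAllocator {
    // Pretend to build the cons-list node for one stack entry.
    fn materialize(&mut self) -> usize {
        self.pairs_created += 1;
        self.pairs_created
    }
}

// Each stack slot carries the value plus an optional cache of the node that
// was created the first time a back-reference consumed this slot.
fn resolve_backref(
    alloc: &mut FakeAllocator,
    slot: &mut (usize, Option<usize>),
) -> usize {
    match slot.1 {
        Some(cached) => cached, // reuse: no new pairs allocated
        None => {
            let node = alloc.materialize();
            slot.1 = Some(node);
            node
        }
    }
}
```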

Contributor


I think this is required to avoid consensus problems. Consider the worst-case scenario where you have a tree that's then referenced thousands of times. With today's deserializer, they will all point to the same tree, but in your version you'll create the same pairs over and over again. In fact, I would expect the fuzzer to fail if you let it run long enough because of this.

// find first non-zero byte
let first_bit_byte_index = first_non_zero(node_index);

let mut cost: Cost = TRAVERSE_BASE_COST
Contributor


I don't think this version needs to track cost. In fact, I think it's sufficiently different (and specialized) that it makes sense to move it into the de_br.rs file.

Author


Done

let mut bitmask = 0x01;

// if we move from parsing the Vec stack to parsing the SExp stack, use the following variables
let mut parsing_sexp = false;
Contributor


it might be simpler to have a separate loop in the beginning that just reads 1-bits (we're still on the Vec-stack), until it hits a 0-bit (we select a stack item), and then moves into the next loop that only considers nodes in the allocator.

Author


Given the change I had to make for parsing in the case of the stack being empty, I don't believe we should make this change

) -> Response {
// the vec is a stack so a ChiaLisp list of (3 . (2 . (1 . NIL))) would be [1, 2, 3]
// however entries in this vec may be ChiaLisp SExps so it may look more like [1, (2 . NIL), 3]
let mut arg_list: Vec<NodePtr> = args.to_owned();
Contributor


it would be more efficient to just keep an index into args, rather than cloning it
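A sketch of the suggestion (with `usize` standing in for `NodePtr`; `select_from_stack` is an invented helper, not the PR's code): rather than cloning `args` into a mutable stack, keep the slice borrowed and track how deep the consumed 1-bits have taken you with an index.

```rust
// The top of the stack is the last element; each leading 1-bit in the path
// moves one entry deeper without copying the slice. Returns None if the
// path walks off the bottom of the stack.
fn select_from_stack(args: &[usize], ones: usize) -> Option<usize> {
    let idx = args.len().checked_sub(1 + ones)?;
    Some(args[idx])
}
```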

Author


Done

src/traverse_path.rs (outdated comment, resolved)
Contributor

@arvidn arvidn left a comment


I think these things are still needed:

  • preserve the existing function, partly to control when we switch over to the new one, and also to be able to test that both behave the same
  • ensure the new function produces the same result as the old one, e.g. with a fuzzer
  • ensure the new function behaves the same with regard to the limits on the number of pairs created by the Allocator. This can be tested in a fuzzer by building with the counters build feature
  • benchmark to demonstrate that this is an improvement (this should probably be done early, as we might want to scrap this idea if it doesn't carry its weight)
  • survey the mainnet and testnet blockchains to see whether back references into the parse stack ever occur in the wild
  • unit tests for all edge cases

@matt-o-how matt-o-how force-pushed the node_from_stream_backrefs_optimisation branch from 166b35f to cb47c16 Compare January 17, 2025 10:09
@matt-o-how matt-o-how force-pushed the node_from_stream_backrefs_optimisation branch from cb47c16 to 17f7c09 Compare January 27, 2025 16:53
@@ -83,6 +129,86 @@ pub fn node_from_bytes_backrefs_record(
Ok((ret, backrefs))
}

pub fn traverse_path_with_vec(
Contributor


this is a pretty big function for not having a unit test.

Author


Added a unit test

@@ -212,7 +212,7 @@ jobs:
cd tools
cargo run --bin generate-fuzz-corpus
- name: build
- run: cargo fuzz list | xargs -I "%" sh -c "cargo fuzz run % -- -max_total_time=30 || exit 255"
+ run: cargo fuzz list | xargs -I "%" sh -c "cargo fuzz run % -- -max_total_time=30 || exit 255 --all-features"
Contributor


If you make the fuzzer depend on clvmr/counters I don't think you would need this
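If the fuzz crate did depend on the counters feature directly, its manifest might gain something like the following (a hypothetical fragment; the actual dependency path and feature wiring in this repo may differ):

```toml
# fuzz/Cargo.toml (sketch): pull in clvmr with the counters feature enabled,
# so fuzz targets get it without passing --all-features on the command line
[dependencies.clvmr]
path = ".."
features = ["counters"]
```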

fuzz/Cargo.toml (outdated comment, resolved)
Ok(r) => r,
};

let b1 = node_to_bytes_backrefs(&allocator, program).unwrap();
Contributor


I don't think you need this step. Also, you may lose interesting cases by "filtering" the tree when you re-encode it; it won't necessarily round-trip.
Is there a good reason to do this rather than using data directly?

let mut allocator = Allocator::new();
let mut allocator_old = Allocator::new();

let mut allocator = Allocator::new();
Contributor


this line looks like a mistake

Author


fixed

let mut allocator = Allocator::new();
let program = node_from_bytes_backrefs(&mut allocator, &b1).unwrap();
let program_old = node_from_bytes_backrefs_old(&mut allocator_old, &b1).unwrap();
assert!(allocator.pair_count() <= allocator_old.pair_count());
Contributor


Another important test (that I believe would fail right now) is to deserialize the bytes again, using both old and new functions and an allocator that has a pair limit of allocator_old.pair_count() - 1. That is, the old function should run out of memory, and so should the new one; otherwise consensus rules will have changed, and we have the potential for a chain split.

I believe the simplest way to implement this is to have a function on Allocator that creates a "fake" pair. It just needs to decrement the limit. It also means checkpoint() and restore_checkpoint() would need to track and update this limit.
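A sketch of that "fake pair" idea (all names here are invented for illustration; clvmr's real Allocator tracks its limits differently): the ghost pair consumes one unit of the pair budget without storing a node, so the Vec-based deserializer hits the limit at the same point the cons-list one would.

```rust
// Toy allocator with a hard limit on the number of pairs.
struct LimitedAllocator {
    pair_limit: usize,
    pairs: usize,
}

impl LimitedAllocator {
    // A real pair: stores a node (elided here) and consumes budget.
    fn new_pair(&mut self) -> Result<usize, String> {
        if self.pairs >= self.pair_limit {
            return Err("too many pairs".to_string());
        }
        self.pairs += 1;
        Ok(self.pairs)
    }

    // A "fake" pair: consumes budget without creating a node, mirroring the
    // pair the old deserializer would have allocated for its parse stack.
    fn add_ghost_pair(&mut self) -> Result<(), String> {
        if self.pairs >= self.pair_limit {
            return Err("too many pairs".to_string());
        }
        self.pairs += 1;
        Ok(())
    }
}
```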

Contributor


I think it would be great to also ensure that the deserialization yields the same tree. You can use node_eq (see other fuzzers). This precludes using a single Allocator, though, as I suggested earlier.
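The shape of that equality check, sketched (clvmr's real node_eq compares NodePtr nodes through an Allocator; this stand-in compares a simple owned tree, so the old and new deserializers can each use their own allocator and still be compared structurally):

```rust
// Owned stand-in for an allocator-backed CLVM tree.
#[derive(Debug)]
enum Tree {
    Atom(Vec<u8>),
    Pair(Box<Tree>, Box<Tree>),
}

// Structural equality: same atoms in the same pair structure.
fn tree_eq(a: &Tree, b: &Tree) -> bool {
    match (a, b) {
        (Tree::Atom(x), Tree::Atom(y)) => x == y,
        (Tree::Pair(al, ar), Tree::Pair(bl, br)) => tree_eq(al, bl) && tree_eq(ar, br),
        _ => false,
    }
}
```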


if parsing_sexp {
match allocator.sexp(sexp_to_parse) {
SExp::Atom => {
return Err(EvalErr(sexp_to_parse, "path into atom".into()).into());
Contributor


We could make this error message better now, I think: we know it's a backref and that the serialization is invalid.

Author


Changed error message to: "invalid backreference during deserialisation"

src/serde/de_br.rs (outdated comment, resolved)
// if we move from parsing the Vec stack to parsing the SExp stack, use the following variables
let mut sexp_to_parse = NodePtr::NIL;

while byte_idx > first_bit_byte_index || bitmask < last_bitmask {
Contributor


did you explore making this two loops, one for the Vec stack and one for traversing the SExp tree?
I think it would be easier to follow, partly because you wouldn't need two loop bodies separated by the parsing_sexp check. It would also place the tail of this function in between those loops, if we exit after having selected a stack node.
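A sketch of the two-phase shape being suggested (an owned `Node` enum stands in for allocator-backed SExps, the bit handling is simplified, and `traverse` is an invented helper): the first loop consumes leading 1-bits while still on the Vec stack, and the second walks into the selected node.

```rust
#[derive(Clone, Debug, PartialEq)]
enum Node {
    Atom(i32),
    Pair(Box<Node>, Box<Node>),
}

fn traverse(stack: &[Node], path_bits: &[u8]) -> Option<Node> {
    // phase one: leading 1-bits stay on the Vec stack, each one moving to
    // the next (older) entry; a 0-bit selects the current entry
    let mut idx = stack.len().checked_sub(1)?;
    let mut i = 0;
    while i < path_bits.len() && path_bits[i] == 1 {
        idx = idx.checked_sub(1)?;
        i += 1;
    }
    if i >= path_bits.len() {
        return None; // path ran out before selecting an entry
    }
    let mut node = stack[idx].clone();
    i += 1; // consume the 0-bit that selected the entry
    // phase two: the remaining bits walk inside the selected SExp
    // (0 = first/left, 1 = rest/right)
    while i < path_bits.len() {
        node = match node {
            Node::Pair(l, r) => if path_bits[i] == 0 { *l } else { *r },
            Node::Atom(_) => return None, // invalid: path into atom
        };
        i += 1;
    }
    Some(node)
}
```

With this split, the function's tail (selecting a stack item) naturally sits between the two loops instead of being guarded by a `parsing_sexp` flag.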

@matt-o-how matt-o-how force-pushed the node_from_stream_backrefs_optimisation branch from cf0be53 to 864ef16 Compare January 30, 2025 12:32