
Add a benchmark for report_json parsing #17

Merged · 1 commit merged into main on Aug 1, 2024
Conversation

Swatinem (Collaborator):

This currently yields the following numbers for me locally:

pyreport           fastest       │ slowest       │ median        │ mean          │ samples │ iters
├─ complex_report  452.4 ms      │ 489.2 ms      │ 465 ms        │ 466.9 ms      │ 10      │ 10
╰─ simple_report   17.27 µs      │ 224 µs        │ 18.12 µs      │ 19.79 µs      │ 25076   │ 25076
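
For readers new to divan, here is a minimal sketch of a benchmark that produces output in this shape. The file name, the JSON literal, and the `parse_report_json` entry point are hypothetical stand-ins, not the names this PR actually uses:

```rust
// benches/pyreport.rs: needs divan in [dev-dependencies] and a
// [[bench]] target with harness = false in Cargo.toml.

fn main() {
    // Registers and runs every #[divan::bench] function in this binary.
    divan::main();
}

// Hypothetical stand-in for the crate's actual parse entry point.
fn parse_report_json(input: &str) -> usize {
    input.len()
}

#[divan::bench]
fn simple_report() -> usize {
    // black_box keeps the optimizer from eliding the measured work;
    // the real benchmark would load a report fixture instead of a literal.
    parse_report_json(divan::black_box(r#"{"files": {}}"#))
}
```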

@Swatinem requested a review from matt-codecov · July 31, 2024 11:29
@Swatinem self-assigned this · July 31, 2024

codecov bot commented Jul 31, 2024

The author of this PR, Swatinem, is not an activated member of this organization on Codecov.
Please activate this user on Codecov to display this PR comment.
Coverage data is still being uploaded to Codecov.io for purposes of overall coverage calculations.
Please don't hesitate to email us at [email protected] with any questions.

@matt-codecov (Collaborator) left a comment:

this looks much better than my "just run it 3 times with /usr/bin/time -plh and eyeball it" approach lol

looks like divan has a custom allocator for profiling memory usage (https://docs.rs/divan/0.1.14/divan/struct.AllocProfiler.html), which is nice for us. those should maybe be separate benchmarks though, since i think instrumenting the allocator can distort the runtime numbers?
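
For reference, a minimal sketch of how AllocProfiler gets wired in, following the linked divan docs:

```rust
use divan::AllocProfiler;

// Swap the global allocator for divan's counting wrapper around the
// system allocator; benches then also report allocation counts and bytes.
#[global_allocator]
static ALLOC: AllocProfiler = AllocProfiler::system();

fn main() {
    divan::main();
}
```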

thank you for setting this up for us! this will be great

Cargo.toml (diff context):

```toml
pyreport = []

[profile.release]
debug = 1
```
A collaborator commented on the diff:

is it enough to have this in bench or do we want this in release too?

Swatinem (Collaborator, Author) replied:

release would sure be nice, that way any panic that might be caught on the PyO3 layer gets a proper stack trace as well.
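
For context, a sketch of the relevant Cargo.toml, relying on standard Cargo profile behavior rather than anything this PR changes: `cargo bench` builds with the bench profile, which inherits from release by default, so setting debug info on release covers the benchmarks too.

```toml
[profile.release]
# Keep line-table debug info in optimized builds so panics that surface
# through the PyO3 layer carry usable stack traces.
debug = 1

# [profile.bench] inherits from release by default, so benchmarks pick
# this up without an explicit override.
```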

Swatinem (Collaborator, Author) commented on Aug 1, 2024:

I can sure give divan's AllocProfiler a shot; thus far, when I tried it, its output was rather messy. But maybe that was because I was benchmarking code that had hardly any allocations at all.

@Swatinem force-pushed the swatinem/bench-parsing branch from 04024f7 to 7afa7eb on August 1, 2024 07:17
@Swatinem merged commit be597b1 into main on Aug 1, 2024
7 checks passed
@Swatinem deleted the swatinem/bench-parsing branch on August 1, 2024 07:45