More thorough benchmarks [$200] #131

Open
Krastanov opened this issue Jul 9, 2024 · 5 comments
Labels
bounty:200 · bug bounty (There is an award for solving this issue.) · good first issue (Good for newcomers)

Comments

Krastanov (Member) commented Jul 9, 2024

Bug bounty logistics details

To claim exclusive time to work on this bounty, either post a comment here or message [email protected] with:

  • your name
  • your GitHub username
  • (optional) a brief list of previous pertinent projects you have worked on

You can work on this project without making a claim; however, claims are encouraged to give you and other contributors peace of mind. Whoever has made a claim takes precedence when solutions are considered.

You can always propose your own funded project if you would like to contribute something of value that is not yet covered by an official bounty.

The project is claimed by @coderight1 until Aug 12th 2024.

Project: "More thorough benchmarks" [$200]

We already have a small benchmark suite, which is executed as part of our CI runs; it is defined in the benchmark folder and reported for each pull request. We would like to expand this suite to cover many more facets of this library, e.g. basic register operations using a variety of backends; queries, tags, and locks on registers and channels; time to import; and time to run examples. The new benchmarks should be legible, easy to follow, and organized by topic. Most of them should be microbenchmarks testing only one concept, but a few holistic benchmarks would make sense as well.
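
For concreteness, here is a minimal sketch of one possible topic-organized layout, assuming the suite keeps using BenchmarkTools.jl (whose evals/samples/@benchmarkable terminology already appears in the discussion below). The SUITE constant, group names, and helper functions are placeholders, not the library's actual API.

    # Sketch of a file in the benchmark folder: one BenchmarkGroup per topic.
    using BenchmarkTools

    # Hypothetical stand-ins for the real setup routine and register operation.
    make_small_register() = zeros(4)
    some_register_operation!(reg) = (reg .+= 1)

    const SUITE = BenchmarkGroup()

    SUITE["imports"]      = BenchmarkGroup(["meta"])
    SUITE["registers"]    = BenchmarkGroup(["micro"])
    SUITE["tags_queries"] = BenchmarkGroup(["micro"])
    SUITE["locks_events"] = BenchmarkGroup(["simulator"])
    SUITE["examples"]     = BenchmarkGroup(["holistic"])

    # Each leaf is a single-concept microbenchmark.
    SUITE["registers"]["small_register_op"] =
        @benchmarkable some_register_operation!(reg) setup=(reg = make_small_register())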

Required skills: Generic Julia skills.

Reviewer: Stefan Krastanov

Duration: 1 month

Payout procedure:

The funding for these bounties comes from the National Science Foundation and from the NSF Center for Quantum Networks. The payouts are managed by the NumFOCUS foundation and processed in bulk once every two months. If you live in a country in which NumFOCUS can make payments, you can participate in this bounty program.

Click here for more details about the bug bounty program.

Krastanov added the good first issue, bug bounty, and bounty:200 labels on Jul 9, 2024
(Three comments by coderight1 and Krastanov were marked as outdated.)

Krastanov (Member, Author) commented:

Sounds good! A few suggestions:

  • Check out the benchmark setup of QuantumClifford.jl for reference.
  • Be careful with the evals and samples keywords: in some situations, especially if you are modifying/deleting things, you have to use evals=1, otherwise you get very misleading results (see the first sketch after this list).
  • Make sure the code is relatively easy to execute interactively, benchmark by benchmark (i.e. factor out potential setup/preparation routines neatly, so a dev who wants to experiment knows what they need to load in their REPL).
  • Have benchmarks both for large numbers of entities and for small numbers of entities (e.g. both large and small networks, large registers and registers with only a few slots, queries on a large number of prerecorded tags and on just a few tags).
  • Be careful when setting things up that might require the discrete event simulator. Many functions just "set up an event", and the event does not run until you execute the simulator, particularly around locks. I would suggest grouping this separately from the tag/query topic.
  • Import time might be a bit awkward, because the benchmark will basically have to launch a separate Julia instance. That benchmark should probably also have a limited number of evals and samples. Make sure that you actually wait on the newly started process, otherwise you might just launch it in the background and get a report that it is incredibly fast (see the second sketch after this list).
  • Timing the duration of examples might also be difficult (this is actually why this project is $200 instead of $50). There is no need to test all of them; the ones that contain plots are probably a bad idea in particular. You can see in test_examples how we run them for tests; that would probably be a good reference (and it will make it more obvious which ones involve plotting and should be skipped). These should probably have just one eval and one sample, given how incredibly slow they can be. It is important to run them in a separate module (e.g. the way they are done in the tests). I suspect you will need something like eval(Module(), :(include("example file"))) to run this benchmark (also covered in the second sketch after this list).
  • Make sure that you also run each benchmark you define interactively (by changing @benchmarkable to @benchmark), just to see what the dev experience would be for someone running a single benchmark manually to explore why a future PR introduced a slowdown.
  • Avoid using loops to generate benchmark instances (e.g. if you are just changing a single parameter). That leads to more repetitive code and duplication, but it will help when someone wants to play with a specific benchmark in the future.
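
Roughly what the evals=1 and factored-out-setup points could look like in practice; this is just a sketch, with make_test_register and consume_slot! as hypothetical stand-ins for the real setup routine and the real mutating operation, and SUITE standing in for the suite's top-level group:

    using BenchmarkTools

    # Plain setup helpers, factored out so a dev can load just these in the REPL.
    # Both names are placeholders for whatever the real suite provides.
    make_test_register(n) = collect(1:n)   # stand-in: build and pre-populate a register
    consume_slot!(reg)    = popfirst!(reg) # stand-in: an operation that deletes state

    SUITE = BenchmarkGroup()
    SUITE["registers"] = BenchmarkGroup()

    # The operation destroys part of its input. BenchmarkTools reruns `setup` once
    # per sample, not once per evaluation, so with evals > 1 the later evaluations
    # in each sample would act on an already-consumed register and the timings
    # would be misleading. evals=1 guarantees a fresh register for every evaluation.
    SUITE["registers"]["consume_slot"] =
        @benchmarkable consume_slot!(reg) setup=(reg = make_test_register(4)) evals=1

    # Interactive check of the same benchmark (swap @benchmarkable for @benchmark):
    #   @benchmark consume_slot!(reg) setup=(reg = make_test_register(4)) evals=1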
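
And a sketch of the import-time and example-duration points; the package name (QuantumSavory), the project path handling, and the example file path are assumptions, and Base.include(Module(), ...) is used here as one way to get the fresh-module behavior of the eval(Module(), :(include(...))) idea above:

    using BenchmarkTools

    SUITE = BenchmarkGroup()

    # Import time: spawn a fresh julia process; `run` blocks until the process
    # exits, so the timing covers the full import rather than just the launch.
    # Keep samples and evals low because every evaluation starts a new process.
    SUITE["import_time"] =
        @benchmarkable run(`$(Base.julia_cmd()) --project=$(dirname(Base.active_project())) -e "using QuantumSavory"`) samples=3 evals=1

    # Example duration: include the example file inside a fresh anonymous module,
    # so repeated runs do not pollute each other or Main. One sample and one eval,
    # since examples can be extremely slow; the path below is a placeholder.
    SUITE["examples_some_example"] =
        @benchmarkable Base.include(Module(), "examples/some_example.jl") samples=1 evals=1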

I will make this reserved for 1 month, but we can extend if necessary. Thank you for taking this on.

(A later comment by Krastanov was also marked as outdated.)
