tests: parametrize bench mark tests #4974

Open

wants to merge 1 commit into base: main
Conversation

cm-iwata
Contributor

@cm-iwata cm-iwata commented Dec 30, 2024

In the previous implementation, the timeout value had to be adjusted every time a benchmark test was added. By parametrizing the benchmark tests, the time required for each test becomes predictable, eliminating the need to adjust the timeout value.

Changes

Parametrize the test by the list of criterion benchmarks.
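
As an illustration of the approach (the benchmark list and names below are hypothetical, not the PR's actual identifiers), parametrizing over the criterion benchmark names could look like this:

```python
# Hypothetical list of criterion benchmark names; the real tests derive
# this from the crate's benchmark targets.
BENCHMARKS = ["block", "net", "queue"]

def benchmark_command(bench_name):
    """Cargo invocation that runs exactly one criterion benchmark."""
    return ["cargo", "bench", "--bench", bench_name]

# With pytest, each name becomes its own test case:
#
#   @pytest.mark.parametrize("bench_name", BENCHMARKS)
#   def test_benchmark(bench_name):
#       subprocess.run(benchmark_command(bench_name), check=True)
#
# so each case's runtime is bounded by a single benchmark, and one fixed
# per-test timeout suffices no matter how many benchmarks are added.
```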

By parametrizing the tests, git clone will be executed for each parameter here:

with TemporaryDirectory() as tmp_dir:
    dir_a = git_clone(Path(tmp_dir) / a_revision, a_revision)
    result_a = test_runner(dir_a, True)
    if b_revision:
        dir_b = git_clone(Path(tmp_dir) / b_revision, b_revision)
    else:
        # By default, pytest execution happens inside the `tests` subdirectory.
        # Pass the repository root, as documented.
        dir_b = Path.cwd().parent
    result_b = test_runner(dir_b, False)
    comparison = comparator(result_a, result_b)
    return result_a, result_b, comparison
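
For context, the comparator at the end of the snippet could be a function of roughly this shape (the result format and the 5% threshold are assumptions for illustration, not the PR's actual code):

```python
def find_regressions(baseline, candidate, threshold=0.05):
    """Flag benchmarks whose mean time grew by more than `threshold`.

    `baseline` and `candidate` map benchmark name -> mean time in
    nanoseconds (an assumed shape; criterion's real output is richer).
    """
    regressions = {}
    for name, base_time in baseline.items():
        new_time = candidate.get(name)
        if new_time is None or base_time == 0:
            # Benchmark missing on one side, or degenerate timing.
            continue
        delta = (new_time - base_time) / base_time
        if delta > threshold:
            regressions[name] = delta
    return regressions
```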

Running all parametrized tests with a single git clone would require major revisions to git_ab_test, so this PR does not address that issue.
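
One possible direction for that future work, sketched here only as an idea (none of these names exist in the codebase), is to memoize the clone per revision so every parametrized case shares a single working tree:

```python
import functools

def cache_per_revision(clone_fn):
    """Wrap a clone function so each revision is cloned at most once.

    `clone_fn(revision)` would perform the expensive git clone and
    return the checkout path; with the cache, all parametrized cases
    for the same revision reuse one working tree.
    """
    return functools.lru_cache(maxsize=None)(clone_fn)
```

In git_ab_test this would also mean replacing the per-call TemporaryDirectory with a session-lived directory that is cleaned up once at the end of the run.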

Reason

close #4832

License Acceptance

By submitting this pull request, I confirm that my contribution is made under
the terms of the Apache 2.0 license. For more information on following Developer
Certificate of Origin and signing off your commits, please check
CONTRIBUTING.md.

PR Checklist

  • I have read and understand CONTRIBUTING.md.
  • I have run tools/devtool checkstyle to verify that the PR passes the
    automated style checks.
  • I have described what is done in these changes, why they are needed, and
    how they are solving the problem in a clear and encompassing way.
  • I have updated any relevant documentation (both in code and in the docs)
    in the PR.
  • I have mentioned all user-facing changes in CHANGELOG.md.
  • If a specific issue led to this PR, this PR closes the issue.
  • When making API changes, I have followed the
    Runbook for Firecracker API changes.
  • I have tested all new and changed functionalities in unit tests and/or
    integration tests.
  • I have linked an issue to every new TODO.

  • This functionality cannot be added in rust-vmm.

In the previous implementation, the timeout value had to be adjusted
every time a benchmark test was added.
By parametrizing the benchmark tests, the time required for each test
becomes predictable, eliminating the need to adjust the timeout value.

Signed-off-by: Tomoya Iwata <[email protected]>
Successfully merging this pull request may close these issues.

Parametrize test_benchmarks.py test by criterion benchmarks