
Add testability features to re_log #6450

Open
abey79 opened this issue May 28, 2024 · 2 comments
Labels
🔨 testing testing and benchmarks

Comments

@abey79
Member

abey79 commented May 28, 2024

At a minimum, there should be a `re_log::setup_test_logging()` function to be used by tests, such that any logged warning or error triggers a panic (warnings and errors should be considered test failures).
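A minimal sketch of what such a helper could look like, assuming `re_log` forwards to the standard `log` crate (`PanicOnWarnLogger` is a hypothetical name here, not an existing `re_log` type):

```rust
use log::{Level, LevelFilter, Log, Metadata, Record};

/// Hypothetical logger that turns any warning or error into a panic.
struct PanicOnWarnLogger;

impl Log for PanicOnWarnLogger {
    fn enabled(&self, metadata: &Metadata<'_>) -> bool {
        // In the `log` crate, `Error < Warn`, so this matches warnings and errors.
        metadata.level() <= Level::Warn
    }

    fn log(&self, record: &Record<'_>) {
        if record.level() <= Level::Warn {
            panic!("test logged a warning/error: {}", record.args());
        }
    }

    fn flush(&self) {}
}

/// Sketch of `setup_test_logging()`: call this at the start of a test.
pub fn setup_test_logging() {
    static LOGGER: PanicOnWarnLogger = PanicOnWarnLogger;
    // Ignore the error if another test in the same binary already installed it.
    let _ = log::set_logger(&LOGGER);
    log::set_max_level(LevelFilter::Warn);
}
```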

Ideally, there should be additional features to, e.g., assert that a warning/error with some specific content is logged under some circumstances:

// panics if:
// - the test code logs a warning that does NOT contain "a bad thing has happened"
// - the test code does NOT log a warning that contains "a bad thing has happened"
re_log::assert_warning("a bad thing has happened", || {
    /* test code */
});
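One way such an `assert_warning` helper could be implemented, again as a sketch on top of the standard `log` crate; the `CaptureLogger`/`CAPTURED` machinery is hypothetical, not an existing `re_log` API:

```rust
use std::sync::Mutex;

use log::{Level, LevelFilter, Log, Metadata, Record};

/// Hypothetical global buffer collecting warning/error messages.
static CAPTURED: Mutex<Vec<String>> = Mutex::new(Vec::new());

struct CaptureLogger;

impl Log for CaptureLogger {
    fn enabled(&self, _: &Metadata<'_>) -> bool {
        true
    }

    fn log(&self, record: &Record<'_>) {
        if record.level() <= Level::Warn {
            CAPTURED.lock().unwrap().push(record.args().to_string());
        }
    }

    fn flush(&self) {}
}

/// Sketch of `assert_warning`: runs `f` and checks the captured warnings.
pub fn assert_warning(expected: &str, f: impl FnOnce()) {
    static LOGGER: CaptureLogger = CaptureLogger;
    let _ = log::set_logger(&LOGGER); // ok if already installed by another test
    log::set_max_level(LevelFilter::Warn);
    CAPTURED.lock().unwrap().clear();

    f();

    let warnings = CAPTURED.lock().unwrap();
    // Panic if no warning contains the expected text...
    assert!(
        warnings.iter().any(|w| w.contains(expected)),
        "expected a warning containing {expected:?}, got {warnings:?}"
    );
    // ...and if any warning does NOT contain it.
    assert!(
        warnings.iter().all(|w| w.contains(expected)),
        "unexpected warning(s) logged: {warnings:?}"
    );
}
```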
@abey79 abey79 added the 🔨 testing testing and benchmarks label May 28, 2024
abey79 added a commit that referenced this issue May 28, 2024
### What

- Make the `render_context` field of `ViewerContext` an `Option` to make it easier to create a `ViewerContext` in a test environment.
- Introduce a (very basic) test helper that creates a `ViewerContext` for use in unit tests.
- Add a demo unit test that attempts to run `SelectionPanel::show_panel()`.

Chained to #6431

There are many improvements that could be added:
- similar support for `ViewportBlueprint`
- make `re_log::warn/err` assert: #6450
- add support for easily populating the stores with data
- benchmarking support?
- etc. etc.

### Checklist
* [x] I have read and agree to [Contributor Guide](https://github.com/rerun-io/rerun/blob/main/CONTRIBUTING.md) and the [Code of Conduct](https://github.com/rerun-io/rerun/blob/main/CODE_OF_CONDUCT.md)
* [x] I've included a screenshot or gif (if applicable)
* [x] I have tested the web demo (if applicable):
  * Using examples from latest `main` build: [rerun.io/viewer](https://rerun.io/viewer/pr/6432?manifest_url=https://app.rerun.io/version/main/examples_manifest.json)
  * Using full set of examples from `nightly` build: [rerun.io/viewer](https://rerun.io/viewer/pr/6432?manifest_url=https://app.rerun.io/version/nightly/examples_manifest.json)
* [x] The PR title and labels are set such as to maximize their usefulness for the next release's CHANGELOG
* [x] If applicable, add a new check to the [release checklist](https://github.com/rerun-io/rerun/blob/main/tests/python/release_checklist)!

- [PR Build Summary](https://build.rerun.io/pr/6432)
- [Recent benchmark results](https://build.rerun.io/graphs/crates.html)
- [Wasm size tracking](https://build.rerun.io/graphs/sizes.html)

To run all checks from `main`, comment on the PR with `@rerun-bot full-check`.
@emilk
Member

emilk commented Jun 7, 2024

You can set the `RERUN_PANIC_ON_WARN` env-var for this.
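A hedged usage example; the initialization call shown in the comment is hypothetical, and the variable presumably needs to be set before `re_log` reads it:

```rust
/// With the env-var set, any `re_log` warning or error should panic
/// and thus fail the test.
#[test]
fn warnings_are_test_failures() {
    std::env::set_var("RERUN_PANIC_ON_WARN", "1");
    // re_log::setup_logging(); // hypothetical init call; exact API may differ
    // ... exercise the code under test ...
}
```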

@abey79
Member Author

abey79 commented Jun 7, 2024

TIL. Nice!
