# Convert tests to pairs of Rust projects with their own manifests #144
Currently, test data is generated from a single project, which has a (complicated) way to trigger semver violations by enabling or disabling its features.

In addition to being complicated and non-obvious, this also prevents us from semver-checking manifest information (#48), since there is currently only a single manifest.

It would be an improvement to switch to a test structure where each query is associated with a pair of Rust projects, complete with `Cargo.toml` files and their own source files. Each of these two projects should have a crate name matching the query name, and the same version (e.g. `1.0.0`).

To form a test case, one could create the "old" project, then `cp -R old new` and delete or alter code as necessary to cause the desired changes for the test. In this way, the `new` project is pretending to be the "next version" of the `old` project.

Running the tests would involve generating all the rustdoc JSON files as normal, and then asserting that `cargo-semver-checks` finds only the expected semver issues, with no false positives, in each pair of projects.

A crate like `insta` might be helpful here as well, since this is essentially a collection of snapshot tests.
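To make the proposal concrete, one possible on-disk layout could look like this (the directory and crate names are illustrative; the issue text leaves the exact layout open):

```
test_crates/                  # hypothetical top-level name
  enum_missing/               # one directory per query
    old/
      Cargo.toml              # name = "enum_missing", version = "1.0.0"
      src/lib.rs
    new/
      Cargo.toml              # same name and version as `old`
      src/lib.rs              # code deleted/altered to trigger the lint
```

Since the expected result for each pair of projects is fixed text, `insta`-style snapshot testing maps onto this naturally. A minimal sketch, assuming a hypothetical `run_semver_checks` helper (the real harness's API may differ):

```rust
use std::path::Path;

// Hypothetical helper: generate rustdoc JSON for both crates, run every
// query, and render the findings as one stable, sorted string.
fn run_semver_checks(old: &Path, new: &Path) -> String {
    unimplemented!()
}

#[test]
fn semver_snapshots() {
    for entry in std::fs::read_dir("test_crates").unwrap() {
        let pair = entry.unwrap().path();
        let report = run_semver_checks(&pair.join("old"), &pair.join("new"));
        // One snapshot per old/new pair, named after its directory.
        let name = pair.file_name().unwrap().to_string_lossy().into_owned();
        insta::assert_snapshot!(name, report);
    }
}
```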
---

I'm thinking a bit about how to redesign the existing testing environment structure. Currently the lints' queries live in one directory while the test crates and the expected outputs live elsewhere, and I find that unintuitive (I especially thought so at the beginning, when I tried to add my first lint). The test projects implement a layer of abstraction thanks to which a lint is just a query file plus its expected results. The queries and the Rust crates are not a one-to-one relation: one crate is sometimes used by multiple queries, and after adding manifest tests some queries will probably use multiple crates. Nevertheless, they can be grouped together. I propose grouping each query with its test data (renaming the directories along the way), along the lines sketched below.

The code for running the tests would be rewritten to pick up the directories and queries from the described format. It would also be beneficial to have a README documenting the layout. I see one potential drawback: one could think that a query is run only on the corresponding crates from the same directory, but that could be overcome by describing the actual behavior in the README. What is your opinion @obi1kenobi?
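For illustration, grouping each query with its test data could be shaped roughly like this (all names hypothetical; a sketch of the idea, not a settled layout):

```
lints/
  enum_missing/
    enum_missing.ron        # the lint's query
    expected_output.ron     # what the query should report
    crates/                 # Rust crate(s) exercising this lint
      ...
```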
---

I agree with the proposed renaming. I have a few concerns:

I agree that lint files don't have any real dependency on the Rust code in the test crates.

What happens if a lint flags code in more than one test project? How would we represent that? Also, consider the workflow of adding new test projects (e.g. after adding a new lint). If the test output data sits adjacent to the lint file itself, and the new test project also triggers some of the existing lints (as well as the newly-added one), then that PR will need to modify the test output files in the existing lint directories. This makes it look like the PR is adding and modifying multiple lints at once, when it's actually adding only one lint plus some new test data (with new test outputs).

I think this is a major issue. In practice, people rarely read the README, especially when something seems intuitive and "clicks" in their mind. If I saw a query file and test case data in the same directory (and away from other similar query files and test data), it would absolutely "click" and I would 100% assume that's the only crate that is tested with that query. Because of this, I feel strongly about not having the test data right adjacent to the lints. There are a few other things to consider here as well.
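One conceivable way to keep a lint that fires in several test crates representable is to key the expected output by crate, one entry per (lint, crate) pair. A minimal Rust sketch of that shape (hypothetical types, not the project's actual data model):

```rust
use std::collections::BTreeMap;

type LintName = String;
type CrateName = String;
type Findings = Vec<String>; // e.g. rendered spans or issue descriptions

// Expected outputs stored per lint, then per test crate, so one lint
// flagging code in multiple projects has a natural home.
type ExpectedOutputs = BTreeMap<LintName, BTreeMap<CrateName, Findings>>;
```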
---

Well, to solve this issue I need to change the testing environment a bit anyway, and it's easier/faster to implement the other structural changes at the same time. So it's a good idea to think about this now, while I have the full context and the will to make the changes; I'm not sure there will be a better time.

But then, it is possible to move such a file out to a shared location when more than one lint needs it.

What about renaming the `queries` directory to `lints`?

> What happens if a lint flags code in more than one test project?

Good point. But I would expect the lints to be disjoint in which projects they flag. Aren't they?

> This makes it look like the PR is adding and modifying multiple lints at once [...]

I haven't fully understood your point. The lints themselves won't be touched (besides the one added), but the outputs will be. Isn't that the same as with the current setup?

> If I saw a query file and test case data in the same directory [...], I would 100% assume that's the only crate that is tested with that query.

Yeah, good point. It probably isn't a good idea after all.

Makes sense. It would basically form a proof that this lint is needed. I could try implementing that. Should I?
---

I guess the best time to test the witnesses would be in GitHub Actions? It doesn't seem important enough to test during `cargo test`.
---

My gut feeling is that the directory with the lints should only contain lint files and nothing else (no test data etc.). Those lints should start off all side-by-side in the same directory, until we have a reason to add more nesting. This is also the direction I'd like to go in, so I'm fine with renaming `queries` to `lints`.

> I would expect the lints to be disjoint in which projects they flag. Aren't they?

Almost all of the time, they are. But in rare cases it can be quite difficult to write the test case in such a way that it only triggers a single lint. In general, we want adding lints to be as easy as possible for anyone, even new contributors. Say we made this assumption and threw in an assertion enforcing it; a new contributor whose test case happened to trip a second lint would then be stuck.

I think we should definitely test the witnesses somewhere. I'm not sure how long testing the witnesses would take; it's possible it would take a relatively long time, similar to the rustdoc JSON generation. If so, then we should have a script to test them outside of `cargo test`.
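A standalone script is one option; another common Rust pattern for keeping a slow check out of the default `cargo test` run is an `#[ignore]`d test that CI opts into explicitly. A sketch, assuming the witness harness lives in the test suite:

```rust
#[test]
#[ignore = "slow: builds witness programs; run explicitly in CI"]
fn witnesses_trigger_their_lints() {
    // Hypothetical body: for each lint, compile its witness program
    // against the `old` crate (should succeed) and against the `new`
    // crate (should fail), proving the flagged change really breaks code.
}
```

CI can then run `cargo test -- --ignored` (or `--include-ignored`) on whatever cadence makes sense.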
---

One more thing I want to resolve: what about moving the lints out of `src/`? Besides that, I'm on the same page and I agree with your arguments :)
---

The lints feel to me like they are source code: they certainly affect the execution of the program, and are even explicitly included (via `include_str!`) at compile time. Personally I'd leave them where they are, under `src/`. But I don't have strong feelings about this, and if you feel strongly that they belong elsewhere, I'm fine with moving them.
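For context on the `include_str!` point: a query file embedded this way becomes part of the compiled binary, just like any other source. A minimal sketch (the path and lint name are illustrative, not the crate's actual code):

```rust
// The .ron query text is baked into the executable at compile time.
static ENUM_MISSING_QUERY: &str = include_str!("lints/enum_missing.ron");

fn main() {
    // Any change to the .ron file forces a rebuild, exactly like source code.
    println!("query is {} bytes", ENUM_MISSING_QUERY.len());
}
```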