Migrate selector tests to pytest #10282
Conversation
We have a globally available `manifest` fixture in our unit tests. In the coming commits we're going to add tests to this file which use that globally available `manifest` fixture. Prior to this commit, the locally defined `manifest` fixture was taking precedence. To get around this, the easiest solution was to rename the locally defined fixture. I had tried to isolate the locally defined fixture by moving it, and the relevant tests, to a test class like `TestNodeSelector`. However, because of _how_ the relevant tests were parameterized, this proved difficult. Basically, for readability we define a variable which holds a list of all the parameterization variables. By moving to a test class, the variables would have had to be defined directly in the parameterization macro call. Although possible, it made the readability slightly worse. It might be worth doing anyway in the long run, but instead I used the less heavy-handed alternative already described.
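The parameterization pattern described above can be sketched roughly like this; the case names and selector specs are hypothetical stand-ins, not the actual dbt-core tests:

```python
import pytest

# For readability, the parameterization cases live in a module-level
# variable instead of being written inline in the decorator call.
# Moving these tests into a class like TestNodeSelector would force
# the list back into the @pytest.mark.parametrize(...) call itself.
selector_cases = [
    ("by_name", "model_a", {"model_a"}),
    ("by_tag", "tag:nightly", {"model_a", "model_b"}),
]

@pytest.mark.parametrize("test_id,spec,expected", selector_cases)
def test_node_selector(test_id, spec, expected):
    # Stand-in assertion; the real tests select nodes from a manifest.
    assert isinstance(expected, set)
```

Note also that pytest resolves fixtures by proximity: a fixture defined in the test module shadows a same-named fixture from `conftest.py`, which is why renaming the local fixture resolves the collision.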
…in unit tests

The `Compiler.compile` method accesses `self.config.args.which`. The `config` is the `RuntimeConfig` the `Compiler` was instantiated with. Our `runtime_config` fixture was being instantiated with an empty dict for the `args` property. Thus a `which` property of the args wasn't being made available, and if `compile` was run, a runtime error would occur. To solve this, we've begun instantiating the args from the global flags via `get_flags()`. This works because we ensure the `set_test_flags` fixture, which calls `set_from_args`, is run first.
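The fixture ordering can be sketched as follows. The fixture names (`set_test_flags`, `runtime_config`) follow the prose, but the `set_from_args`/`get_flags` bodies here are illustrative stand-ins, not the actual dbt-core implementations:

```python
import pytest
from argparse import Namespace

_FLAGS = None  # stand-in for dbt's module-level global flags state

def set_from_args(args):
    # Stand-in for dbt.flags.set_from_args: stores args as global flags.
    global _FLAGS
    _FLAGS = args

def get_flags():
    # Stand-in for dbt.flags.get_flags: returns the stored global flags.
    return _FLAGS

@pytest.fixture
def set_test_flags():
    set_from_args(Namespace(which="compile"))

@pytest.fixture
def runtime_config(set_test_flags):
    # Depending on set_test_flags guarantees get_flags() is populated,
    # so config.args.which exists when Compiler.compile accesses it.
    return Namespace(args=get_flags())
```

Because `runtime_config` declares `set_test_flags` as a dependency, pytest always runs it first, so `args` is never an empty dict by the time `compile` runs.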
… use pytesting methodology
We had some tests in `test_selector.py::GraphTest` that didn't add anything on top of what was already being tested elsewhere in the file, except the parsing of models. However, the tests in `test_parser.py::ModelParserTest` cover everything being tested here (and then some). Thus these tests in `test_selector.py::GraphTest` are unnecessary and can be deleted.
There was a test `test__partial_parse` in `test_selector.py` which tested the functionality of `is_partial_parsable` of the `ManifestLoader`. This doesn't really make sense in `test_selector.py`, where we are testing selectors. We test the `ManifestLoader` class in `test_manifest.py`, which seemed like a more appropriate place for the test. Additionally, we renamed the test to `test_is_partial_parsable_by_version` to more accurately describe what is being tested.
Have two optional nits, otherwise looks good to me!
```diff
@@ -82,7 +68,7 @@ def graph():

 @pytest.fixture
-def manifest(graph):
+def mock_manifest(graph):
     return _get_manifest(graph)
```
Should we be more specific here that we are only mocking the graph functionality part?
Yeah, I can change the name to something like `manifest_with_mocked_graph`.
```python
)
expected_nodes = [model.unique_id for model in models]
assert linker.nodes() == set(expected_nodes)
assert list(linker.edges()) == [tuple(expected_nodes)]
```
Nitty nit: it is better to just write out the actual values in the test if it is not too complex. This way the reader can understand that these edges will yield a list of strings without knowing what `model.unique_id` (which is an implementation detail of `model`) is.
Would a compromise of adding a type hint be sufficient? For example:

```python
expected_nodes: List[str] = [model.unique_id for model in models]
```
My issue with writing it out by hand is that I find it doesn't scale well and makes things more brittle. From the perspective of making it easier to write tests, I think this approach will suit us better.
I should have added this point in the last comment.
So in some way, we are writing out some form of the implementation in the tests (it is fairly small in this case). IMHO unit tests should define input and output explicitly, with as few dependencies between them as possible.
I don't think writing out an explicit value makes writing tests harder.
This is not a very strongly held opinion and I am happy that you make the final call here. We can always adjust later on if we find out more.
```diff
@@ -963,25 +968,58 @@ def macros(

 @pytest.fixture
-def unit_tests(unit_test_table_model) -> list:
+def unit_tests(unit_test_table_model) -> List[UnitTestDefinition]:
```
Thanks for all the type annotation!
In the test `tests/unit/graph/test_selector.py::TestCompiler::test_two_models_simple_ref` we have a variable `expected_nodes` that we set via a list comprehension. At a glance it isn't immediately obvious what `expected_nodes` actually is: it's a list, but a list of what? One suggestion was to explicitly write out the list of strings. However, I worry about the brittleness of doing so. That might be the way we head long term, but as a compromise for now, I've added a type hint to the variable definition.
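The compromise looks roughly like this; the `Model` class and the unique ids here are minimal hypothetical stand-ins for the real model objects:

```python
from typing import List

class Model:
    # Minimal stand-in: real dbt model objects carry much more state.
    def __init__(self, unique_id: str):
        self.unique_id = unique_id

models = [Model("model.pkg.model_a"), Model("model.pkg.model_b")]

# The List[str] annotation tells the reader what the comprehension
# yields, without hard-coding the ids, which would grow more brittle
# as the fixtures evolve.
expected_nodes: List[str] = [model.unique_id for model in models]
```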
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff             @@
##             main   #10282      +/-   ##
==========================================
- Coverage   88.73%   88.65%   -0.08%
==========================================
  Files         180      180
  Lines       22474    22495      +21
==========================================
+ Hits        19942    19944       +2
- Misses       2532     2551      +19
```

Flags with carried forward coverage won't be shown.
resolves #9868
Problem
In #9868 we wanted to ensure that we can, and do, write some selector tests. After creating the ticket, we actually found some existing selector tests while reorganizing tests. Thus #9868 became about converting these tests to use pytest.
Solution
Converted the selector tests to use pytest, demonstrating that we can write selector tests.
Checklist