
Migrate selector tests to pytest #10282

Merged · 9 commits from qmalcolm--9868-migrate-selector-tests-to-pytest into main · Jun 10, 2024

Conversation

@QMalcolm (Contributor) commented Jun 8, 2024

resolves #9868

Problem

In #9868 we wanted to ensure that we can, and do, write some selector tests. After creating the ticket, we actually found some existing selector tests while reorganizing tests. Thus #9868 became about converting those tests to use pytest.

Solution

Converted the selector tests to use pytest, demonstrating that we can write selector tests.

Checklist

  • I have read the contributing guide and understand what's expected of me
  • I have run this code in development and it appears to resolve the stated issue
  • This PR includes tests, or tests are not required/relevant for this PR
  • This PR has no interface changes (e.g. macros, cli, logs, json artifacts, config files, adapter interface, etc) or this PR has already received feedback and approval from Product or DX
  • This PR includes type annotations for new and modified functions

QMalcolm added 7 commits June 6, 2024 16:29
We have a globally available `manifest` fixture in our unit tests. In the
coming commits we're going to add tests to this file which use the globally
available `manifest` fixture. Prior to this commit, the locally defined
`manifest` fixture was taking precedence. To get around this, the easiest
solution was to rename the locally defined fixture.

I had tried to isolate the locally defined fixture by moving it, and the relevant
tests, to a test class like `TestNodeSelector`. However, because of _how_ the relevant
tests were parameterized, this proved difficult. Basically, for readability we define
a variable which holds a list of all the parameterization values. By moving to a
test class, those variables would have had to be defined directly in
the `parametrize` call. Although possible, it made the readability slightly
worse. It might be worth doing anyway in the long run, but instead I used the less
heavy-handed alternative already described.
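As a minimal sketch of the shadowing behavior described above (the file contents and string values here are hypothetical, not dbt-core's actual code):

# conftest.py — the globally available fixture
import pytest

@pytest.fixture
def manifest():
    return "manifest from conftest.py"

# test_example.py — a module-local fixture with the same name would shadow
# the conftest one for every test in this file; renaming it (e.g. to
# `mock_manifest`) restores access to the global fixture
import pytest

@pytest.fixture
def mock_manifest():
    return "manifest defined locally"

def test_resolution(manifest, mock_manifest):
    # pytest resolves fixtures by nearest definition; with distinct
    # names, both fixtures are available side by side
    assert manifest == "manifest from conftest.py"
    assert mock_manifest == "manifest defined locally"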
…in unit tests

The `Compiler.compile` method accesses `self.config.args.which`. The `config`
is the `RuntimeConfig` the `Compiler` was instantiated with. Our `runtime_config`
fixture was being instantiated with an empty dict for the `args` property. Thus
a `which` property of the args wasn't available, and if `compile` was run,
a runtime error would occur. To solve this, we've begun instantiating the args from
the global flags via `get_flags()`. This works because we ensure the `set_test_flags`
fixture runs first, which calls `set_from_args`.
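As a rough sketch of that wiring (assuming dbt-core's `set_from_args`/`get_flags` API; the fixture bodies are simplified stand-ins for the real ones, and `SimpleNamespace` stands in for `RuntimeConfig`):

from argparse import Namespace
from types import SimpleNamespace

import pytest
from dbt.flags import get_flags, set_from_args


@pytest.fixture
def set_test_flags():
    # Populate the global flags, including `which`, before anything
    # reads them via get_flags()
    set_from_args(Namespace(which="run"), {})


@pytest.fixture
def runtime_config(set_test_flags):
    # Depending on set_test_flags guarantees the flags are set first;
    # args sourced from get_flags() now carry a `which` attribute, so
    # Compiler.compile's `self.config.args.which` lookup succeeds
    return SimpleNamespace(args=get_flags())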
We had some tests in `test_selector.py::GraphTest` that didn't add
anything on top of what was already being tested elsewhere in the file,
except the parsing of models. However, the tests in `test_parser.py::ModelParserTest`
cover everything being tested here (and then some). Thus these tests
in `test_selector.py::GraphTest` are unnecessary and can be deleted.
There was a test `test__partial_parse` in `test_selector.py` which tested
the `is_partial_parsable` functionality of the `ManifestLoader`. It doesn't
really make sense for that to live in `test_selector.py`, where we are
testing selectors. We test the `ManifestLoader` class in `test_manifest.py`,
which seemed like a more appropriate place for the test. Additionally, we
renamed the test to `test_is_partial_parsable_by_version` to more accurately
describe what is being tested.
@QMalcolm QMalcolm requested a review from a team as a code owner June 8, 2024 00:09
@cla-bot cla-bot bot added the cla:yes label Jun 8, 2024
@QMalcolm QMalcolm added Skip Changelog Skips GHA to check for changelog file and removed cla:yes labels Jun 8, 2024
@ChenyuLInx (Contributor) left a comment


Have two optional nits, otherwise looks good to me!

@@ -82,7 +68,7 @@ def graph():
 
 
 @pytest.fixture
-def manifest(graph):
+def mock_manifest(graph):
     return _get_manifest(graph)
ChenyuLInx (Contributor) commented:

Should we be more specific here that we are only mocking the graph functionality part?

QMalcolm (Contributor, Author) replied:

Yeah, I can change the name to something like `manifest_with_mocked_graph`.

)
expected_nodes = [model.unique_id for model in models]
assert linker.nodes() == set(expected_nodes)
assert list(linker.edges()) == [tuple(expected_nodes)]
ChenyuLInx (Contributor) commented:
Nitty nit: it is better to just write out the actual values in the test if it is not too complex. This way the reader can understand that these edges will yield a list of strings without knowing what `model.unique_id` (which is an implementation detail of `model`) is.

QMalcolm (Contributor, Author) replied:

Would a compromise of adding a type hint be sufficient? For example:

expected_nodes: List[str] = [model.unique_id for model in models]

My issue with hand-writing it out is that I find it doesn't scale well and makes things more brittle. From the perspective of making it easier to write tests, I think this approach will suit us better.

ChenyuLInx (Contributor) replied:

I should have added this point in the last comment.
So in some way, we are writing out some form of the implementation in the tests (it is fairly small in this case). IMHO unit tests should define input and output explicitly, with as few dependencies between them as possible.
I don't think writing out an explicit value makes writing tests harder.

This is not a very strongly held opinion, and I am happy for you to make the final call here. We can always adjust later on if we find out more.
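For contrast, the reviewer's suggestion of writing out explicit values would look roughly like this (the unique IDs are illustrative, not the actual fixture values):

expected_nodes = ["model.test.model_one", "model.test.model_two"]
assert linker.nodes() == {"model.test.model_one", "model.test.model_two"}
assert list(linker.edges()) == [("model.test.model_one", "model.test.model_two")]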

@@ -963,25 +968,58 @@ def macros(
 
 
 @pytest.fixture
-def unit_tests(unit_test_table_model) -> list:
+def unit_tests(unit_test_table_model) -> List[UnitTestDefinition]:
ChenyuLInx (Contributor) commented:
Thanks for all the type annotations!

QMalcolm added 2 commits June 10, 2024 09:40
In the test `tests/unit/graph/test_selector.py::TestCompiler::test_two_models_simple_ref`
we have a variable `expected_nodes` that we are setting via a list comprehension.
At a glance it isn't immediately obvious what `expected_nodes` actually is. It's a
list, but a list of what? One suggestion was to explicitly write out the list of strings.
However, I worry about the brittleness of doing so. That might be the way we head long
term, but as a compromise for now, I've added a type hint to the variable definition.
@cla-bot cla-bot bot added the cla:yes label Jun 10, 2024

codecov bot commented Jun 10, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 88.65%. Comparing base (b680c7a) to head (1f49618).
Report is 4 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main   #10282      +/-   ##
==========================================
- Coverage   88.73%   88.65%   -0.08%     
==========================================
  Files         180      180              
  Lines       22474    22495      +21     
==========================================
+ Hits        19942    19944       +2     
- Misses       2532     2551      +19     
Flag          Coverage Δ
integration   85.92% <ø> (-0.17%) ⬇️
unit          61.73% <ø> (-1.52%) ⬇️

Flags with carried forward coverage won't be shown.


@QMalcolm QMalcolm merged commit bff116d into main Jun 10, 2024
62 checks passed
@QMalcolm QMalcolm deleted the qmalcolm--9868-migrate-selector-tests-to-pytest branch June 10, 2024 17:09
Labels
cla:yes · Skip Changelog (Skips GHA to check for changelog file)
Development

Successfully merging this pull request may close these issues.

[SPIKE+] Write a unittest for a simple selection