This repository has been archived by the owner on Nov 1, 2023. It is now read-only.
Fail to test run when user/kernel test specification mismatch #20
Hybrid tests are great, but manually maintaining the mapping of user/kernel tests is hard.
We've seen scenarios in which user space tests were left orphaned/unmatched after an innocent refactor on the kernel side.
Our expectation was for KTF to detect this and fail CI, rather than ignore the issue.
What happened instead was that KTF skipped those problematic test suites entirely, which caused an additional unwanted effect: later tests would be executed under the wrong name (not sure why).
The way the user and kernel sides communicate over netlink is solid, and I found no parsing error.
The function KernelTestMgr::get_test_names() is able to detect this exact scenario; only the current problem resolution is strange. Why skip?
What puzzled me the most was the check `second.test_names.size() == 0`, which means the log call is reached only for empty test suites, while the log message itself refers to the opposite case. (Our errors were skipped silently as a result.)
Our suggested change is to emit a helpful log message for each of the problematic tests, and then exit immediately. I tested locally that this works, but perhaps it has been fixed already, since I'm running an outdated version.
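To make the proposal concrete, here is a minimal sketch of the suggested behavior. The `SuiteMap` type and `validate_suites` function are hypothetical stand-ins (not KTF's actual API) that model a suite-name-to-test-names map like the one `KernelTestMgr::get_test_names()` builds: every unmatched suite gets its own log line, and the caller can fail the run instead of skipping.

```cpp
#include <cstdio>
#include <map>
#include <string>
#include <vector>

// Hypothetical stand-in for the suite mapping KTF builds on the user side:
// suite name -> names of matching kernel tests.
using SuiteMap = std::map<std::string, std::vector<std::string>>;

// Proposed behavior: instead of silently skipping suites whose kernel side
// reported no matching tests, log each orphaned suite and report failure so
// CI can halt. Returns false if any suite is unmatched.
bool validate_suites(const SuiteMap& suites) {
    bool ok = true;
    for (const auto& entry : suites) {
        if (entry.second.empty()) {
            std::fprintf(stderr,
                         "Test suite '%s' has no matching kernel tests\n",
                         entry.first.c_str());
            ok = false;  // keep scanning so every orphan gets its own log line
        }
    }
    return ok;
}
```

With this shape, the call site would simply `exit(1)` (or propagate an error) when `validate_suites` returns false, rather than dropping the suite and continuing with a shifted test list.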