Only output failing models and violated rules by default in HumanReadableFormatter #77
I think this is not 100% correct 🤔 If I understand correctly, we want to print this output only if `score.value < self._config.fail_any_model_under`, and then only show the failing rules. Right now it will also show the failing rules of models that did not fail.
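(For context, the check under discussion could be sketched as follows. This is a minimal illustration, not the actual `HumanReadableFormatter` code; everything except `fail_any_model_under` and the `score.value` comparison is assumed.)

```python
from dataclasses import dataclass


@dataclass
class RuleResult:
    rule_name: str
    passed: bool


def format_failing(
    models: dict[str, tuple[float, list[RuleResult]]],
    fail_any_model_under: float,
) -> str:
    """Sketch: print only failing models, and only their violated rules."""
    lines = []
    for name, (score, results) in models.items():
        if score >= fail_any_model_under:
            continue  # model passed: skip it entirely
        lines.append(f"{name} (score: {score:.1f})")
        lines.extend(f"  ✗ {r.rule_name}" for r in results if not r.passed)
    return "\n".join(lines)
```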
That's a good point! My line of thought here was that, in case a project as a whole fails and only very few model scores are too low, as a user I would probably also be interested in the failing rules of all models: imagine you have 100 models, only ~5 fail but ~20 have lowish scores while 75 are perfect. In that case it could be of interest to also see the failing rules of all models. I also referred to this in the issue discussion: #71 (comment). But I guess in that case one could just increase `fail_any_model_under`. Probably this is a bit too implicit and I could remove it (e.g. only test for `score.value < self._config.fail_any_model_under`). Just let me know which way you prefer and I will adjust it accordingly.
Ok I got it! 👍 I think for the scenario you describe it is indeed useful to be able to show all the failing rules, and we can definitely leave that as the default. I do think we should also have the option to show only failing models, with their failing rules. Maybe we should have two flags, `--show-all-rules` and `--show-all-models`, so the user is able to further specify the output. @matthieucan curious to hear your opinion as well! So then the user is able to:
- show failing models, with all their rules (`--show-all-rules`)
- show all models, with their failing rules (`--show-all-models`)
- show all models, with all rules (both flags)
Good to align on those expectations indeed.
Maybe the `--show` parameter could take an argument, e.g.:
- `--show all` - show all models, all rules
- `--show failing-models` - show failing rules of failing models
- `--show failing-rules` - show failing rules of all models (the default?)

I'm not sure if the first scenario mentioned (show failing models, with all rules) is useful, considering the option to use `--select` in combination. For example, `--select my_model --show all` might be more actionable. But let me know what you think :)
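(As an illustration of this proposal: a `--show` option with an enumerated argument could be declared roughly as below. This is a sketch assuming a click-based CLI; the option name and values come from the thread, everything else is made up.)

```python
import click


@click.command()
@click.option(
    "--show",
    type=click.Choice(["all", "failing-models", "failing-rules"]),
    default="failing-rules",  # the default proposed in this thread
    help="Which models and rules to include in the output.",
)
def lint(show: str) -> None:
    # Stand-in command: the real CLI would pass `show` on to the formatter.
    click.echo(f"show mode: {show}")


if __name__ == "__main__":
    lint()
```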
You are right that option 1 (failing models with all rules) is probably not very useful. I think the direction of `--show something` would be a nice one; it's indeed simpler than providing two flags. And agreed that `--show failing-rules` should be the default!
I think it's simpler for users to remember if the same term is used for the CLI options, as it's an abstraction over the code base.
Hey @thomend! Will you be able to continue on this PR sooner or later? We can assist if needed.
Hi @jochemvandooren, sorry for not picking it up earlier! I initially planned to get back to it much sooner. I am picking it up this evening and will come back to you asap - hope that works for you.
I just added the different options as discussed above. I also took care of the merge conflicts. Tests run through, and the pre-commit hooks pass as well.
The `show` parameter takes the following options now:
- `--show all`
- `--show failing-models`
- `--show failing-rules` (default)
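(A rough sketch of what the three modes imply for filtering; the data shapes here are assumptions for illustration, not the actual formatter internals.)

```python
def filter_output(show, models):
    """models: list of (name, failed, rule_results) tuples (assumed shape);
    each rule result is assumed to have `.passed` and `.rule_name`."""
    if show == "all":
        # All models, all rules.
        return [(name, results) for name, _, results in models]
    if show == "failing-models":
        # Failing rules of failing models only.
        return [
            (name, [r for r in results if not r.passed])
            for name, failed, results in models
            if failed
        ]
    # "failing-rules" (default): failing rules of all models.
    return [
        (name, [r for r in results if not r.passed])
        for name, _, results in models
    ]
```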
No problem @thomend! Was afraid you had forgotten 😢 I will review tomorrow!
Can we have a test for `failing-models` as well?
I added a parametrized test covering the different options of the `show` parameter - hope this works :)
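(Such a parametrized test could look roughly like this; `visible_models` is a hypothetical stand-in for the formatter under test, not the real code.)

```python
import pytest

MODELS = [
    ("good_model", False),  # passed
    ("bad_model", True),    # failed
]


def visible_models(show: str) -> list[str]:
    # Hypothetical stand-in: which model names appear in the output.
    if show == "failing-models":
        return [name for name, failed in MODELS if failed]
    return [name for name, _ in MODELS]  # "all" and "failing-rules"


@pytest.mark.parametrize(
    "show, expected",
    [
        ("all", ["good_model", "bad_model"]),
        ("failing-models", ["bad_model"]),
        ("failing-rules", ["good_model", "bad_model"]),  # default
    ],
)
def test_show_modes(show: str, expected: list[str]) -> None:
    assert visible_models(show) == expected
```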
Why did all the B's turn into b? 😁
Hexadecimal is not case-sensitive, but it is indeed strange to see those changed 🤔
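(A quick illustration that the case of hex digits never changes the value:)

```python
# Hex literals, parsed hex strings, and escape sequences are all
# case-insensitive with respect to the digits A-F.
assert 0xB0 == 0xb0
assert int("B0", 16) == int("b0", 16)
assert "\x1B" == "\x1b"
```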
Sorry for that! I am using the ruff VS Code extension, and the autoformat on save did that: hex codes and Unicode sequences. Although I don't know why the linter during pre-commit would not pick it up and revert it? The ruff version of the VS Code extension is 0.6.6, I believe (which is newer than the one running in the pre-commit hook).
Let me know if you want it reverted.