feat: add fail_project_under and fail_any_model_under options #58
Conversation
Thanks for your contribution!
Left some tiny comments, already looks good 🚀
Super nice feature 🚀, works very well. I've added some nitpicky comments, just to align with the style we have 😅
Looks good to me! Just missing an entry in the CHANGELOG, could you add it? 🙇♂️
pre-approve from my side 🚀
@jochemvandooren as we've added a Breaking Change, i.e. the run will now fail if the default model score (5.0) isn't met, should this bump the version to 1.0.0 based on the project's versioning policy?
"We're following semantic versioning, meaning if the major version is …"
src/dbt_score/cli.py (outdated diff):
```python
failed_models = {
    model.unique_id: score.value
    for model, score in evaluation.scores.items()
    if score.value < config.fail_any_model_under
}

if failed_models:
    print(f"Error: fail_any_model_under = {config.fail_any_model_under}")
    for model, value in failed_models.items():
        print(f"Model {model} scored {round(value, 1)}")
    ctx.exit(1)

if evaluation.project_score.value < config.fail_project_under:
    print(f"Error: fail_project_under = {config.fail_project_under}")
    ctx.exit(1)
```
I think it'd make sense to have this output done by src/dbt_score/formatters/human_readable_formatter.py. The Formatter class doesn't know about the config, but it could easily receive it upon instantiation in __init__. I'm happy to take care of this if you prefer - let me know.
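A minimal sketch of what this refactor could look like, assuming a simplified, hypothetical formatter interface (the class and method names below are illustrative, not the project's actual API): the config is handed to the formatter when it is constructed, so threshold checks and their output live next to the rest of the human-readable reporting instead of in cli.py.

```python
# Hypothetical sketch: the formatter receives the config at construction time.
class HumanReadableFormatter:
    def __init__(self, config):
        # Keep the config around so thresholds such as fail_any_model_under
        # and fail_project_under are available wherever output is produced.
        self._config = config

    def project_evaluated(self, project_score, model_scores):
        # model_scores is assumed to map a model's unique_id to its numeric score.
        for model_id, score in model_scores.items():
            if score < self._config.fail_any_model_under:
                print(f"Model {model_id} scored {round(score, 1)} "
                      f"(below fail_any_model_under = {self._config.fail_any_model_under})")
        if project_score < self._config.fail_project_under:
            print(f"Project scored {round(project_score, 1)} "
                  f"(below fail_project_under = {self._config.fail_project_under})")
```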
@matthieucan I've pushed the other PR fixes but happy for you to fix this. I'm struggling for time this week
@WobblyRobbly Unfortunately it seems I can't push to this PR, despite the "Maintainers are allowed to edit this pull request" option. Is there some branch protection on your side?
@matthieucan - I've invited you as a collaborator ... let me know how that goes
Thanks, that worked!
Very nice, approved! 🙌 Two small comments, which I will leave up to one of you!
```python
except FileNotFoundError:
    logger.error(
        "dbt's manifest.json could not be found. If you're in a dbt project, be "
        "sure to run 'dbt parse' first, or use the option '--run-dbt-parse'."
    )
    ctx.exit(2)

if (
    any(x.value < config.fail_any_model_under for x in evaluation.scores.values())
    # (diff excerpt ends here)
```
Not a big fan of using `x` personally, `score` would be more readable IMO
That makes the line too long, and once it's spread over multiple lines I think we lose readability
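For illustration only, the two spellings under discussion, assuming the expression stays inside the `if (...)` shown in the excerpt above; with the longer name the generator expression no longer fits on one line and has to wrap:

```python
# Short throwaway name keeps the check on a single line:
any(x.value < config.fail_any_model_under for x in evaluation.scores.values())

# The more explicit name forces the expression to spread over several lines:
any(
    score.value < config.fail_any_model_under
    for score in evaluation.scores.values()
)
```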
@WobblyRobbly Thank you so much for your contribution! First external PR that we merged for this project 🚀 🥇
Thanks for all your help on this one!!
This pull request is for Issue #55
Decisions
- `fail_project_under` and `fail_any_model_under` can be specified in the `pyproject.toml` or via the command line (see the sketch after this list)
- A default of 0.0 is set for `fail_project_under` and `fail_any_model_under`
- The checks are done in `cli.py`, and that required the `Evaluation` instance to be returned from `lint_dbt_project`
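A hedged sketch of how the two options might be set, to make the first decision above concrete; the `[tool.dbt-score]` section name and the dashed CLI flag spellings are assumptions for illustration, not taken verbatim from this PR:

```toml
# pyproject.toml (section name assumed)
[tool.dbt-score]
fail_project_under = 7.5
fail_any_model_under = 5.0
```

or, on the command line (flag spellings assumed): `dbt-score lint --fail-project-under 7.5 --fail-any-model-under 5.0`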
Please let me know your thoughts on any potential changes.
Closes #55