
feat: add fail_project_under and fail_any_model_under options #58

Merged
9 commits merged into PicnicSupermarket:master on Jun 20, 2024

Conversation

@WobblyRobbly (Contributor) commented on Jun 16, 2024

This pull request is for Issue #55

Decisions

  • `fail_project_under` and `fail_any_model_under` can be specified in pyproject.toml or via the command line (see the example below)
  • to avoid making this a breaking change, a default value of 0.0 is set for both `fail_project_under` and `fail_any_model_under`
  • the exit point for failing is in cli.py, which required the Evaluation instance to be returned from lint_dbt_project
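For illustration, a minimal sketch of how the new options could be set. The `[tool.dbt-score]` section name and the CLI flag spellings are assumptions based on the option names; check docs/configuration.md for the authoritative syntax.

# pyproject.toml
[tool.dbt-score]
fail_project_under = 7.5
fail_any_model_under = 5.0

# Equivalent command-line usage (flag names assumed):
# dbt-score lint --fail-project-under 7.5 --fail-any-model-under 5.0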

Please let me know your thoughts on potential changes.

Closes #55 (Configurable acceptable project score)

@druzhinin-kirill (Contributor) left a comment:

Thanks for your contribution!
Left some tiny comments, already looks good 🚀

Resolved review threads on: tests/test_cli.py, src/dbt_score/evaluation.py (two threads), src/dbt_score/cli.py, src/dbt_score/config.py
@jochemvandooren (Contributor) left a comment:

Super nice feature 🚀, works very well. I've added some nitpicky comments, just to align with the style we have 😅

Looks good to me! Just missing an entry in the CHANGELOG, could you add it? 🙇‍♂️

Resolved review threads on: docs/configuration.md (two threads), src/dbt_score/cli.py (two threads)
@druzhinin-kirill (Contributor) left a comment:

pre-approve from my side 🚀

@WobblyRobbly (Author) commented:

@jochemvandooren as we've added a breaking change (i.e. the command will fail if the default model score of 5.0 isn't met), should this bump the version to 1.0.0 based on ?

## [1.0.0] - 2024-06-19

- Add `fail_project_under` configuration
- Add `fail_any_model_under` configuration
- **breaking:** default values of `5.0` for `fail_project_under` and `fail_any_model_under` will cause the command to exit with return code 1

@matthieucan (Contributor) replied:

> @jochemvandooren as we've added a breaking change (i.e. the command will fail if the default model score of 5.0 isn't met), should this bump the version to 1.0.0 based on ?
>
> ## [1.0.0] - 2024-06-19
>
> - Add `fail_project_under` configuration
> - Add `fail_any_model_under` configuration
> - **breaking:** default values of `5.0` for `fail_project_under` and `fail_any_model_under` will cause the command to exit with return code 1

We're following semantic versioning, meaning that while the major version is 0, a breaking change bumps the minor version (e.g. 0.4.0 → 0.5.0 rather than 1.0.0). However, there's no need for this to happen in the changelog file for this PR; it is done when releasing :)

Comment on lines 134 to 148
# Collect every model scoring below the per-model threshold.
failed_models = {
    model.unique_id: score.value
    for model, score in evaluation.scores.items()
    if score.value < config.fail_any_model_under
}

if failed_models:
    print(f"Error: fail_any_model_under = {config.fail_any_model_under}")
    for model, value in failed_models.items():
        print(f"Model {model} scored {round(value, 1)}")
    ctx.exit(1)

# Fail the whole run if the project score is below the project threshold.
if evaluation.project_score.value < config.fail_project_under:
    print(f"Error: fail_project_under = {config.fail_project_under}")
    ctx.exit(1)
@matthieucan (Contributor) commented:

I think it'd make sense to have this output done by src/dbt_score/formatters/human_readable_formatter.py. The Formatter class doesn't know about the config, but it could easily receive it upon instantiation in __init__. I'm happy to take care of this if you prefer - let me know.
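A minimal sketch of that suggestion, assuming the formatter simply receives the config at construction time; the class and method names below are hypothetical, not the actual dbt-score API:

class HumanReadableFormatter:
    """Sketch only: renders scores and reports threshold failures."""

    def __init__(self, config):
        # Receiving the config here gives the formatter access to the
        # fail_any_model_under / fail_project_under thresholds.
        self.config = config

    def model_evaluated(self, model_id, score):
        marker = "FAIL" if score < self.config.fail_any_model_under else "OK"
        print(f"{marker} {model_id}: {round(score, 1)}")

    def project_evaluated(self, project_score):
        if project_score < self.config.fail_project_under:
            print(
                f"Project scored {round(project_score, 1)}, below "
                f"fail_project_under = {self.config.fail_project_under}"
            )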

@WobblyRobbly (Author) replied:

@matthieucan I've pushed the other PR fixes, but I'm happy for you to fix this one; I'm struggling for time this week.

@matthieucan (Contributor) replied:

@WobblyRobbly Unfortunately it seems I can't push to this PR, despite the "Maintainers are allowed to edit this pull request" option. Is there some branch protection on your side?

@WobblyRobbly (Author) replied:

@matthieucan I've invited you as a collaborator... let me know how that goes.

@matthieucan (Contributor) replied:

Thanks, that worked!

@jochemvandooren (Contributor) left a comment:

Very nice, approved! 🙌 Two small comments, which I will leave up to one of you!

except FileNotFoundError:
    logger.error(
        "dbt's manifest.json could not be found. If you're in a dbt project, be "
        "sure to run 'dbt parse' first, or use the option '--run-dbt-parse'."
    )
    ctx.exit(2)

if (
    any(x.value < config.fail_any_model_under for x in evaluation.scores.values())
Review comment (Contributor):

Not a big fan of using `x` personally; `score` would be more readable IMO.

Reply (Contributor):

That makes the line too long, and once it's spread over multiple lines I think we lose readability.
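For context, the tradeoff being discussed, shown side by side (an illustration reusing the names from the hunk above, not code from the PR):

# Short name: the whole check fits on one line.
any(x.value < config.fail_any_model_under for x in evaluation.scores.values())

# Descriptive name: clearer, but no longer fits within the line-length limit.
any(
    score.value < config.fail_any_model_under
    for score in evaluation.scores.values()
)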

Resolved review thread on: docs/configuration.md
@matthieucan enabled auto-merge (squash) on June 20, 2024 at 12:41.
@matthieucan merged commit b95d691 into PicnicSupermarket:master on Jun 20, 2024, with 3 checks passed.
@matthieucan (Contributor) commented:

@WobblyRobbly Thank you so much for your contribution! First external PR that we merged for this project 🚀 🥇
Looking forward to more of the same!

@WobblyRobbly (Author) replied:

Thanks for all your help on this one!!
