Qmalcolm add event for debugging artifact writing #10936
Conversation
If it isn't apparent, I'm trying to reduce the need for tracking variables to make it easier to extract the execution code into a separate private function, making it easier to see what is happening.
Thank you for your pull request! We could not find a changelog entry for this change. For details on how to document a change, see the contributing guide.
Previously in `get_execution_status`, if an exception other than a `DbtRuntimeError` was raised, the `finally` block would be entered, but `status`/`message` would never have been set, so attempting to return them raised a `status is not defined` exception. Tangentially, there is another issue where somehow the `node_status` is becoming `None`. In all my experimenting with `get_execution_status`, trying to return an undefined variable from the `finally` always raised an undefined-variable exception. However, if some Python version instead hands back `None`, this fix should also solve that.
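For context, a minimal sketch of the failure mode described above, with a hypothetical fix (illustrative names only, not dbt's actual code):

```python
def get_status_buggy(should_fail: bool):
    try:
        if should_fail:
            raise RuntimeError("not a DbtRuntimeError")
        status, message = "success", "ok"
    finally:
        # If the try block raised before the assignments, `status` was never
        # bound, so this line raises UnboundLocalError ("name 'status' is not
        # defined"). The `return` inside `finally` also swallows whatever
        # exception was in flight, masking the original error.
        return status, message


def get_status_fixed(should_fail: bool):
    # One way to avoid the problem: bind fallback values up front so the
    # names always exist by the time the `finally` returns. (A sketch of
    # the idea, not necessarily the exact change made in this PR.)
    status, message = "error", None
    try:
        if should_fail:
            raise RuntimeError("not a DbtRuntimeError")
        status, message = "success", "ok"
    finally:
        return status, message
```

Here `get_status_buggy(True)` raises `UnboundLocalError`, while `get_status_fixed(True)` returns `("error", None)`.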
Force-pushed from e1a0ca1 to 0223f3a.
@@ -0,0 +1,6 @@
kind: Features
body: Emit debug logging event whenever artifacts are written
Do we still need this?
I think "need" is a strong word. We wanted to add this event because we couldn't tell if the writing of the artifact was actually happening when debugging the issue of #10934 in cloud. Having this log could thus reduce debug time in the future if something like this happens again. Additionally, it's possible that throughout execution we end up writing an artifact multiple times, which is likely wasteful. Having a log like this would be helpful in identifying if that is happening. Additionally, the fix for the #10934 isn't guaranteed by the fix in 0223f3a. That fix is fairly narrowly scoped. It is possible if an exception gets raised elsewhere, we might run into a situation again where an artifact is not being written (thus the log is useful).
Tangentially, I just admitted the fix in 0223f3a is pretty limited. This is unfortunately necessary. I tried moving the exception handling up some levels, but quickly you run into a labyrinth of catching too much in instances where failure is expected 🫠
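To make the idea concrete, here is a minimal sketch of emitting a debug-level event whenever an artifact is written (plain `logging` for illustration; dbt's actual event system differs):

```python
import json
import logging
from pathlib import Path

logger = logging.getLogger(__name__)


def write_artifact(path: Path, payload: dict) -> None:
    """Write a JSON artifact and record the write in the debug log."""
    path.write_text(json.dumps(payload))
    # The debug event makes every write visible in the logs, which helps
    # confirm a write happened at all and spot redundant repeated writes.
    logger.debug("Artifact written to %s", path)
```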
Generally looks good to me!
I believe the adding-more-events part should belong in a separate PR, if that's useful. Thoughts?
@@ -93,11 +93,16 @@ def get_execution_status(sql: str, adapter: BaseAdapter) -> Tuple[RunStatus, str
        response, _ = adapter.execute(sql, auto_begin=False, fetch=False)
        status = RunStatus.Success
        message = response._message
    except (KeyboardInterrupt, SystemExit):
Does this mean that if we cancel during `get_execution_status`, no result file will be written?
Correct. We could re-catch the `KeyboardInterrupt` / `SystemExit` in `after_run` and do something similar to what is done in `execute_nodes`.
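As a rough illustration of that idea (a sketch under assumed names, not dbt's actual `after_run` or `execute_nodes` code):

```python
def run_and_always_write_results(body, write_results):
    """Run `body`; even if interrupted, write the result file before exiting."""
    try:
        body()
    except (KeyboardInterrupt, SystemExit):
        # Re-catch the interrupt high enough in the call stack that the
        # result file still gets written, then re-raise so the process
        # still exits as the user requested.
        write_results()
        raise
    else:
        write_results()
```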
@@ -645,7 +650,6 @@ def safe_run_hooks(
        return RunStatus.Success

    status = RunStatus.Success
    failed = False
Thanks for making this logic more concise.
If I understand it correctly, we want the behavior to be: if a node fails, we skip the rest?
I think the observation that `status` and `failed` are duplicative is spot on, but I am wondering: should we get rid of `status` and rename `failed` to something like `skip_rest_due_to_failure` instead? In the current structure I have to do some guessing and validating in order to understand what the code is trying to do.
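For concreteness, a sketch of the suggested shape (hypothetical names and structure, not the actual `safe_run_hooks` code):

```python
from enum import Enum


class RunStatus(Enum):
    Success = "success"
    Error = "error"
    Skipped = "skipped"


def run_hooks_in_order(hooks):
    """Run (name, hook) pairs in order; once one fails, skip the rest."""
    skip_rest_due_to_failure = False
    results = []
    for name, hook in hooks:
        if skip_rest_due_to_failure:
            results.append((name, RunStatus.Skipped))
            continue
        try:
            hook()
            results.append((name, RunStatus.Success))
        except Exception:
            # One descriptively named flag replaces the status/failed pair.
            skip_rest_due_to_failure = True
            results.append((name, RunStatus.Error))
    return results
```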
That is correct: if one hook fails, the rest are expected to be skipped.
I can clean up the code a bit more. I was trying to avoid too much refactoring in this PR, as it's generally best practice to separate feature/bug work from refactors; I actually walked back some refactoring work I did in this regard.
To the keen-eyed observer, 5e9d3d6 foreshadowed such work, work that I ultimately dropped before pushing my branch to reduce the number of changes:
If it isn't apparent, I'm trying to reduce the need for tracking variables
to make it easier to extract the execution code into a separate private function,
making it easier to see what is happening.
@ChenyuLInx what if I actually split this into 3 PRs? It'll be easier to see what is happening, and it separates the work streams appropriately.
Resolves #10934
Problem
When an exception other than `DbtRuntimeError` was raised in `get_execution_status`, the `finally` block attempted to return variables that were never set, and it was hard to tell from the logs whether result artifacts were actually being written.
Solution
Handle interrupts explicitly in `get_execution_status` so a status is always returned, and emit a debug logging event whenever artifacts are written.
Checklist