[CT-1376] [Bug] Invalid Number Precision error for incremental model in 1.3 #6107
Comments
Also found this; no idea why this changed in 1.3. My workaround, since I didn't want to rebuild the model, was to cast the failing column. This wouldn't have worked anyway... https://docs.snowflake.com/en/sql-reference/data-types-numeric.html#number
I can confirm the same issue as @kellyldougan. Reverting to version 1.2.2 resolved the issue for now.
+1. The workaround I've found is to disable on_schema_change = 'sync_all_columns' and use the default, which is ignore.
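For reference, that workaround corresponds to a config block along these lines (a sketch; any other settings are whatever your model already uses):

{{
    config(
        materialized='incremental',
        unique_key='col_1',
        on_schema_change='ignore'
    )
}}

Since ignore is the default, simply removing the on_schema_change setting has the same effect.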
@joshuataylor noted here (and affirmed by @rloredo above) that:
Hi folks! I've been trying to replicate this, but can't seem to trigger the behavior detailed here. There are a few people who seem to have reported this for 1.3. Any evidence it's also on other versions as of now? Do we have a simplest-possible example that triggers this? I'm working on this in the background, but would love to see your responses to home in on this faster.
Hey @VersusFacit, the steps to reproduce above give me the expected result (i.e., the error). Note that this is only failing in Snowflake (at least, all of us here are using Snowflake), and only for models that were first created using a 1.2 version.
I have not been able to reproduce this personally yet. I will spin up a repo showing the steps I tried to see if anyone else can reproduce, and I'll share a link in a follow-up comment. Here's where I suspect the issue is arising (which was introduced in dbt-labs/dbt-snowflake#93). p.s. If this is confirmed to only affect Snowflake, we'll transfer this issue to the dbt-snowflake repo.
Could someone try the following and see if it works? e.g., @kellyldougan, @rloredo, or anyone else who was able to reproduce this. It contains two changes, namely:
{{
config(
materialized='incremental',
incremental_strategy='delete+insert',
unique_key='col_1',
pre_hook= "{% if is_incremental() %}
TRUNCATE IF EXISTS {{ this }}
{% endif %}",
on_schema_change='sync_all_columns'
)
}}
with
seed_ref as (
select
to_number(col_1) as col_1,
to_number(col_2) as col_2
from {{ ref('test_seed') }}
)
select
col_1,
round(col_1 / col_2 * 100, 1) as cost_pct,
round(col_1 * (col_1 / col_2), 2) as cost_dollars
from seed_ref
As noted earlier, I was not able to trigger the error. Here's the repo and instructions I used to try to reproduce this. Could someone try out the instructions in this repo and let me know whether or not you are able to reproduce it in your own environment?
@dbeatty10 I can't access the link you shared. I can try tomorrow at work.
@rloredo I accidentally had the repo as private 😅 -- I've made it public now so hopefully you can access it.
@dbeatty10 I couldn't reproduce it using the steps you shared. Let me explain a little bit what it looks like: when I update to 1.3 and run the model, dbt issues an alter statement to sync the column type.
That fails with a message saying that changing the scale of a number is not supported.
So, to reproduce it, I think we need to mimic these (38,6) and (38,2) schema changes. What I don't know is why that query is being triggered if we didn't change any definition in the model... I hope this helps a little bit with how to reproduce it.
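Piecing together the log excerpts quoted later in this thread, the failing statement looks roughly like this (the table and column names are combined from different comments here for illustration; the exact SQL dbt generates may differ):

alter table DEV_RLOREDO.enriched.enr_reservations
    alter column COST_PCT set data type number(38,2);

-- Snowflake rejects this with an error along the lines of:
-- SQL compilation error: cannot change column COST_PCT from type
-- NUMBER(38,6) to NUMBER(38,2) because changing the scale of a number
-- is not supported.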
What happens if you add the following macro to your project and then try to re-run using 1.3?

{% macro dbt_snowflake_get_tmp_relation_type(strategy, unique_key, language) %}
-- always table
{{ return('table') }}
{% endmacro %}

Does it still give that error message? This will help confirm or rule out whether a new feature for incremental models introduced in 1.3 is affecting your models.
@dbeatty10 I tried this macro.
Thanks for trying that out -- this helps eliminate one possibility. When the incremental model sees the change in the column's scale, it tries to alter the column accordingly.

Key question

The key question for us to figure out is: what is the expression for your column that is triggering the error? Is it really something involving rounding? Out of curiosity, are you using any macros to transform the column?

Within Snowflake, the "scale and precision of the output of an arithmetic operation depends on the scale and precision of the input(s)." Each type of arithmetic operation has different rules. For example, dividing two numbers yields an output whose scale differs from that of the inputs.

Examining the data types

If you look in your logs, you should see statements like:
create or replace temporary table DEV_RLOREDO.enriched.enr_reservations__dbt_tmp as
...

then:

describe table DEV_RLOREDO.enriched.enr_reservations__dbt_tmp
...
describe table DEV_RLOREDO.enriched.enr_reservations

and finally:

drop view if exists DEV_RLOREDO.enriched.enr_reservations__dbt_tmp cascade

Without that macro I gave you, it might be slightly different for dbt-core 1.3 (which uses a temporary view instead of a temporary table):
create or replace view DEV_RLOREDO.enriched.enr_reservations__dbt_tmp
...

then:

describe table DEV_RLOREDO.enriched.enr_reservations__dbt_tmp
...
describe table DEV_RLOREDO.enriched.enr_reservations

and finally:

drop view if exists DEV_RLOREDO.enriched.enr_reservations__dbt_tmp cascade

You could copy these sections of your log file and run them in the Snowflake web UI to inspect the data types via the describe output.

One more question

Is the SQL logic that you see in the logs for the temporary table/view exactly the same when you run in 1.3 vs. 1.2? Or is it slightly different in some way?

create or replace temporary table DEV_RLOREDO.enriched.enr_reservations__dbt_tmp as
[SQL logic to examine for differences]
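To see how Snowflake derives these types, here's a minimal, self-contained sketch (hypothetical table name and values) you can run in a worksheet; it mirrors the rounding expression from the original report:

create or replace temporary table type_demo as
select
    to_number('100') as col_1,   -- defaults to NUMBER(38,0)
    to_number('3')   as col_2,   -- defaults to NUMBER(38,0)
    round(to_number('100') / to_number('3') * 100, 1) as cost_pct;

describe table type_demo;
-- col_1 and col_2 show up as NUMBER(38,0), while cost_pct picks up
-- a scale of 1 from round(..., 1), matching the NUMBER(38,1) in the
-- error message reported above.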
So... now I have the error in both versions. The original table has number(38,6), while the new relation (a view in 1.3, a temporary table in 1.2) with rounding is number(38,2). Running a full refresh should create a new table with column type number(38,2), and it does, which solves the syncing issue. What I can take from this is that the behaviour of dbt's sync_all_columns is the expected one: the precision of the type changed, and the column should be synced. However, the sync is not accepted by Snowflake, because changing the scale of a number is not supported. It could be argued that the command should avoid updating number precision (it is a parameter of a type, not a type itself), and this way we would avoid the problem, BUT we would then fail to preserve the correct precision of the number type. Also, a "sync_all_but_not_number" would be an inelegant solution... I guess the fix for the error is full refreshing the model, and if that doesn't work, dropping the table and "forcing" the full refresh. Thanks @dbeatty10 for all your time triaging this!
You're welcome @rloredo! @kellyldougan, @joshuataylor and/or @celine-geelen: eager to troubleshoot this further if there are no changes to the column definitions and you can still replicate the error.
I put up an example repo with instructions here, but I was unable to replicate any 1.2 -> 1.3 issues. Through discussion and troubleshooting with @rloredo, this appears to be explained by the definition of a column changing, rather than by a difference between dbt-snowflake 1.2 and 1.3. Namely, Snowflake allows the precision of a number column to change, but not its scale.

But we're open to the possibility that this type of error can have multiple causes -- we just haven't seen any evidence of that yet (and we'd need some!).

Example

If you have a column in a model like:

pizza_slices / people as slices_per_person

Assuming integer inputs, Snowflake's division rules will give the output a nonzero scale. But the scale will decrease down to 2 if you round it:

round(pizza_slices / people, 2) as slices_per_person

Any increase or decrease of the scale of a number within an incremental model using a config of on_schema_change='sync_all_columns' will yield the error reported here.

Solutions

In that case, you'd have at least two options to solve this: run a full refresh so the table is rebuilt with the new column type, or pin the column to a fixed scale (e.g., with an explicit cast or consistent rounding) so the type doesn't change between runs.
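For the full-refresh option, the command is the standard dbt flag (the model name here is a placeholder):

dbt run --full-refresh --select my_incremental_model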
Apologies for the delay in getting back to you. I was able to replicate this on our account using the steps outlined in the first post. Could it be related to weird Snowflake feature flags, region-based rollouts, or a myriad of other fun things? I was also getting it with the latest version. I'll dig into this over the weekend and record the exact replication steps (if possible). We're on us-east-1.
Thanks for replicating @joshuataylor 🙏 Could you try using the custom macro described here to override certain behavior in 1.3? I'd be very curious if this does/doesn't work for you.
Yes, it could! (But hopefully not. 🤞) I did do some initial exploration to see if any relevant feature flags jumped out, but came up empty. I didn't dig into any release notes to see if anything relevant changed on Snowflake's side. Ideally, we'd be able to modify this example so that I can replicate it as well. I've only tested using the example repo linked above. Thanks for your help getting to the bottom of this.
I'm so confused by this, as I was hitting this every time about 2 weeks ago! I was doing the exact steps in the initial ticket and could replicate it 100% of the time (us-east-1 Snowflake). It's not happening now, and I'm so confused why! The only thing I can think of is some quirky bug in Snowflake that they may have since fixed?
🤷 Yeah, definitely a head-scratcher! I obviously believe what you were experiencing, especially since there were several of you reporting this at the same time. My best guess at the moment is that something changed on Snowflake's side. If so, that would explain why it worked again once you went back to dbt-core 1.2: 1.2 always uses a temporary table rather than a temporary view to determine data types, whereas dbt-core 1.3 will preferentially use views when it can (as a performance optimization).
I'm no longer running into this issue; it seems like Snowflake pushed a fix? Going to close this for now.
Sounds good, @kellyldougan. Thanks again for reporting this. We can re-open in the future if needed.
Is this a new bug in dbt-core?
Current Behavior
I have an incremental model that works in version 1.2 but not in 1.3. The model has a config block with on_schema_change='sync_all_columns', and when I run the model in 1.3 I get the following error:
SQL compilation error: cannot change column COST_PCT from type NUMBER(38,1) to NUMBER(38,0) because changing the scale of a number is not supported.
Expected Behavior
I'm expecting this model to run incrementally without errors.
Steps To Reproduce
Using version 1.2, in Snowflake, create a seed test_seed with the following data, and create an incremental model as follows (see the sketch below):
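A sketch of what this setup looks like, based on the seed_ref model shared earlier in this thread; the seed values are hypothetical, and the config is assumed to match the sync_all_columns setup described above:

test_seed.csv:

col_1,col_2
100,3
250,7

models/test_incremental.sql (hypothetical file name):

{{
    config(
        materialized='incremental',
        unique_key='col_1',
        on_schema_change='sync_all_columns'
    )
}}

with
seed_ref as (
    select
        to_number(col_1) as col_1,
        to_number(col_2) as col_2
    from {{ ref('test_seed') }}
)
select
    col_1,
    round(col_1 / col_2 * 100, 1) as cost_pct,
    round(col_1 * (col_1 / col_2), 2) as cost_dollars
from seed_ref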
Rerun the incremental model in version 1.3 and you will get the error shown above.
Relevant log output
No response
Environment
Which database adapter are you using with dbt?
snowflake
Additional Context
No response