From 2c538a6138bc8961c069ee1b4816ed5636a416e9 Mon Sep 17 00:00:00 2001 From: Joel Labes Date: Sat, 7 Dec 2024 08:04:48 +1300 Subject: [PATCH 1/9] Swap latest callouts for fka versionless inline (#6607) ## What are you changing in this pull request and why? [As discussed in Slack](https://dbt-labs.slack.com/archives/C02NCQ9483C/p1733449411561619?thread_ts=1733149843.050859&cid=C02NCQ9483C) - swap callouts for a more inline explanation of the changes, for blog posts where Versionless isn't the main focus so the rename doesn't matter as much. ## Checklist - [x] I have reviewed the [Content style guide](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/content-style-guide.md) so my content adheres to these guidelines. --- .../2021-11-23-how-to-upgrade-dbt-versions.md | 7 ++----- website/blog/2024-04-22-extended-attributes.md | 6 +----- ...4-06-12-putting-your-dag-on-the-internet.md | 18 +++++++----------- 3 files changed, 10 insertions(+), 21 deletions(-) diff --git a/website/blog/2021-11-23-how-to-upgrade-dbt-versions.md b/website/blog/2021-11-23-how-to-upgrade-dbt-versions.md index 6a9889d3033..0b1f1fe26bd 100644 --- a/website/blog/2021-11-23-how-to-upgrade-dbt-versions.md +++ b/website/blog/2021-11-23-how-to-upgrade-dbt-versions.md @@ -12,17 +12,14 @@ date: 2021-11-29 is_featured: true --- -import Latest from '/snippets/_release-stages-from-versionless.md' - - - :::tip February 2024 Update It's been a few years since dbt-core turned 1.0! Since then, we've committed to releasing zero breaking changes whenever possible and it's become much easier to upgrade dbt Core versions. In 2024, we're taking this promise further by: + - Stabilizing interfaces for everyone — adapter maintainers, metadata consumers, and (of course) people writing dbt code everywhere — as discussed in [our November 2023 roadmap update](https://github.com/dbt-labs/dbt-core/blob/main/docs/roadmap/2023-11-dbt-tng.md). -- Introducing **Latest** release track in dbt Cloud. 
No more manual upgrades and no need for _a second sandbox project_ just to try out new features in development. For more details, refer to [Upgrade Core version in Cloud](/docs/dbt-versions/upgrade-dbt-version-in-cloud). +- Introducing [Release tracks](/docs/dbt-versions/cloud-release-tracks) (formerly known as Versionless) to dbt Cloud. No more manual upgrades and no need for _a second sandbox project_ just to try out new features in development. For more details, refer to [Upgrade Core version in Cloud](/docs/dbt-versions/upgrade-dbt-version-in-cloud). We're leaving the rest of this post as is, so we can all remember how it used to be. Enjoy a stroll down memory lane. diff --git a/website/blog/2024-04-22-extended-attributes.md b/website/blog/2024-04-22-extended-attributes.md index 9013af81d47..57636cc8f6b 100644 --- a/website/blog/2024-04-22-extended-attributes.md +++ b/website/blog/2024-04-22-extended-attributes.md @@ -12,10 +12,6 @@ date: 2024-04-22 is_featured: true --- -import Latest from '/snippets/_release-stages-from-versionless.md' - - - dbt Cloud now includes a suite of new features that enable configuring precise and unique connections to data platforms at the environment and user level. These enable more sophisticated setups, like connecting a project to multiple warehouse accounts, first-class support for [staging environments](/docs/deploy/deploy-environments#staging-environment), and user-level [overrides for specific dbt versions](/docs/dbt-versions/upgrade-dbt-version-in-cloud#override-dbt-version). This gives dbt Cloud developers the features they need to tackle more complex tasks, like Write-Audit-Publish (WAP) workflows and safely testing dbt version upgrades. While you still configure a default connection at the project level and per-developer, you now have tools to get more advanced in a secure way. Soon, dbt Cloud will take this even further allowing multiple connections to be set globally and reused with _global connections_. 
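The per-environment and per-user connection overrides this post describes are supplied as `profiles.yml`-style YAML (the blog's namesake extended attributes). A hypothetical Snowflake-flavored sketch, with every key-value pair a placeholder rather than something taken from the post:

```yaml
# Hypothetical Extended Attributes payload for a staging environment:
# profiles.yml-style keys layered on top of the environment's base connection.
account: example-snowflake-account
database: analytics_staging
warehouse: transforming_wh
schema: dbt_dave   # e.g. a per-developer schema override for sandboxed work
```

Because these overrides are applied per environment (or per developer), the same project can point at different warehouse accounts without duplicating the project itself.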
@@ -84,7 +80,7 @@ All you need to do is configure an environment as staging and enable the **Defer
 
 ## Upgrading on a curve
 
-Lastly, let’s consider a more specialized use case. Imagine we have a "tiger team" (consisting of a lone analytics engineer named Dave) tasked with upgrading from dbt version 1.6 to the new **Latest release track**, to take advantage of new features and performance improvements. We want to keep the rest of the data team being productive in dbt 1.6 for the time being, while enabling Dave to upgrade and do his work with Latest (and greatest) dbt.
+Lastly, let’s consider a more specialized use case. Imagine we have a "tiger team" (consisting of a lone analytics engineer named Dave) tasked with upgrading from dbt version 1.6 to the new **[Latest release track](/docs/dbt-versions/cloud-release-tracks)**, to take advantage of new features and performance improvements. We want to keep the rest of the data team productive in dbt 1.6 for the time being, while enabling Dave to upgrade and do his work with the Latest (and greatest) dbt.
 
 ### Development environment
diff --git a/website/blog/2024-06-12-putting-your-dag-on-the-internet.md b/website/blog/2024-06-12-putting-your-dag-on-the-internet.md
index a8c3bebb61f..54864916d0e 100644
--- a/website/blog/2024-06-12-putting-your-dag-on-the-internet.md
+++ b/website/blog/2024-06-12-putting-your-dag-on-the-internet.md
@@ -12,11 +12,7 @@ date: 2024-06-14
 is_featured: true
 ---
 
-import Latest from '/snippets/_release-stages-from-versionless.md'
-
-
-
-**New in dbt: allow Snowflake Python models to access the internet**
+## New in dbt: allow Snowflake Python models to access the internet
 
 With dbt 1.8, dbt released support for Snowflake’s [external access integrations](https://docs.snowflake.com/en/developer-guide/external-network-access/external-network-access-overview), further enabling the use of dbt + AI to enrich your data. 
This allows querying of external APIs within dbt Python models, a functionality that was required for dbt Cloud customer, [EQT AB](https://eqtgroup.com/). Learn about why they needed it and how they helped build the feature and get it shipped! @@ -49,7 +45,7 @@ This API is open and if it requires an API key, handle it similarly to managing For simplicity’s sake, we will show how to create them using [pre-hooks](/reference/resource-configs/pre-hook-post-hook) in a model configuration yml file: -``` +```yml models: - name: external_access_sample config: @@ -61,7 +57,7 @@ models: Then we can simply use the new external_access_integrations configuration parameter to use our network rule within a Python model (called external_access_sample.py): -``` +```python import snowflake.snowpark as snowpark def model(dbt, session: snowpark.Session): dbt.config( @@ -79,7 +75,7 @@ def model(dbt, session: snowpark.Session): The result is a model with some json I can parse, for example, in a SQL model to extract some information: -``` +```sql {{ config( materialized='incremental', @@ -112,12 +108,12 @@ The result is a model that will keep track of dbt invocations, and the current U This is a very new area to Snowflake and dbt -- something special about SQL and dbt is that it’s very resistant to external entropy. The second we rely on API calls, Python packages and other external dependencies, we open up to a lot more external entropy. APIs will change, break, and your models could fail. -Traditionally dbt is the T in ELT (dbt overview [here](https://docs.getdbt.com/terms/elt)), and this functionality unlocks brand new EL capabilities for which best practices do not yet exist. What’s clear is that EL workloads should be separated from T workloads, perhaps in a different modeling layer. Note also that unless using incremental models, your historical data can easily be deleted. 
dbt has seen a lot of use cases for this, including this AI example as outlined in this external [engineering blog post](https://klimmy.hashnode.dev/enhancing-your-dbt-project-with-large-language-models). +Traditionally dbt is the T in ELT (dbt overview [here](https://docs.getdbt.com/terms/elt)), and this functionality unlocks brand new EL capabilities for which best practices do not yet exist. What’s clear is that EL workloads should be separated from T workloads, perhaps in a different modeling layer. Note also that unless using incremental models, your historical data can easily be deleted. dbt has seen a lot of use cases for this, including this AI example as outlined in this external [engineering blog post](https://klimmy.hashnode.dev/enhancing-your-dbt-project-with-large-language-models). -**A few words about the power of Commercial Open Source Software** +## A few words about the power of Commercial Open Source Software In order to get this functionality shipped quickly, EQT opened a pull request, Snowflake helped with some problems we had with CI and a member of dbt Labs helped write the tests and merge the code in! -dbt now features this functionality in dbt 1.8+ and the "Latest" release track in dbt Cloud (dbt overview [here](/docs/dbt-versions/cloud-release-tracks)). +dbt now features this functionality in dbt 1.8+ and all [Release tracks](/docs/dbt-versions/cloud-release-tracks) in dbt Cloud. dbt Labs staff and community members would love to chat more about it in the [#db-snowflake](https://getdbt.slack.com/archives/CJN7XRF1B) slack channel. 
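Since APIs will change and break, one defensive pattern worth considering inside a Python model is a small retry wrapper around the external call. A minimal sketch (the `fetch_with_retry` helper and its parameters are illustrative, not part of dbt's or Snowflake's API):

```python
import time

def fetch_with_retry(fetch, attempts=3, backoff_seconds=1.0):
    """Call `fetch` (a zero-argument callable that issues the API request),
    retrying with linear backoff so a single flaky response doesn't fail
    the whole model run."""
    last_error = None
    for attempt in range(attempts):
        try:
            return fetch()
        except Exception as err:  # narrow this to your HTTP client's error types
            last_error = err
            time.sleep(backoff_seconds * attempt)  # sleeps 0s, 1s, 2s, ...
    raise last_error
```

Inside the model body, the external call could then be wrapped as `fetch_with_retry(lambda: call_api())`, keeping transient failures from aborting an otherwise healthy run.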
From 1f6099766c0c72beb4c0803bd64099df4fbe85f6 Mon Sep 17 00:00:00 2001 From: Doug Beatty <44704949+dbeatty10@users.noreply.github.com> Date: Fri, 6 Dec 2024 16:52:57 -0700 Subject: [PATCH 2/9] Explain behavior of `config.get` --- website/docs/reference/dbt-jinja-functions/config.md | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/website/docs/reference/dbt-jinja-functions/config.md b/website/docs/reference/dbt-jinja-functions/config.md index 3903c82eef7..10049e1a985 100644 --- a/website/docs/reference/dbt-jinja-functions/config.md +++ b/website/docs/reference/dbt-jinja-functions/config.md @@ -34,13 +34,21 @@ __Args__: The `config.get` function is used to get configurations for a model from the end-user. Configs defined in this way are optional, and a default value can be provided. +There are 3 cases: +1. The configuration variable exists, it is it not `None` +1. The configuration variable exists, it it is `None` +1. The configuration variable does not exist + Example usage: ```sql {% materialization incremental, default -%} -- Example w/ no default. unique_key will be None if the user does not provide this configuration {%- set unique_key = config.get('unique_key') -%} - -- Example w/ default value. Default to 'id' if 'unique_key' not provided + -- Example w/ alternate value. Use alternative of 'id' if 'unique_key' config is provided, but it is None + {%- set unique_key = config.get('unique_key') or 'id' -%} + + -- Example w/ default value. Default to 'id' if the 'unique_key' config does not exist {%- set unique_key = config.get('unique_key', default='id') -%} ... 
```

From 0cb8e68ada75e5b50bb85fe85905312989b9d3e9 Mon Sep 17 00:00:00 2001
From: nataliefiann <120089939+nataliefiann@users.noreply.github.com>
Date: Sat, 7 Dec 2024 00:15:07 +0000
Subject: [PATCH 3/9] Created new section called Parallel batch execution
 (#6589)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

## What are you changing in this pull request and why?

I have created this PR following this Git issue: https://github.com/dbt-labs/docs.getdbt.com/issues/6550 for Parallel batch execution

## Checklist
- [ ] I have reviewed the [Content style guide](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/content-style-guide.md) so my content adheres to these guidelines.
- [ ] The topic I'm writing about is for specific dbt version(s) and I have versioned it according to the [version a whole page](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#adding-a-new-version) and/or [version a block of content](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#versioning-blocks-of-content) guidelines.
- [ ] I have added checklist item(s) to this list for anything that needs to happen before this PR is merged, such as "needs technical review" or "change base branch."
- [ ] The content in this PR requires a dbt release note, so I added one to the [release notes page](https://docs.getdbt.com/docs/dbt-versions/dbt-cloud-release-notes).

---
🚀 Deployment available! 
Here are the direct links to the updated files: - https://docs-getdbt-com-git-nfiann-rbip-dbt-labs.vercel.app/docs/build/incremental-microbatch - https://docs-getdbt-com-git-nfiann-rbip-dbt-labs.vercel.app/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9 - https://docs-getdbt-com-git-nfiann-rbip-dbt-labs.vercel.app/reference/resource-properties/concurrent_batches --------- Co-authored-by: Mirna Wong <89008547+mirnawong1@users.noreply.github.com> Co-authored-by: Quigley Malcolm Co-authored-by: Leona B. Campbell <3880403+runleonarun@users.noreply.github.com> --- .../docs/docs/build/incremental-microbatch.md | 140 +++++++++++++++++- .../core-upgrade/06-upgrading-to-v1.9.md | 2 + .../resource-properties/concurrent_batches.md | 90 +++++++++++ website/sidebars.js | 1 + 4 files changed, 225 insertions(+), 8 deletions(-) create mode 100644 website/docs/reference/resource-properties/concurrent_batches.md diff --git a/website/docs/docs/build/incremental-microbatch.md b/website/docs/docs/build/incremental-microbatch.md index 55c7dc92367..fad3ac17a66 100644 --- a/website/docs/docs/build/incremental-microbatch.md +++ b/website/docs/docs/build/incremental-microbatch.md @@ -25,7 +25,8 @@ Incremental models in dbt are a [materialization](/docs/build/materializations) Microbatch is an incremental strategy designed for large time-series datasets: - It relies solely on a time column ([`event_time`](/reference/resource-configs/event-time)) to define time-based ranges for filtering. Set the `event_time` column for your microbatch model and its direct parents (upstream models). Note, this is different to `partition_by`, which groups rows into partitions. - It complements, rather than replaces, existing incremental strategies by focusing on efficiency and simplicity in batch processing. -- Unlike traditional incremental strategies, microbatch doesn't require implementing complex conditional logic for [backfilling](#backfills). 
+- Unlike traditional incremental strategies, microbatch enables you to [reprocess failed batches](/docs/build/incremental-microbatch#retry), auto-detect [parallel batch execution](#parallel-batch-execution), and eliminate the need to implement complex conditional logic for [backfilling](#backfills). + - Note, microbatch might not be the best strategy for all use cases. Consider other strategies for use cases such as not having a reliable `event_time` column or if you want more control over the incremental logic. Read more in [How `microbatch` compares to other incremental strategies](#how-microbatch-compares-to-other-incremental-strategies). ### How microbatch works @@ -179,12 +180,15 @@ It does not matter whether the table already contains data for that day. Given t Several configurations are relevant to microbatch models, and some are required: -| Config | Type | Description | Default | -|----------|------|---------------|---------| -| [`event_time`](/reference/resource-configs/event-time) | Column (required) | The column indicating "at what time did the row occur." Required for your microbatch model and any direct parents that should be filtered. | N/A | -| [`begin`](/reference/resource-configs/begin) | Date (required) | The "beginning of time" for the microbatch model. This is the starting point for any initial or full-refresh builds. For example, a daily-grain microbatch model run on `2024-10-01` with `begin = '2023-10-01` will process 366 batches (it's a leap year!) plus the batch for "today." | N/A | -| [`batch_size`](/reference/resource-configs/batch-size) | String (required) | The granularity of your batches. Supported values are `hour`, `day`, `month`, and `year` | N/A | -| [`lookback`](/reference/resource-configs/lookback) | Integer (optional) | Process X batches prior to the latest bookmark to capture late-arriving records. 
| `1` | 
+
+| Config | Description | Default | Type | Required |
+|----------|---------------|---------|------|---------|
+| [`event_time`](/reference/resource-configs/event-time) | The column indicating "at what time did the row occur." Required for your microbatch model and any direct parents that should be filtered. | N/A | Column | Required |
+| `begin` | The "beginning of time" for the microbatch model. This is the starting point for any initial or full-refresh builds. For example, a daily-grain microbatch model run on `2024-10-01` with `begin = '2023-10-01'` will process 366 batches (it's a leap year!) plus the batch for "today." | N/A | Date | Required |
+| `batch_size` | The granularity of your batches. Supported values are `hour`, `day`, `month`, and `year`. | N/A | String | Required |
+| `lookback` | Process X batches prior to the latest bookmark to capture late-arriving records. | `1` | Integer | Optional |
+| `concurrent_batches` | An override for whether batches run concurrently (at the same time) or sequentially (one after the other). | `None` | Boolean | Optional |
+

@@ -280,7 +284,127 @@ For now, dbt assumes that all values supplied are in UTC:
 
 While we may consider adding support for custom time zones in the future, we also believe that defining these values in UTC makes everyone's lives easier.
 
-## How `microbatch` compares to other incremental strategies?
+## Parallel batch execution
+
+The microbatch strategy offers the benefit of updating a model in smaller, more manageable batches. Depending on your use case, configuring your microbatch models to run in parallel offers faster processing compared to running batches sequentially.
+
+Parallel batch execution means that multiple batches are processed at the same time, instead of one after the other (sequentially), for faster processing of your microbatch models. 
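To make the speed difference concrete, here is a back-of-the-envelope model of the two execution modes (illustrative arithmetic only; real runtimes depend on warehouse queuing, thread availability, and batch skew):

```python
import math

def estimated_minutes(n_batches, minutes_per_batch, threads, parallel):
    """Rough wall-clock estimate: sequential execution runs batches one
    after the other; parallel execution runs them in waves bounded by
    the available thread count."""
    if parallel:
        waves = math.ceil(n_batches / threads)
        return waves * minutes_per_batch
    return n_batches * minutes_per_batch

# 12 daily batches at ~5 minutes each with 4 threads:
sequential = estimated_minutes(12, 5, 4, parallel=False)  # 60 minutes
in_parallel = estimated_minutes(12, 5, 4, parallel=True)  # 15 minutes
```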
+
+dbt automatically detects whether a batch can be run in parallel in most cases, which means you don’t need to configure this setting. However, the `concurrent_batches` config is available as an override (not a gate), allowing you to specify whether batches should or shouldn’t be run in parallel in specific cases.
+
+For example, if you have a microbatch model with 12 batches, you can execute those batches in parallel. Specifically, they'll run in parallel, limited by the number of [available threads](/docs/running-a-dbt-project/using-threads).
+
+### Prerequisites
+
+To enable parallel execution, you must:
+
+- Use a supported adapter:
+  - Snowflake
+  - Databricks
+  - More adapters coming soon!
+    - We'll be continuing to test and add concurrency support for adapters. This means that some adapters might get concurrency support _after_ the 1.9 initial release.
+
+- Meet [additional conditions](#how-parallel-batch-execution-works) described in the following section.
+
+### How parallel batch execution works
+
+A batch can only run in parallel if all of these conditions are met:
+
+| Condition | Parallel execution | Sequential execution|
+| ---------------| :------------------: | :----------: |
+| **Not** the first batch | ✅ | - |
+| **Not** the last batch | ✅ | - |
+| [Adapter supports](#prerequisites) parallel batches | ✅ | - |
+
+
+After checking the conditions in the previous table, and if the `concurrent_batches` value isn't set, dbt will intelligently auto-detect whether the model invokes the [`{{ this }}`](/reference/dbt-jinja-functions/this) Jinja function. If it references `{{ this }}`, the batches will run sequentially, since `{{ this }}` represents the database relation of the current model and referencing the same relation causes a conflict. 
+
+Otherwise, if `{{ this }}` isn't detected (and other conditions are met), the batches will run in parallel, which can be overridden when you [set a value for `concurrent_batches`](/reference/resource-properties/concurrent_batches).
+
+### Parallel or sequential execution
+
+Choosing between parallel batch execution and sequential processing depends on the specific requirements of your use case.
+
+- Parallel batch execution is faster but requires logic independent of batch execution order. For example, if you're developing a data pipeline for a system that processes user transactions in batches, each batch is executed in parallel for better performance. However, the logic used to process each transaction shouldn't depend on the order in which batches are executed or completed.
+- Sequential processing is slower but essential for calculations like [cumulative metrics](/docs/build/cumulative) in microbatch models. It processes data in the correct order, allowing each step to build on the previous one.
+
+
+
+### Configure `concurrent_batches`
+
+By default, dbt auto-detects whether batches can run in parallel for microbatch models, and this works correctly in most cases. However, you can override dbt's detection by setting the [`concurrent_batches` config](/reference/resource-properties/concurrent_batches) in your `dbt_project.yml` or model `.sql` file to specify parallel or sequential execution, given you meet all the [conditions](#prerequisites):
+
+
+
+
+
+```yaml
+models:
+  +concurrent_batches: true # value set to true to run batches in parallel
+```
+
+
+
+
+
+
+
+```sql
+{{
+  config(
+    materialized='incremental',
+    incremental_strategy='microbatch',
+    event_time='session_start',
+    begin='2020-01-01',
+    batch_size='day',
+    concurrent_batches=true, # value set to true to run batches in parallel
+    ...
+  )
+}}
+
+select ... 
+```
+
+
+
+
+## How microbatch compares to other incremental strategies
+
+As data warehouses roll out new operations for concurrently replacing/upserting data partitions, we may find that the new operation for the data warehouse is more efficient than what the adapter uses for microbatch. In such instances, we reserve the right to update the default operation for microbatch, so long as it works as intended/documented for models that fit the microbatch paradigm.
+
+Most incremental models rely on the end user (you) to explicitly tell dbt what "new" means, in the context of each model, by writing a filter in an `{% if is_incremental() %}` conditional block. You are responsible for crafting this SQL in a way that queries [`{{ this }}`](/reference/dbt-jinja-functions/this) to check when the most recent record was last loaded, with an optional look-back window for late-arriving records.

diff --git a/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md b/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md
index 6ade3d5013f..a7d8be0e8a1 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md
@@ -49,6 +49,8 @@ Starting in Core 1.9, you can use the new [microbatch strategy](/docs/build/incr
 - Simplified query design: Write your model query for a single batch of data. dbt will use your `event_time`, `lookback`, and `batch_size` configurations to automatically generate the necessary filters for you, making the process more streamlined and reducing the need for you to manage these details.
 - Independent batch processing: dbt automatically breaks down the data to load into smaller batches based on the specified `batch_size` and processes each batch independently, improving efficiency and reducing the risk of query timeouts. If some of your batches fail, you can use `dbt retry` to load only the failed batches. 
- Targeted reprocessing: To load a *specific* batch or batches, you can use the CLI arguments `--event-time-start` and `--event-time-end`.
+- [Automatic parallel batch execution](/docs/build/incremental-microbatch#parallel-batch-execution): Process multiple batches at the same time, instead of one after the other (sequentially), for faster processing of your microbatch models. dbt intelligently auto-detects if your batches can run in parallel, while also allowing you to manually override parallel execution with the `concurrent_batches` config.
+

Currently, microbatch is supported on these adapters, with more to come:
  * postgres
diff --git a/website/docs/reference/resource-properties/concurrent_batches.md b/website/docs/reference/resource-properties/concurrent_batches.md
new file mode 100644
index 00000000000..4d6b2ea0af4
--- /dev/null
+++ b/website/docs/reference/resource-properties/concurrent_batches.md
@@ -0,0 +1,90 @@
+---
+title: "concurrent_batches"
+resource_types: [models]
+datatype: boolean
+description: "Learn about concurrent_batches in dbt."
+---
+
+:::note
+
+Available in dbt Core v1.9+ or the [dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks).
+
+:::
+
+
+
+
+
+```yaml
+models:
+  +concurrent_batches: true
+```
+
+
+
+
+
+
+
+```sql
+{{
+  config(
+    materialized='incremental',
+    concurrent_batches=true,
+    incremental_strategy='microbatch',
+    ...
+  )
+}}
+select ...
+```
+
+
+
+
+
+## Definition
+
+`concurrent_batches` is an override that lets you decide whether to run batches in parallel or sequentially (one at a time).
+
+For more information, refer to [how batch execution works](/docs/build/incremental-microbatch#how-parallel-batch-execution-works).
+
+## Example
+
+By default, dbt auto-detects whether batches can run in parallel for microbatch models. 
However, you can override dbt's detection by setting the `concurrent_batches` config to `false` in your `dbt_project.yml` or model `.sql` file to force sequential execution, given you meet these conditions:
+* You've configured a microbatch incremental strategy.
+* You're working with cumulative metrics or any logic that depends on batch order.
+
+Set the `concurrent_batches` config to `false` to ensure batches are processed sequentially. For example:
+
+
+
+```yaml
+models:
+  my_project:
+    cumulative_metrics_model:
+      +concurrent_batches: false
+```
+
+
+
+
+
+```sql
+{{
+  config(
+    materialized='incremental',
+    incremental_strategy='microbatch',
+    concurrent_batches=false
+  )
+}}
+select ...
+
+```
+
+
+
diff --git a/website/sidebars.js b/website/sidebars.js
index 5d6e0582765..08494e4c713 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -956,6 +956,7 @@ const sidebarSettings = {
         "reference/resource-configs/materialized",
         "reference/resource-configs/on_configuration_change",
         "reference/resource-configs/sql_header",
+        "reference/resource-properties/concurrent_batches",
       ],
     },
     {

From 5f069bc339700bb5ca8d8e83d6c593a2b129a361 Mon Sep 17 00:00:00 2001
From: Mirna Wong <89008547+mirnawong1@users.noreply.github.com>
Date: Mon, 9 Dec 2024 09:47:09 +0000
Subject: [PATCH 4/9] Update website/docs/reference/dbt-jinja-functions/config.md

---
 website/docs/reference/dbt-jinja-functions/config.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/reference/dbt-jinja-functions/config.md b/website/docs/reference/dbt-jinja-functions/config.md
index 10049e1a985..7c628ab7b10 100644
--- a/website/docs/reference/dbt-jinja-functions/config.md
+++ b/website/docs/reference/dbt-jinja-functions/config.md
@@ -35,7 +35,7 @@ __Args__:
 The `config.get` function is used to get configurations for a model from the end-user. Configs defined in this way are optional, and a default value can be provided.
 
 There are 3 cases:
-1. 
The configuration variable exists, it is it not `None` +1. The configuration variable exists, it is not `None` 1. The configuration variable exists, it it is `None` 1. The configuration variable does not exist From 06a6ebc7bfe4a2b4c0f277ce055657afebbb8d0e Mon Sep 17 00:00:00 2001 From: Mirna Wong <89008547+mirnawong1@users.noreply.github.com> Date: Mon, 9 Dec 2024 09:47:14 +0000 Subject: [PATCH 5/9] Update website/docs/reference/dbt-jinja-functions/config.md --- website/docs/reference/dbt-jinja-functions/config.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/dbt-jinja-functions/config.md b/website/docs/reference/dbt-jinja-functions/config.md index 7c628ab7b10..8083ea2a124 100644 --- a/website/docs/reference/dbt-jinja-functions/config.md +++ b/website/docs/reference/dbt-jinja-functions/config.md @@ -36,7 +36,7 @@ The `config.get` function is used to get configurations for a model from the end There are 3 cases: 1. The configuration variable exists, it is not `None` -1. The configuration variable exists, it it is `None` +1. The configuration variable exists, it is `None` 1. The configuration variable does not exist Example usage: From 4c78b8d6faf37f8ee9272f3fa09b6670a3349972 Mon Sep 17 00:00:00 2001 From: Mirna Wong <89008547+mirnawong1@users.noreply.github.com> Date: Mon, 9 Dec 2024 09:47:43 +0000 Subject: [PATCH 6/9] Update website/docs/reference/dbt-jinja-functions/config.md --- website/docs/reference/dbt-jinja-functions/config.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/dbt-jinja-functions/config.md b/website/docs/reference/dbt-jinja-functions/config.md index 8083ea2a124..c9d93b80eb1 100644 --- a/website/docs/reference/dbt-jinja-functions/config.md +++ b/website/docs/reference/dbt-jinja-functions/config.md @@ -35,7 +35,7 @@ __Args__: The `config.get` function is used to get configurations for a model from the end-user. 
Configs defined in this way are optional, and a default value can be provided. There are 3 cases: -1. The configuration variable exists, it is not `None` +1. The configuration variable exists, it isn't `None` 1. The configuration variable exists, it is `None` 1. The configuration variable does not exist From 3df2adc94816a956c4e8d6ae02aacde02594c214 Mon Sep 17 00:00:00 2001 From: Mirna Wong <89008547+mirnawong1@users.noreply.github.com> Date: Mon, 9 Dec 2024 09:48:01 +0000 Subject: [PATCH 7/9] Update website/docs/reference/dbt-jinja-functions/config.md --- website/docs/reference/dbt-jinja-functions/config.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/dbt-jinja-functions/config.md b/website/docs/reference/dbt-jinja-functions/config.md index c9d93b80eb1..d5d3c550ba7 100644 --- a/website/docs/reference/dbt-jinja-functions/config.md +++ b/website/docs/reference/dbt-jinja-functions/config.md @@ -37,7 +37,7 @@ The `config.get` function is used to get configurations for a model from the end There are 3 cases: 1. The configuration variable exists, it isn't `None` 1. The configuration variable exists, it is `None` -1. The configuration variable does not exist +1. 
The configuration variable doesn't exist Example usage: ```sql From 65d3786070f82c20f3b88d36e807abe60a648502 Mon Sep 17 00:00:00 2001 From: Mirna Wong <89008547+mirnawong1@users.noreply.github.com> Date: Mon, 9 Dec 2024 09:49:10 +0000 Subject: [PATCH 8/9] Update website/docs/reference/dbt-jinja-functions/config.md --- website/docs/reference/dbt-jinja-functions/config.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/dbt-jinja-functions/config.md b/website/docs/reference/dbt-jinja-functions/config.md index d5d3c550ba7..44cc8778b3b 100644 --- a/website/docs/reference/dbt-jinja-functions/config.md +++ b/website/docs/reference/dbt-jinja-functions/config.md @@ -35,7 +35,7 @@ __Args__: The `config.get` function is used to get configurations for a model from the end-user. Configs defined in this way are optional, and a default value can be provided. There are 3 cases: -1. The configuration variable exists, it isn't `None` +1. The configuration variable exists, it is not `None` 1. The configuration variable exists, it is `None` 1. The configuration variable doesn't exist From 82ebebf2b37251ca0113d08bf8b271b8d73df696 Mon Sep 17 00:00:00 2001 From: Mirna Wong <89008547+mirnawong1@users.noreply.github.com> Date: Mon, 9 Dec 2024 09:49:21 +0000 Subject: [PATCH 9/9] Update website/docs/reference/dbt-jinja-functions/config.md --- website/docs/reference/dbt-jinja-functions/config.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/dbt-jinja-functions/config.md b/website/docs/reference/dbt-jinja-functions/config.md index 44cc8778b3b..8083ea2a124 100644 --- a/website/docs/reference/dbt-jinja-functions/config.md +++ b/website/docs/reference/dbt-jinja-functions/config.md @@ -37,7 +37,7 @@ The `config.get` function is used to get configurations for a model from the end There are 3 cases: 1. The configuration variable exists, it is not `None` 1. The configuration variable exists, it is `None` -1. 
The configuration variable doesn't exist +1. The configuration variable does not exist Example usage: ```sql
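The three cases that patches 2 and 4 through 9 above iterate on map cleanly onto plain Python `dict` semantics, which is a handy way to remember when to reach for `config.get(key) or fallback` versus `config.get(key, default=fallback)`. An illustrative analogy only, not dbt internals:

```python
config = {"unique_key": None}  # case 2: the config exists but is None

# `default=` only applies when the key is absent (case 3),
# so an explicit None passes through untouched:
case_2_default = config.get("unique_key", "id")   # None

# `... or fallback` also replaces a None (or otherwise falsy) value:
case_2_or = config.get("unique_key") or "id"      # "id"

# When the key is absent entirely (case 3), both forms fall back:
case_3_default = config.get("other_key", "id")    # "id"
case_3_or = config.get("other_key") or "id"       # "id"
```

This is why the example materialization uses `config.get('unique_key') or 'id'` to guard against a supplied-but-`None` config, and `config.get('unique_key', default='id')` to guard against a missing one.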