
Commit

Merge branch 'current' into add-hover-details-component
mirnawong1 authored Dec 15, 2023
2 parents 9916881 + 093435b commit cac7794
Showing 8 changed files with 33 additions and 43 deletions.
38 changes: 18 additions & 20 deletions website/docs/docs/build/incremental-models.md
@@ -249,31 +249,29 @@ The `merge` strategy is available in dbt-postgres and dbt-redshift beginning in dbt v1.6.

<VersionBlock lastVersion="1.5">


| data platform adapter | default strategy | additional supported strategies |
| :-------------------| ---------------- | -------------------- |
| [dbt-postgres](/reference/resource-configs/postgres-configs#incremental-materialization-strategies) | `append` | `delete+insert` |
| [dbt-redshift](/reference/resource-configs/redshift-configs#incremental-materialization-strategies) | `append` | `delete+insert` |
| [dbt-bigquery](/reference/resource-configs/bigquery-configs#merge-behavior-incremental-models) | `merge` | `insert_overwrite` |
| [dbt-spark](/reference/resource-configs/spark-configs#incremental-models) | `append` | `merge`, `insert_overwrite` |
| [dbt-databricks](/reference/resource-configs/databricks-configs#incremental-models) | `merge` | `append`, `insert_overwrite` |
| [dbt-snowflake](/reference/resource-configs/snowflake-configs#merge-behavior-incremental-models) | `merge` | `append`, `delete+insert` |
| [dbt-trino](/reference/resource-configs/trino-configs#incremental) | `append` | `merge`, `delete+insert` |
| data platform adapter | `append` | `merge` | `delete+insert` | `insert_overwrite` |
|-----------------------------------------------------------------------------------------------------|:--------:|:-------:|:---------------:|:------------------:|
| [dbt-postgres](/reference/resource-configs/postgres-configs#incremental-materialization-strategies) | ✅ |  | ✅ |  |
| [dbt-redshift](/reference/resource-configs/redshift-configs#incremental-materialization-strategies) | ✅ |  | ✅ |  |
| [dbt-bigquery](/reference/resource-configs/bigquery-configs#merge-behavior-incremental-models) |  | ✅ |  | ✅ |
| [dbt-spark](/reference/resource-configs/spark-configs#incremental-models) | ✅ | ✅ |  | ✅ |
| [dbt-databricks](/reference/resource-configs/databricks-configs#incremental-models) | ✅ | ✅ |  | ✅ |
| [dbt-snowflake](/reference/resource-configs/snowflake-configs#merge-behavior-incremental-models) | ✅ | ✅ | ✅ |  |
| [dbt-trino](/reference/resource-configs/trino-configs#incremental) | ✅ | ✅ | ✅ |  |

</VersionBlock>

<VersionBlock firstVersion="1.6">


| data platform adapter | default strategy | additional supported strategies |
| :----------------- | :--------------- | :----------------------------------- |
| [dbt-postgres](/reference/resource-configs/postgres-configs#incremental-materialization-strategies) | `append` | `merge`, `delete+insert` |
| [dbt-redshift](/reference/resource-configs/redshift-configs#incremental-materialization-strategies) | `append` | `merge`, `delete+insert` |
| [dbt-bigquery](/reference/resource-configs/bigquery-configs#merge-behavior-incremental-models) | `merge` | `insert_overwrite` |
| [dbt-spark](/reference/resource-configs/spark-configs#incremental-models) | `append` | `merge`, `insert_overwrite` |
| [dbt-databricks](/reference/resource-configs/databricks-configs#incremental-models) | `merge` | `append`, `insert_overwrite` |
| [dbt-snowflake](/reference/resource-configs/snowflake-configs#merge-behavior-incremental-models) | `merge` | `append`, `delete+insert` |
| [dbt-trino](/reference/resource-configs/trino-configs#incremental) | `append` | `merge`, `delete+insert` |
| data platform adapter | `append` | `merge` | `delete+insert` | `insert_overwrite` |
|-----------------------------------------------------------------------------------------------------|:--------:|:-------:|:---------------:|:------------------:|
| [dbt-postgres](/reference/resource-configs/postgres-configs#incremental-materialization-strategies) | ✅ | ✅ | ✅ |  |
| [dbt-redshift](/reference/resource-configs/redshift-configs#incremental-materialization-strategies) | ✅ | ✅ | ✅ |  |
| [dbt-bigquery](/reference/resource-configs/bigquery-configs#merge-behavior-incremental-models) |  | ✅ |  | ✅ |
| [dbt-spark](/reference/resource-configs/spark-configs#incremental-models) | ✅ | ✅ |  | ✅ |
| [dbt-databricks](/reference/resource-configs/databricks-configs#incremental-models) | ✅ | ✅ |  | ✅ |
| [dbt-snowflake](/reference/resource-configs/snowflake-configs#merge-behavior-incremental-models) | ✅ | ✅ | ✅ |  |
| [dbt-trino](/reference/resource-configs/trino-configs#incremental) | ✅ | ✅ | ✅ |  |

</VersionBlock>
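Whichever strategy applies, it is selected with the `incremental_strategy` config on the model. A minimal sketch, where the model, column, and ref names are illustrative:

```sql
-- models/fct_events.sql — illustrative incremental model using the merge strategy
{{
    config(
        materialized='incremental',
        incremental_strategy='merge',
        unique_key='event_id'
    )
}}

select * from {{ ref('stg_events') }}

{% if is_incremental() %}
  -- on incremental runs, only process rows newer than what's already in this table
  where event_time > (select max(event_time) from {{ this }})
{% endif %}
```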

2 changes: 1 addition & 1 deletion website/docs/docs/collaborate/govern/model-contracts.md
@@ -91,7 +91,7 @@ When building a model with a defined contract, dbt will do two things differently
Select the adapter-specific tab for more information on [constraint](/reference/resource-properties/constraints) support across platforms. Constraints fall into three categories based on support and platform enforcement:

- **Supported and enforced** &mdash; The model won't build if it violates the constraint.
- **Supported and not enforced** &mdash; The platform supports specifying the type of constraint, but a model can still build even if building the model violates the constraint. This constraint exists for metadata purposes only. This is common for modern cloud data warehouses and less common for legacy databases.
- **Supported and not enforced** &mdash; The platform supports specifying the type of constraint, but a model can still build even if building the model violates the constraint. This constraint exists for metadata purposes only. This approach is more typical in cloud data warehouses than in transactional databases, where strict rule enforcement is more common.
- **Not supported and not enforced** &mdash; You can't specify the type of constraint for the platform.
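For reference, constraints are declared per column in the model's YAML alongside the contract; a minimal sketch, with illustrative model and column names:

```yaml
# models/schema.yml — illustrative
models:
  - name: dim_customers
    config:
      contract:
        enforced: true
    columns:
      - name: customer_id
        data_type: int
        constraints:
          - type: not_null     # enforced on most platforms
          - type: primary_key  # frequently metadata-only on cloud warehouses
      - name: customer_name
        data_type: text
```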


@@ -29,14 +29,6 @@ Using Databricks workflows to call the dbt Cloud job API can be useful for several
- [Databricks CLI](https://docs.databricks.com/dev-tools/cli/index.html)
- **Note**: You only need to set up your authentication. Once you have set up your Host and Token and are able to run `databricks workspace ls /Users/<your-email>`, you can proceed with the rest of this guide.

## Configure Databricks workflows for dbt Cloud jobs

To use Databricks workflows for running dbt Cloud jobs, you need to perform the following steps:

- [Set up a Databricks secret scope](#set-up-a-databricks-secret-scope)
- [Create a Databricks Python notebook](#create-a-databricks-python-notebook)
- [Configure the workflows to run the dbt Cloud jobs](#configure-the-workflows-to-run-the-dbt-cloud-jobs)

## Set up a Databricks secret scope

1. Retrieve a **[User API Token](https://docs.getdbt.com/docs/dbt-cloud-apis/user-tokens#user-api-tokens)** or **[Service Account Token](https://docs.getdbt.com/docs/dbt-cloud-apis/service-tokens#generating-service-account-tokens)** from dbt Cloud
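The steps that follow store that token in the scope. A sketch of the CLI calls involved, assuming the legacy Databricks CLI and with illustrative scope and key names:

```bash
# create a secret scope to hold the dbt Cloud credentials (scope name is illustrative)
databricks secrets create-scope --scope dbt-cloud

# store the dbt Cloud token under a key in that scope; omitting --string-value
# makes the CLI open an editor so the token doesn't land in shell history
databricks secrets put --scope dbt-cloud --key dbt-cloud-api-token
```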
2 changes: 1 addition & 1 deletion website/docs/guides/snowflake-qs.md
@@ -462,7 +462,7 @@ Sources make it possible to name and describe the data loaded into your warehouse.
5. Execute `dbt run`.
The results of your `dbt run` will be exactly the same as the previous step. Your `stg_cusutomers` and `stg_orders`
The results of your `dbt run` will be exactly the same as the previous step. Your `stg_customers` and `stg_orders`
models will still query from the same raw data source in Snowflake. By using `source`, you can
test and document your raw data and also understand the lineage of your sources.
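For reference, a source of this kind is declared in YAML and queried with the `source()` function; a sketch consistent with the models named above, where the `raw` database and `jaffle_shop` schema are assumptions:

```yaml
# models/sources.yml — illustrative
version: 2

sources:
  - name: jaffle_shop
    database: raw
    schema: jaffle_shop
    tables:
      - name: customers
      - name: orders
```

With this in place, `stg_customers` can select `from {{ source('jaffle_shop', 'customers') }}` instead of hard-coding the raw table name.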
8 changes: 4 additions & 4 deletions website/docs/reference/resource-configs/databricks-configs.md
@@ -38,9 +38,9 @@ When materializing a model as `table`, you may include several optional configs
## Incremental models

The dbt-databricks plugin leans heavily on the [`incremental_strategy` config](/docs/build/incremental-models#about-incremental_strategy). This config tells the incremental materialization how to build models in runs beyond their first. It can be set to one of four values:
- **`append`** (default): Insert new records without updating or overwriting any existing data.
- **`append`**: Insert new records without updating or overwriting any existing data.
- **`insert_overwrite`**: If `partition_by` is specified, overwrite partitions in the <Term id="table" /> with new data. If no `partition_by` is specified, overwrite the entire table with new data.
- **`merge`** (Delta and Hudi file format only): Match records based on a `unique_key`, updating old records, and inserting new ones. (If no `unique_key` is specified, all new data is inserted, similar to `append`.)
- **`merge`** (default; Delta and Hudi file format only): Match records based on a `unique_key`, updating old records, and inserting new ones. (If no `unique_key` is specified, all new data is inserted, similar to `append`.)
- **`replace_where`** (Delta file format only): Match records based on `incremental_predicates`, replacing all records that match the predicates from the existing table with records matching the predicates from the new data. (If no `incremental_predicates` are specified, all new data is inserted, similar to `append`.)

Each of these strategies has its pros and cons, which we'll discuss below. As with any model config, `incremental_strategy` may be specified in `dbt_project.yml` or within a model file's `config()` block.
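For illustration, both locations side by side; the project name and `events` folder are made up:

```yaml
# dbt_project.yml — set a default strategy for every model in a folder
models:
  my_project:
    events:
      +incremental_strategy: merge
```

```sql
-- or per model, in its config() block (model body omitted)
{{
    config(
        materialized='incremental',
        incremental_strategy='merge',
        unique_key='event_id'
    )
}}
```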
@@ -49,8 +49,6 @@ Each of these strategies has its pros and cons, which we'll discuss below.

Following the `append` strategy, dbt will perform an `insert into` statement with all new data. The appeal of this strategy is that it is straightforward and functional across all platforms, file types, connection methods, and Apache Spark versions. However, this strategy _cannot_ update, overwrite, or delete existing data, so it is likely to insert duplicate records for many data sources.

Specifying `append` as the incremental strategy is optional, since it's the default strategy used when none is specified.

<Tabs
defaultValue="source"
values={[
@@ -195,6 +193,8 @@ The `merge` incremental strategy requires:

dbt will run an [atomic `merge` statement](https://docs.databricks.com/spark/latest/spark-sql/language-manual/merge-into.html) which looks nearly identical to the default merge behavior on Snowflake and BigQuery. If a `unique_key` is specified (recommended), dbt will update old records with values from new records that match on the key column. If a `unique_key` is not specified, dbt will forgo match criteria and simply insert all new records (similar to `append` strategy).

Specifying `merge` as the incremental strategy is optional since it's the default strategy used when none is specified.
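Roughly, the generated statement takes this shape on Databricks; the table and column names are illustrative, and this is a simplification of the SQL dbt actually renders:

```sql
merge into analytics.fct_events as target
using fct_events__dbt_tmp as source
  on target.event_id = source.event_id
when matched then update set *   -- update existing rows that match the unique_key
when not matched then insert *   -- insert rows with no match
```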

<Tabs
defaultValue="source"
values={[
8 changes: 4 additions & 4 deletions website/docs/reference/resource-configs/postgres-configs.md
@@ -10,16 +10,16 @@ In dbt-postgres, the following incremental materialization strategies are supported:

<VersionBlock lastVersion="1.5">

- `append` (default)
- `delete+insert`
- `append` (default when `unique_key` is not defined)
- `delete+insert` (default when `unique_key` is defined)

</VersionBlock>

<VersionBlock firstVersion="1.6">

- `append` (default)
- `append` (default when `unique_key` is not defined)
- `merge`
- `delete+insert`
- `delete+insert` (default when `unique_key` is defined)

</VersionBlock>
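In practice this means a plain incremental model appends, while adding a `unique_key` switches the default to `delete+insert`, which runs statements of roughly this shape (names illustrative, not dbt's exact generated SQL):

```sql
-- sketch of the delete+insert pattern against a temp relation of new rows
delete from analytics.fct_orders
where order_id in (select order_id from fct_orders__dbt_tmp);

insert into analytics.fct_orders
select * from fct_orders__dbt_tmp;
```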

10 changes: 5 additions & 5 deletions website/docs/reference/resource-configs/redshift-configs.md
@@ -16,16 +16,16 @@ In dbt-redshift, the following incremental materialization strategies are supported:

<VersionBlock lastVersion="1.5">

- `append` (default)
- `delete+insert`
- `append` (default when `unique_key` is not defined)
- `delete+insert` (default when `unique_key` is defined)

</VersionBlock>

<VersionBlock firstVersion="1.6">

- `append` (default)
- `append` (default when `unique_key` is not defined)
- `merge`
- `delete+insert`
- `delete+insert` (default when `unique_key` is defined)

</VersionBlock>

