diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md
index b546f258f6c..432ed97635b 100644
--- a/website/blog/2023-12-20-partner-integration-guide.md
+++ b/website/blog/2023-12-20-partner-integration-guide.md
@@ -20,7 +20,7 @@ This guide doesn't include how to integrate with dbt Core. If you’re intereste
Instead, we're going to focus on integrating with dbt Cloud. Integrating with dbt Cloud is a key requirement to become a dbt Labs technology partner, opening the door to a variety of collaborative commercial opportunities.
Here I'll cover how to get started, potential use cases you want to solve for, and points of integrations to do so.
-
+
## New to dbt Cloud?
If you're new to dbt and dbt Cloud, we recommend you and your software developers try our [Getting Started Quickstarts](https://docs.getdbt.com/guides) after reading [What is dbt](https://docs.getdbt.com/docs/introduction). The documentation will help you familiarize yourself with how our users interact with dbt. By going through this, you will also create a sample dbt project to test your integration.
diff --git a/website/blog/2024-01-09-defer-in-development.md b/website/blog/2024-01-09-defer-in-development.md
new file mode 100644
index 00000000000..634fd1100c9
--- /dev/null
+++ b/website/blog/2024-01-09-defer-in-development.md
@@ -0,0 +1,160 @@
+---
+title: "More time coding, less time waiting: Mastering defer in dbt"
+description: "Learn how to take advantage of the defer to prod feature in dbt Cloud"
+slug: defer-to-prod
+
+authors: [dave_connors]
+
+tags: [analytics craft]
+hide_table_of_contents: false
+
+date: 2024-01-09
+is_featured: true
+---
+
+Picture this — you’ve got a massive dbt project, thousands of models chugging along, creating actionable insights for your stakeholders. A ticket comes your way — a model needs to be refactored! "No problem," you think to yourself, "I will simply make that change and test it locally!" You look at your lineage, and realize this model is many layers deep, buried underneath a long chain of tables and views.
+
+“OK,” you think further, “I’ll just run a `dbt build -s +my_changed_model` to make sure I have everything I need built into my dev schema and I can test my changes”. You run the command. You wait. You wait some more. You get some coffee, and completely take yourself out of your dbt development flow state. A lot of time and money down the drain to get to a point where you can *start* your work. That’s no good!
+
+Luckily, dbt’s defer functionality allows you to *only* build what you care about when you need it, and nothing more. This feature helps developers spend less time and money in development, shipping trusted data products faster. dbt Cloud offers native support for this workflow in development, so you can start deferring without any additional overhead!
+
+## Defer to prod or prefer to slog
+
+A lot of dbt’s magic relies on the elegance and simplicity of the `{{ ref() }}` function, which is how dbt builds your lineage graph and how it can be run in different environments — the `{{ ref() }}` functions compile differently depending on your environment settings, so that you can run your project in development and production without changing any code.
+
+Here's how a simple `{{ ref() }}` would compile in different environments:
+
+```sql
+-- in models/my_model.sql (source code)
+select * from {{ ref('model_a') }}
+```
+
+```sql
+-- in target/compiled/models/my_model.sql (dev environment)
+select * from analytics.dbt_dconnors.model_a
+```
+
+```sql
+-- in target/compiled/models/my_model.sql (prod environment)
+select * from analytics.analytics.model_a
+```
+
+All of that is made possible by the dbt `manifest.json`, [the artifact](https://docs.getdbt.com/reference/artifacts/manifest-json) that is produced each time you run a dbt command, containing the comprehensive and encyclopedic compendium of all things in your project. Each node is assigned a `unique_id` (like `model.my_project.my_model`) and the manifest stores all the metadata about that model in a dictionary associated with that id. This includes the data warehouse location that gets returned when you write `{{ ref('my_model') }}` in SQL. Different runs of your project in different environments result in different metadata written to the manifest.
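+
+For a concrete feel, here's a heavily trimmed, illustrative sketch of what a model's entry in `manifest.json` might contain (real entries carry many more fields, and the values here are hypothetical):
+
+```json
+{
+  "nodes": {
+    "model.my_project.model_a": {
+      "unique_id": "model.my_project.model_a",
+      "database": "analytics",
+      "schema": "analytics",
+      "name": "model_a",
+      "relation_name": "analytics.analytics.model_a"
+    }
+  }
+}
+```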
+
+Let’s think back to the hypothetical above — what if we made use of the production metadata to read in data from production, so that we don’t have to rebuild *everything* upstream of the model we’re changing? That’s exactly what `defer` does! When you supply dbt with a production version of the `manifest.json` artifact, and pass the `--defer` flag to your dbt command, dbt will resolve the `{{ ref() }}` functions for any resource upstream of your selected models with the *production metadata* — no need to rebuild anything you don’t have to!
+
+Let’s take a look at a simplified example — let’s say your project looks like this in production:
+
+
+
+And you’re tasked with making changes to `model_f`. Without defer, you would need to execute at minimum a `dbt run -s +model_f` to ensure all the upstream dependencies of `model_f` are present in your development schema before you can run `model_f`.* You just spent a whole bunch of time and money duplicating your models, and now your warehouse looks like this:
+
+
+
+With defer, we don't need to build anything other than the models that have changed and are now different from their production counterparts! Let’s tell dbt to use production metadata to resolve our refs, and only build the model we have changed — that command would be `dbt run -s model_f --defer`.**
+
+
+
+This results in a *much slimmer build* — we read data directly from the production versions of `model_b` and `model_c`, and don’t have to worry about building anything other than what we selected!
+
+\* [Another option](https://docs.getdbt.com/reference/commands/clone) is to run `dbt clone -s +model_f`, which will make clones of your production models into your development schema, making use of zero copy cloning where available. Check out this [great dev blog](https://docs.getdbt.com/blog/to-defer-or-to-clone) from Doug and Kshitij on when to use `clone` vs `defer`!
+
+** In dbt Core, you also have to tell dbt where to find the production artifacts! Otherwise it doesn’t know what to defer to. You can either use the `--state path/to/artifact/folder` option, or set a `DBT_STATE` environment variable.
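+
+For example, a dbt Core invocation might look like this (assuming you've downloaded production artifacts into a local `prod-artifacts/` folder; the folder name is arbitrary):
+
+```shell
+# point defer at the folder containing the production manifest.json
+dbt run -s model_f --defer --state prod-artifacts
+
+# or equivalently, via the environment variable
+DBT_STATE=prod-artifacts dbt run -s model_f --defer
+```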
+
+### Batteries included deferral in dbt Cloud
+
+dbt Cloud offers a seamless deferral experience in both the dbt Cloud IDE and the dbt Cloud CLI — dbt Cloud ***always*** has the latest run artifacts from your production environment. Rather than having to go through the painful process of somehow getting a copy of your latest production `manifest.json` into your local filesystem to defer to, and building a pipeline to always keep it fresh, dbt Cloud does all that work for you. When developing in dbt Cloud, the latest artifact is automatically provided to you under the hood, and dbt Cloud handles the `--defer` flag for you when you run commands in “defer mode”. dbt Cloud will use the artifacts from the deployment environment in your project marked as `Production` in the [environments settings](https://docs.getdbt.com/docs/deploy/deploy-environments#set-as-production-environment) in both the IDE and the Cloud CLI. Be sure to configure a production environment to unlock this feature!
+
+In the dbt Cloud IDE, there’s a simple toggle switch labeled `Defer to production`. Enable this toggle, and any dbt command you run in the IDE will defer to your production environment!
+
+
+
+The dbt Cloud CLI has this setting *on by default* — there’s nothing else you need to do to set this up! If you prefer not to defer, you can pass the `--no-defer` flag to override this behavior. You can also set an environment other than your production environment as the deferred-to environment via the `dbt-cloud` settings in your `dbt_project.yml`:
+
+```yaml
+dbt-cloud:
+  project-id: <project-id>    # your dbt Cloud project ID
+  defer-env-id: <env-id>    # ID of the environment to defer to
+```
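+
+For example, to skip deferral for a single Cloud CLI command:
+
+```shell
+# build only your selection, without reading from the deferred environment
+dbt run -s model_f --no-defer
+```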
+
+When you’re developing with dbt Cloud, you can defer right away, and completely avoid unnecessary model builds in development!
+
+### Other things to know about defer
+
+**Favoring state**
+
+One of the major gotchas in the defer workflow is that when you’re in defer mode, dbt assumes that all the objects in your development schema are part of your current work stream, and will prioritize those objects over the production objects when possible.
+
+Let’s take a look at that example above again, and pretend that some time before we went to make this edit, we did some work on `model_c`, and we have a local copy of `model_c` hanging out in our development schema:
+
+
+
+When you run `dbt run -s model_f --defer`, dbt will detect the development copy of `model_c` and say “Hey, y’know, I bet Dave is working on that model too, and he probably wants to make sure his changes to `model_c` work together with his changes to `model_f`. Because I am a kind and benevolent data transformation tool, I’ll make sure his `{{ ref('model_c') }}` function compiles to his development changes!” Thanks dbt!
+
+As a result, we’ll effectively see this behavior when we run our command:
+
+
+
+Where our code would compile from
+
+```sql
+-- in models/model_f.sql
+with
+
+model_b as (
+ select * from {{ ref('model_b') }}
+),
+
+model_c as (
+ select * from {{ ref('model_c') }}
+),
+
+...
+```
+
+to
+
+```sql
+-- in target/compiled/models/model_f.sql
+with
+
+model_b as (
+ select * from analytics.analytics.model_b
+),
+
+model_c as (
+    select * from analytics.dbt_dconnors.model_c
+),
+
+...
+```
+
+A mix of prod and dev models may not be what we want! To avoid this, we have a couple of options:
+
+1. **Start fresh every time:** The simplest way to avoid this issue is to always drop your development schema at the start of a new development session. That way, the only things that show up in your development schema are the things you intentionally selected with your commands!
+2. **Favor state:** Passing the `--favor-state` flag to your command tells dbt “Hey benevolent tool, go ahead and use what you find in the production manifest no matter what you find in my development schema” so that both `{{ ref() }}` functions in the example above point to the production schema, even if `model_c` was hanging around in there. See the sketch below.
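+
+With option 2, the command from our example becomes:
+
+```shell
+# resolve every upstream ref from the production manifest,
+# even though a dev copy of model_c exists
+dbt run -s model_f --defer --favor-state
+```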
+
+In this example, `model_c` is a relic of a previous development cycle, but I should be clear here that defaulting to using dev relations is *usually the right course of action* — generally, a dbt PR spans a few models, and you want to coordinate your changes across those models together. This behavior can just get a bit confusing if you’re encountering it for the first time!
+
+**When should I *not* defer to prod?**
+
+While defer is a faster and cheaper option for most folks in most situations, not every project can use it. The most common reason not to use defer is regulatory — defer to prod assumes that data can be shared between your production and development environments, so reading across these environments is not an issue. Some organizations, like healthcare companies, have restrictions around data access and sharing that preclude the basic defer structure presented here.
+
+### Call me Willem Defer
+
+
+
+Defer to prod is a powerful way to improve your development velocity with dbt, and dbt Cloud makes it easier than ever to make use of this feature! You too could look this cool while you’re saving time and money developing on your dbt projects!
diff --git a/website/docs/best-practices/how-we-mesh/mesh-4-faqs.md b/website/docs/best-practices/how-we-mesh/mesh-4-faqs.md
index 7119a3d90bd..2b11c3563eb 100644
--- a/website/docs/best-practices/how-we-mesh/mesh-4-faqs.md
+++ b/website/docs/best-practices/how-we-mesh/mesh-4-faqs.md
@@ -44,9 +44,9 @@ You can use model versions to:
A [model access modifier](/docs/collaborate/govern/model-access) in dbt determines if a model is accessible as an input to other dbt models and projects. It specifies where a model can be referenced using [the `ref` function](/reference/dbt-jinja-functions/ref). There are three types of access modifiers:
-1. **Private:** A model with a private access modifier is only referenceable by models within the same group. This is intended for models that are implementation details and are meant to be used only within a specific group of related models.
-2. **Protected:** Models with a protected access modifier can be referenced by any other model within the same dbt project or when the project is installed as a package. This is the default setting for all models, ensuring backward compatibility, especially when groups are assigned to an existing set of models.
-3. **Public:** A public model can be referenced across different groups, packages, or projects. This is suitable for stable and mature models that serve as interfaces for other teams or projects.
+* **Private:** A model with a private access modifier is only referenceable by models within the same group. This is intended for models that are implementation details and are meant to be used only within a specific group of related models.
+* **Protected:** Models with a protected access modifier can be referenced by any other model within the same dbt project or when the project is installed as a package. This is the default setting for all models, ensuring backward compatibility, especially when groups are assigned to an existing set of models.
+* **Public:** A public model can be referenced across different groups, packages, or projects. This is suitable for stable and mature models that serve as interfaces for other teams or projects.
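+
+For example, here's a sketch of how access modifiers are configured in a model's YAML file (the `finance` group and model names are hypothetical):
+
+```yaml
+groups:
+  - name: finance
+    owner:
+      name: Finance Team
+
+models:
+  - name: stg_payments
+    access: private    # referenceable only within the finance group
+    group: finance
+  - name: fct_orders
+    access: public     # referenceable across groups, packages, and projects
+```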
@@ -208,12 +208,12 @@ First things first: access to underlying data is always defined and enforced by
[Model access](/docs/collaborate/govern/model-access) defines where models can be referenced. It also informs the discoverability of those projects within dbt Explorer. Model `access` is defined in code, just like any other model configuration (`materialized`, `tags`, etc).
-**Public:** Models with `public` access can be referenced everywhere. These are the “data products” of your organization.
+* **Public:** Models with `public` access can be referenced everywhere. These are the “data products” of your organization.
-**Protected:** Models with `protected` access can only be referenced within the same project. This is the default level of model access.
+* **Protected:** Models with `protected` access can only be referenced within the same project. This is the default level of model access.
We are discussing a future extension to `protected` models to allow for their reference in _specific_ downstream projects. Please read [the GitHub issue](https://github.com/dbt-labs/dbt-core/issues/9340), and upvote/comment if you’re interested in this use case.
-**Private:** Model `groups` enable more-granular control over where `private` models can be referenced. By defining a group, and configuring models to belong to that group, you can restrict other models (not in the same group) from referencing any `private` models the group contains. Groups also provide a standard mechanism for defining the `owner` of all resources it contains.
+* **Private:** Model `groups` enable more-granular control over where `private` models can be referenced. By defining a group, and configuring models to belong to that group, you can restrict other models (not in the same group) from referencing any `private` models the group contains. Groups also provide a standard mechanism for defining the `owner` of all resources it contains.
Within dbt Explorer, `public` models are discoverable for every user in the dbt Cloud account — every public model is listed in the “multi-project” view. By contrast, `protected` and `private` models in a project are visible only to users who have access to that project (including read-only access).
diff --git a/website/docs/docs/build/incremental-models.md b/website/docs/docs/build/incremental-models.md
index cc45290ae15..9f1c206f5fb 100644
--- a/website/docs/docs/build/incremental-models.md
+++ b/website/docs/docs/build/incremental-models.md
@@ -236,7 +236,7 @@ Instead, whenever the logic of your incremental changes, execute a full-refresh
## About `incremental_strategy`
-There are various ways (strategies) to implement the concept of an incremental materializations. The value of each strategy depends on:
+There are various ways (strategies) to implement the concept of incremental materializations. The value of each strategy depends on:
* the volume of data,
* the reliability of your `unique_key`, and
@@ -450,5 +450,129 @@ The syntax depends on how you configure your `incremental_strategy`:
+### Built-in strategies
+
+Before diving into [custom strategies](#custom-strategies), it's important to understand the built-in incremental strategies in dbt and their corresponding macros:
+
+| `incremental_strategy` | Corresponding macro |
+|------------------------|----------------------------------------|
+| `append` | `get_incremental_append_sql` |
+| `delete+insert` | `get_incremental_delete_insert_sql` |
+| `merge` | `get_incremental_merge_sql` |
+| `insert_overwrite` | `get_incremental_insert_overwrite_sql` |
+
+
+For example, the built-in `append` strategy can be defined and used with the following files:
+
+
+
+```sql
+{% macro get_incremental_append_sql(arg_dict) %}
+
+ {% do return(some_custom_macro_with_sql(arg_dict["target_relation"], arg_dict["temp_relation"], arg_dict["unique_key"], arg_dict["dest_columns"], arg_dict["incremental_predicates"])) %}
+
+{% endmacro %}
+
+
+{% macro some_custom_macro_with_sql(target_relation, temp_relation, unique_key, dest_columns, incremental_predicates) %}
+
+ {%- set dest_cols_csv = get_quoted_csv(dest_columns | map(attribute="name")) -%}
+
+ insert into {{ target_relation }} ({{ dest_cols_csv }})
+ (
+ select {{ dest_cols_csv }}
+ from {{ temp_relation }}
+ )
+
+{% endmacro %}
+```
+
+
+Define a model `models/my_model.sql`:
+
+```sql
+{{ config(
+ materialized="incremental",
+ incremental_strategy="append",
+) }}
+
+select * from {{ ref("some_model") }}
+```
+
+### Custom strategies
+
+
+
+Custom incremental strategies can be defined beginning in dbt v1.2.
+
+
+
+
+
+As an easier alternative to [creating an entirely new materialization](/guides/create-new-materializations), users can define and use their own custom incremental strategies by:
+
+1. defining a macro named `get_incremental_STRATEGY_sql`. Note that `STRATEGY` is a placeholder and you should replace it with the name of your custom incremental strategy.
+2. configuring `incremental_strategy: STRATEGY` within an incremental model
+
+dbt won't validate user-defined strategies; it will simply look for a macro with that name and raise an error if it can't find one.
+
+For example, a user-defined strategy named `insert_only` can be defined and used with the following files:
+
+
+
+```sql
+{% macro get_incremental_insert_only_sql(arg_dict) %}
+
+ {% do return(some_custom_macro_with_sql(arg_dict["target_relation"], arg_dict["temp_relation"], arg_dict["unique_key"], arg_dict["dest_columns"], arg_dict["incremental_predicates"])) %}
+
+{% endmacro %}
+
+
+{% macro some_custom_macro_with_sql(target_relation, temp_relation, unique_key, dest_columns, incremental_predicates) %}
+
+ {%- set dest_cols_csv = get_quoted_csv(dest_columns | map(attribute="name")) -%}
+
+ insert into {{ target_relation }} ({{ dest_cols_csv }})
+ (
+ select {{ dest_cols_csv }}
+ from {{ temp_relation }}
+ )
+
+{% endmacro %}
+```
+
+
+
+
+
+```sql
+{{ config(
+ materialized="incremental",
+ incremental_strategy="insert_only",
+ ...
+) }}
+
+...
+```
+
+
+
+### Custom strategies from a package
+
+To use the `merge_null_safe` custom incremental strategy from the `example` package:
+- [Install the package](/docs/build/packages#how-do-i-add-a-package-to-my-project)
+- Then add the following macro to your project:
+
+
+
+```sql
+{% macro get_incremental_merge_null_safe_sql(arg_dict) %}
+ {% do return(example.get_incremental_merge_null_safe_sql(arg_dict)) %}
+{% endmacro %}
+```
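+
+Then configure your model to use the packaged strategy like any other:
+
+```sql
+{{ config(
+    materialized="incremental",
+    incremental_strategy="merge_null_safe"
+) }}
+
+select * from {{ ref("some_model") }}
+```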
+
+
+
+
diff --git a/website/docs/docs/build/saved-queries.md b/website/docs/docs/build/saved-queries.md
index 7b88a052726..2ad16b86f0d 100644
--- a/website/docs/docs/build/saved-queries.md
+++ b/website/docs/docs/build/saved-queries.md
@@ -20,17 +20,17 @@ The following is an example of a saved query:
```yaml
saved_queries:
- name: p0_booking
- description: Booking-related metrics that are of the highest priority.
- query_params:
- metrics:
- - bookings
- - instant_bookings
- group_by:
- - TimeDimension('metric_time', 'day')
- - Dimension('listing__capacity_latest')
- where:
- - "{{ Dimension('listing__capacity_latest') }} > 3"
+ - name: p0_booking
+ description: Booking-related metrics that are of the highest priority.
+ query_params:
+ metrics:
+ - bookings
+ - instant_bookings
+ group_by:
+ - TimeDimension('metric_time', 'day')
+ - Dimension('listing__capacity_latest')
+ where:
+ - "{{ Dimension('listing__capacity_latest') }} > 3"
```
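+
+If you're using the dbt Cloud CLI, you can then run the saved query by name (assuming your account has the dbt Semantic Layer configured and your CLI version supports saved-query selection):
+
+```shell
+dbt sl query --saved-query p0_booking
+```
+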
### FAQs
diff --git a/website/docs/docs/community-adapters.md b/website/docs/docs/community-adapters.md
index d1e63f03128..1faf2fd9e25 100644
--- a/website/docs/docs/community-adapters.md
+++ b/website/docs/docs/community-adapters.md
@@ -17,4 +17,4 @@ Community adapters are adapter plugins contributed and maintained by members of
| [TiDB](/docs/core/connect-data-platform/tidb-setup) | [Firebolt](/docs/core/connect-data-platform/firebolt-setup) | [MindsDB](/docs/core/connect-data-platform/mindsdb-setup)
| [Vertica](/docs/core/connect-data-platform/vertica-setup) | [AWS Glue](/docs/core/connect-data-platform/glue-setup) | [MySQL](/docs/core/connect-data-platform/mysql-setup) |
| [Upsolver](/docs/core/connect-data-platform/upsolver-setup) | [Databend Cloud](/docs/core/connect-data-platform/databend-setup) | [fal - Python models](/docs/core/connect-data-platform/fal-setup) |
-| [TimescaleDB](https://dbt-timescaledb.debruyn.dev/) | | |
+| [TimescaleDB](https://dbt-timescaledb.debruyn.dev/) | [Extrica](/docs/core/connect-data-platform/extrica-setup) | |
diff --git a/website/docs/docs/core/connect-data-platform/extrica-setup.md b/website/docs/docs/core/connect-data-platform/extrica-setup.md
new file mode 100644
index 00000000000..8125e6e3749
--- /dev/null
+++ b/website/docs/docs/core/connect-data-platform/extrica-setup.md
@@ -0,0 +1,80 @@
+---
+title: "Extrica Setup"
+description: "Read this guide to learn about the Extrica Trino Query Engine setup in dbt."
+id: "extrica-setup"
+meta:
+ maintained_by: Extrica, Trianz
+ authors: Gaurav Mittal, Viney Kumar, Mohammed Feroz, and Mrinal Mayank
+ github_repo: 'extricatrianz/dbt-extrica'
+ pypi_package: 'dbt-extrica'
+ min_core_version: 'v1.7.2'
+ cloud_support: 'Not Supported'
+ min_supported_version: 'n/a'
+ platform_name: 'Extrica'
+---
+## Overview of {frontMatter.meta.pypi_package}
+
+
+ - Maintained by: {frontMatter.meta.maintained_by}
+ - Authors: {frontMatter.meta.authors}
+ - GitHub repo: {frontMatter.meta.github_repo}
+ - PyPI package: {frontMatter.meta.pypi_package}
+ - Supported dbt Core version: {frontMatter.meta.min_core_version} and newer
+ - dbt Cloud support: {frontMatter.meta.cloud_support}
+ - Minimum data platform version: {frontMatter.meta.min_supported_version}
+
+## Installing {frontMatter.meta.pypi_package}
+
+Use `pip` to install the adapter, which automatically installs `dbt-core` and any additional dependencies. Use the following command for installation:
+
+python -m pip install {frontMatter.meta.pypi_package}
+
+
+## Connecting to {frontMatter.meta.platform_name}
+
+#### Example profiles.yml
+Here is an example of a dbt-extrica profile. At a minimum, you need to specify `type`, `method`, `username`, `password`, `host`, `port`, `schema`, `catalog` and `threads`.
+
+
+```yaml
+extrica:  # profile name; this must match the profile in your dbt_project.yml
+ outputs:
+ dev:
+ type: extrica
+ method: jwt
+ username: [username for jwt auth]
+ password: [password for jwt auth]
+ host: [extrica hostname]
+ port: [port number]
+ schema: [dev_schema]
+ catalog: [catalog_name]
+ threads: [1 or more]
+
+ prod:
+ type: extrica
+ method: jwt
+ username: [username for jwt auth]
+ password: [password for jwt auth]
+ host: [extrica hostname]
+ port: [port number]
+      schema: [prod_schema]
+ catalog: [catalog_name]
+ threads: [1 or more]
+ target: dev
+
+```
+
+
+#### Description of Extrica Profile Fields
+
+| Parameter | Type | Description |
+|------------|----------|------------------------------------------|
+| type | string | Specifies the type of dbt adapter (Extrica). |
+| method | string | Authentication method; must be `jwt`. |
+| username | string | Username for JWT authentication. The obtained JWT token is used to initialize a trino.auth.JWTAuthentication object. |
+| password | string | Password for JWT authentication. The obtained JWT token is used to initialize a trino.auth.JWTAuthentication object. |
+| host | string | The hostname or IP address of Extrica's Trino server. |
+| port | integer | The port number on which Extrica's Trino server is listening. |
+| schema | string | Schema or database name for the connection. |
+| catalog | string | Name of the catalog representing the data source. |
+| threads | integer | Number of threads for parallel execution of queries. (1 or more) |
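+
+After filling in `profiles.yml`, you can verify the connection with dbt's standard debug command:
+
+```shell
+# run from your dbt project directory
+dbt debug --target dev
+```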
diff --git a/website/docs/guides/manual-install-qs.md b/website/docs/guides/manual-install-qs.md
index e9c1af259ac..fcd1e5e9599 100644
--- a/website/docs/guides/manual-install-qs.md
+++ b/website/docs/guides/manual-install-qs.md
@@ -70,7 +70,7 @@ $ pwd
-6. Update the following values in the `dbt_project.yml` file:
+6. dbt provides the following values in the `dbt_project.yml` file:
@@ -92,7 +92,7 @@ models:
## Connect to BigQuery
-When developing locally, dbt connects to your using a [profile](/docs/core/connect-data-platform/connection-profiles), which is a YAML file with all the connection details to your warehouse.
+When developing locally, dbt connects to your warehouse using a [profile](/docs/core/connect-data-platform/connection-profiles), which is a YAML file with all the connection details to your warehouse.
1. Create a file in the `~/.dbt/` directory named `profiles.yml`.
2. Move your BigQuery keyfile into this directory.
diff --git a/website/docs/reference/dbt-jinja-functions/target.md b/website/docs/reference/dbt-jinja-functions/target.md
index e7d08db592f..968f64d0f8d 100644
--- a/website/docs/reference/dbt-jinja-functions/target.md
+++ b/website/docs/reference/dbt-jinja-functions/target.md
@@ -1,20 +1,18 @@
---
-title: "About target variable"
+title: "About target variables"
sidebar_label: "target"
id: "target"
-description: "Contains information about your connection to the warehouse."
+description: "The `target` variable contains information about your connection to the warehouse."
---
-`target` contains information about your connection to the warehouse.
+The `target` variable contains information about your connection to the warehouse.
-* **dbt Core:** These values are based on the target defined in your [`profiles.yml` file](/docs/core/connect-data-platform/profiles.yml)
-* **dbt Cloud Scheduler:**
- * `target.name` is defined per job as described [here](/docs/build/custom-target-names).
- * For all other attributes, the values are defined by the deployment connection. To check these values, click **Deploy** from the upper left and select **Environments**. Then, select the relevant deployment environment, and click **Settings**.
-* **dbt Cloud IDE:** The values are defined by your connection and credentials. To check any of these values, head to your account (via your profile image in the top right hand corner), and select the project under "Credentials".
+- **dbt Core:** These values are based on the target defined in your [profiles.yml](/docs/core/connect-data-platform/profiles.yml) file. Please note that for certain adapters, additional configuration steps may be required. Refer to the [set up page](/docs/core/connect-data-platform/about-core-connections) for your data platform.
+- **dbt Cloud:** To learn more about setting up your adapter in dbt Cloud, refer to [About data platform connections](/docs/cloud/connect-data-platform/about-connections).
+ - **[dbt Cloud Scheduler](/docs/deploy/job-scheduler)**: `target.name` is defined per job as described in [Custom target names](/docs/build/custom-target-names). For other attributes, values are defined by the deployment connection. To check these values, click **Deploy** and select **Environments**. Then, select the relevant deployment environment, and click **Settings**.
+ - **[dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud)**: These values are defined by your connection and credentials. To edit these values, click the gear icon in the top right, select **Profile settings**, and click **Credentials**. Select and edit a project to set up the credentials and target name.
-
-Some configs are shared between all adapters, while others are adapter-specific.
+Some configurations are shared between all adapters, while others are adapter-specific.
## Common
| Variable | Example | Description |
@@ -54,6 +52,7 @@ Some configs are shared between all adapters, while others are adapter-specific.
| `target.dataset` | dbt_alice | The dataset of the active profile |
## Examples
+
### Use `target.name` to limit data in dev
As long as you use sensible target names, you can perform conditional logic to limit data when working in dev.
@@ -68,6 +67,7 @@ where created_at >= dateadd('day', -3, current_date)
```
### Use `target.name` to change your source database
+
If you have specific Snowflake databases configured for your dev/qa/prod environments,
you can set up your sources to compile to different databases depending on your
environment.
diff --git a/website/sidebars.js b/website/sidebars.js
index 27bcd1147a3..89b1e005a8c 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -212,6 +212,7 @@ const sidebarSettings = {
"docs/core/connect-data-platform/decodable-setup",
"docs/core/connect-data-platform/upsolver-setup",
"docs/core/connect-data-platform/starrocks-setup",
+ "docs/core/connect-data-platform/extrica-setup",
],
},
],
diff --git a/website/static/img/blog/2024-01-09-defer-in-development/defer-toggle.png b/website/static/img/blog/2024-01-09-defer-in-development/defer-toggle.png
new file mode 100644
index 00000000000..7161dc68b93
Binary files /dev/null and b/website/static/img/blog/2024-01-09-defer-in-development/defer-toggle.png differ
diff --git a/website/static/img/blog/2024-01-09-defer-in-development/prod-and-dev-defer.png b/website/static/img/blog/2024-01-09-defer-in-development/prod-and-dev-defer.png
new file mode 100644
index 00000000000..7ec96a7b598
Binary files /dev/null and b/website/static/img/blog/2024-01-09-defer-in-development/prod-and-dev-defer.png differ
diff --git a/website/static/img/blog/2024-01-09-defer-in-development/prod-and-dev-full.png b/website/static/img/blog/2024-01-09-defer-in-development/prod-and-dev-full.png
new file mode 100644
index 00000000000..4381a13abed
Binary files /dev/null and b/website/static/img/blog/2024-01-09-defer-in-development/prod-and-dev-full.png differ
diff --git a/website/static/img/blog/2024-01-09-defer-in-development/prod-and-dev-mixed.png b/website/static/img/blog/2024-01-09-defer-in-development/prod-and-dev-mixed.png
new file mode 100644
index 00000000000..1020c3b65f0
Binary files /dev/null and b/website/static/img/blog/2024-01-09-defer-in-development/prod-and-dev-mixed.png differ
diff --git a/website/static/img/blog/2024-01-09-defer-in-development/prod-and-dev-model-c.png b/website/static/img/blog/2024-01-09-defer-in-development/prod-and-dev-model-c.png
new file mode 100644
index 00000000000..3f48255ac12
Binary files /dev/null and b/website/static/img/blog/2024-01-09-defer-in-development/prod-and-dev-model-c.png differ
diff --git a/website/static/img/blog/2024-01-09-defer-in-development/prod-environment-plain.png b/website/static/img/blog/2024-01-09-defer-in-development/prod-environment-plain.png
new file mode 100644
index 00000000000..5c2860411ec
Binary files /dev/null and b/website/static/img/blog/2024-01-09-defer-in-development/prod-environment-plain.png differ
diff --git a/website/static/img/blog/2024-01-09-defer-in-development/willem.png b/website/static/img/blog/2024-01-09-defer-in-development/willem.png
new file mode 100644
index 00000000000..bd38e9b0bd4
Binary files /dev/null and b/website/static/img/blog/2024-01-09-defer-in-development/willem.png differ
diff --git a/website/vercel.json b/website/vercel.json
index 35799e24061..f9dd018357b 100644
--- a/website/vercel.json
+++ b/website/vercel.json
@@ -3847,11 +3847,6 @@
"destination": "/dbt-cloud/api",
"permanent": true
},
- {
- "source": "/reference/data-test-configs",
- "destination": "/reference/test-configs",
- "permanent": true
- },
{
"source": "/reference/declaring-properties",
"destination": "/reference/configs-and-properties",