diff --git a/contributing/content-style-guide.md b/contributing/content-style-guide.md
index 022efa127a5..a8520bc0e0d 100644
--- a/contributing/content-style-guide.md
+++ b/contributing/content-style-guide.md
@@ -624,6 +624,12 @@ When describing icons that appear on-screen, use the [_Google Material Icons_](h
:white_check_mark:Click on the menu icon
+#### Upload icons
+If you're using icons to document things like [third-party vendors](https://docs.getdbt.com/docs/cloud-integrations/avail-sl-integrations), you need to add the icon file in the following locations to ensure the icons render correctly in light and dark mode:
+
+- `website/static/img/icons`
+- `website/static/img/icons/white`
+
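+For example, to document a hypothetical Tableau integration, you'd add the same icon file (the filename here is illustrative) to both folders:
+
+```
+website/static/img/icons/tableau.svg        # rendered in light mode
+website/static/img/icons/white/tableau.svg  # rendered in dark mode
+```
+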
### Image names
Two words that are either adjectives or nouns describing the name of a file separated by an underscore `_` (known as `snake_case`). The two words can also be separated by a hyphen (`kebab-case`).
diff --git a/contributing/lightbox.md b/contributing/lightbox.md
index 5f35b4d9639..95feccbe779 100644
--- a/contributing/lightbox.md
+++ b/contributing/lightbox.md
@@ -25,4 +25,9 @@ You can use the Lightbox component to add an image or screenshot to your page. I
/>
```
+Note that if you're using icons to document things like third-party vendors, you need to add the icon file in the following locations to ensure the icons render correctly in light and dark mode:
+
+- `website/static/img/icons`
+- `website/static/img/icons/white`
+
diff --git a/website/docs/docs/build/dimensions.md b/website/docs/docs/build/dimensions.md
index affd74f81aa..7ad52704c4f 100644
--- a/website/docs/docs/build/dimensions.md
+++ b/website/docs/docs/build/dimensions.md
@@ -12,7 +12,7 @@ Dimensions represent the non-aggregatable columns in your data set, which are th
Groups are defined within semantic models, alongside entities and measures, and correspond to non-aggregatable columns in your dbt model that provide categorical or time-based context. In SQL, dimensions are typically included in the GROUP BY clause.-->
-All dimensions require a `name`, `type`, and can optionally include an `expr` parameter. The `name` for your Dimension must be unique wihtin the same semantic model.
+All dimensions require a `name`, `type`, and can optionally include an `expr` parameter. The `name` for your Dimension must be unique within the same semantic model.
| Parameter | Description | Type |
| --------- | ----------- | ---- |
diff --git a/website/docs/docs/build/metricflow-time-spine.md b/website/docs/docs/build/metricflow-time-spine.md
index 5de3221a677..b5687c892b8 100644
--- a/website/docs/docs/build/metricflow-time-spine.md
+++ b/website/docs/docs/build/metricflow-time-spine.md
@@ -8,7 +8,8 @@ tags: [Metrics, Semantic Layer]
It's common in analytics engineering to have a date dimension or "time-spine" table as a base table for different types of time-based joins and aggregations. The structure of this table is typically a base column of daily or hourly dates, with additional columns for other time grains, like fiscal quarters, defined based on the base column. You can join other tables to the time spine on the base column to calculate metrics like revenue at a point in time, or to aggregate to a specific time grain.
-MetricFlow requires you to define a time-spine table as a model-level configuration in the Semantic Layer for time-based joins and aggregations, such as cumulative metrics. This configuration informs dbt which model should be used for time range joins. It is especially useful for cumulative metrics or calculating time-based offsets. The time-spine model is joined to other tables when calculating certain types of metrics or dimensions. MetricFlow will join the time-spine model in the compiled SQL for the following types of metrics and dimensions:
+MetricFlow requires you to define at least one dbt model that provides a time-spine table, and then specify (in YAML) the columns to be used for time-based joins. MetricFlow will join against the time-spine model for the following types of metrics and dimensions:
+
- [Cumulative metrics](/docs/build/cumulative)
- [Metric offsets](/docs/build/derived#derived-metric-offset)
- [Conversion metrics](/docs/build/conversion)
@@ -19,20 +20,18 @@ To see the generated SQL for the metric and dimension types that use time-spine
## Configuring time-spine in YAML
-- The time spine is a special model that tells dbt and MetricFlow how to use specific columns by defining their properties.
-- The [`models` key](/reference/model-properties) for the time spine must be in your `models/` directory.
+- Each time spine is a normal dbt model with extra configurations that tell dbt and MetricFlow how to use specific columns by defining their properties.
+- You likely already have a calendar table in your project which you can use. If you don't, review the [example time-spine tables](#example-time-spine-tables) for sample code.
+- You add the configurations under the `time_spine` key in that [model's properties](/reference/model-properties), just as you would add a description or tests.
- You only need to configure time-spine models that the Semantic Layer should recognize.
- At a minimum, define a time-spine table for a daily grain.
-- You can optionally define a time-spine table for a different granularity, like hourly.
-- Note that if you don’t have a date or calendar model in your project, you'll need to create one.
+- You can optionally define additional time-spine tables for different granularities, like hourly. Review the [granularity considerations](#granularity-considerations) when deciding which tables to create.
- If you're looking to specify the grain of a time dimension so that MetricFlow can transform the underlying column to the required granularity, refer to the [Time granularity documentation](/docs/build/dimensions?dimension=time_gran).
-If you already have a date dimension or time-spine table in your dbt project, you can point MetricFlow to this table by updating the `model` configuration to use this table in the Semantic Layer. This is a model-level configuration that tells dbt to use the model for time range joins in the Semantic Layer.
-
For example, given the following directory structure, you can create two time spine configurations, `time_spine_hourly` and `time_spine_daily`. MetricFlow supports granularities ranging from milliseconds to years. Refer to the [Dimensions page](/docs/build/dimensions?dimension=time_gran#time) (time_granularity tab) to find the full list of supported granularities.
-:::tip
+:::tip
Previously, you had to create a model called `metricflow_time_spine` in your dbt project. Now, if your project already includes a date dimension or time spine table, you can simply configure MetricFlow to use that table by updating the `model` setting in the Semantic Layer.
If you don’t have a date dimension table, you can still create one by using the code snippet below to build your time spine model.
@@ -46,34 +45,38 @@ If you don’t have a date dimension table, you can still create one by using th
```yaml
[models:](/reference/model-properties)
- name: time_spine_hourly
+ description: A date spine with one row per hour, ranging from 2020-01-01 to 2039-12-31.
time_spine:
standard_granularity_column: date_hour # column for the standard grain of your table
columns:
- name: date_hour
granularity: hour # set granularity at column-level for standard_granularity_column
+
- name: time_spine_daily
+ description: A date spine with one row per day, ranging from 2020-01-01 to 2039-12-31.
time_spine:
standard_granularity_column: date_day # column for the standard grain of your table
columns:
- name: date_day
granularity: day # set granularity at column-level for standard_granularity_column
```
+
-For an example project, refer to our [Jaffle shop](https://github.com/dbt-labs/jaffle-sl-template/blob/main/models/marts/_models.yml) example. Note that the [`models` key](/reference/model-properties) in the time spine configuration must be placed in your `models/` directory.
+For an example project, refer to our [Jaffle shop](https://github.com/dbt-labs/jaffle-sl-template/blob/main/models/marts/_models.yml) example.
-Now, break down the configuration above. It's pointing to a model called `time_spine_daily`. It sets the time spine configurations under the `time_spine` key. The `standard_granularity_column` is the lowest grain of the table, in this case, it's hourly. It needs to reference a column defined under the columns key, in this case, `date_hour`. Use the `standard_granularity_column` as the join key for the time spine table when joining tables in MetricFlow. Here, the granularity of the `standard_granularity_column` is set at the column level, in this case, `hour`.
+Now, break down the configuration above. It's pointing to a model called `time_spine_daily`, and all the configuration is colocated with the rest of the [model's properties](/reference/model-properties). It sets the time spine configurations under the `time_spine` key. The `standard_granularity_column` is the lowest grain of the table, in this case, daily. It needs to reference a column defined under the `columns` key, in this case, `date_day`. Use the `standard_granularity_column` as the join key for the time spine table when joining tables in MetricFlow. Here, the granularity of the `standard_granularity_column` is set at the column level, in this case, `day`.
+### Considerations when choosing which granularities to create {#granularity-considerations}
-If you need to create a time spine table from scratch, you can do so by adding the following code to your dbt project.
-The example creates a time spine at a daily grain and an hourly grain. A few things to note when creating time spine models:
-* MetricFlow will use the time spine with the largest compatible granularity for a given query to ensure the most efficient query possible. For example, if you have a time spine at a monthly grain, and query a dimension at a monthly grain, MetricFlow will use the monthly time spine. If you only have a daily time spine, MetricFlow will use the daily time spine and date_trunc to month.
-* You can add a time spine for each granularity you intend to use if query efficiency is more important to you than configuration time, or storage constraints. For most engines, the query performance difference should be minimal and transforming your time spine to a coarser grain at query time shouldn't add significant overhead to your queries.
-* We recommend having a time spine at the finest grain used in any of your dimensions to avoid unexpected errors. i.e., if you have dimensions at an hourly grain, you should have a time spine at an hourly grain.
+- MetricFlow will use the time spine with the largest compatible granularity for a given query to ensure the most efficient query possible. For example, if you have a time spine at a monthly grain and query a dimension at a monthly grain, MetricFlow will use the monthly time spine. If you only have a daily time spine, MetricFlow will use the daily time spine and apply `date_trunc` to aggregate it to a monthly grain.
+- You can add a time spine for each granularity you intend to use if query efficiency is more important to you than configuration time or storage constraints. For most engines, the query performance difference should be minimal, and transforming your time spine to a coarser grain at query time shouldn't add significant overhead to your queries.
+- We recommend having a time spine at the finest grain used in any of your dimensions to avoid unexpected errors. For example, if you have dimensions at an hourly grain, you should have a time spine at an hourly grain.
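+
+As a minimal sketch, assuming your project has `dim_dates_daily` and `dim_dates_monthly` calendar models (names are illustrative), configuring a daily and a monthly time spine side by side looks like this:
+
+```yaml
+models:
+  - name: dim_dates_daily
+    time_spine:
+      standard_granularity_column: date_day
+    columns:
+      - name: date_day
+        granularity: day
+
+  - name: dim_dates_monthly
+    time_spine:
+      standard_granularity_column: date_month
+    columns:
+      - name: date_month
+        granularity: month
+```
+
+With both configured, MetricFlow can serve monthly queries directly from the monthly spine instead of truncating the daily one.
+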
## Example time-spine tables
### Daily
+
@@ -140,9 +143,11 @@ select * from final
where date_day > dateadd(year, -4, current_timestamp())
and date_hour < dateadd(day, 30, current_timestamp())
```
+
### Daily (BigQuery)
+
Use this model if you're using BigQuery. BigQuery supports `DATE()` instead of `TO_DATE()`:
@@ -170,6 +175,7 @@ from final
where date_day > dateadd(year, -4, current_timestamp())
and date_hour < dateadd(day, 30, current_timestamp())
```
+
@@ -200,12 +206,14 @@ from final
where date_day > dateadd(year, -4, current_timestamp())
and date_hour < dateadd(day, 30, current_timestamp())
```
+
-### Hourly
+### Hourly
+
```sql
@@ -237,4 +245,5 @@ select * from final
where date_day > dateadd(year, -4, current_timestamp())
and date_hour < dateadd(day, 30, current_timestamp())
```
+
diff --git a/website/docs/docs/cloud-integrations/configure-auto-exposures.md b/website/docs/docs/cloud-integrations/configure-auto-exposures.md
index 41448dd5f9e..24364077614 100644
--- a/website/docs/docs/cloud-integrations/configure-auto-exposures.md
+++ b/website/docs/docs/cloud-integrations/configure-auto-exposures.md
@@ -6,12 +6,10 @@ description: "Import and auto-generate exposures from dashboards and understand
image: /img/docs/cloud-integrations/auto-exposures/explorer-lineage2.jpg
---
-# Configure auto-exposures
+# Configure auto-exposures
As a data team, it’s critical that you have context into the downstream use cases and users of your data products. [Auto-exposures](/docs/collaborate/auto-exposures) integrates natively with Tableau and [auto-generates downstream lineage](/docs/collaborate/auto-exposures#view-auto-exposures-in-dbt-explorer) in dbt Explorer for a richer experience.
-:::info Available in beta
-Auto-exposures are currently available in beta to a limited group of users and are gradually being rolled out. If you're interested in gaining access or learning more, stay tuned for updates!
-:::
+
Auto-exposures help data teams optimize their efficiency and ensure data quality by:
- Helping users understand how their models are used in downstream analytics tools to inform investments and reduce incidents — ultimately building trust and confidence in data products.
diff --git a/website/docs/docs/collaborate/auto-exposures.md b/website/docs/docs/collaborate/auto-exposures.md
index 371f6e80248..2b1d649abd1 100644
--- a/website/docs/docs/collaborate/auto-exposures.md
+++ b/website/docs/docs/collaborate/auto-exposures.md
@@ -7,12 +7,9 @@ pagination_next: "docs/collaborate/data-tile"
image: /img/docs/cloud-integrations/auto-exposures/explorer-lineage.jpg
---
-# Auto-exposures
+# Auto-exposures
As a data team, it’s critical that you have context into the downstream use cases and users of your data products. Auto-exposures integrates natively with Tableau (Power BI coming soon) and auto-generates downstream lineage in dbt Explorer for a richer experience.
-:::info Available in beta
-Auto-exposures are currently available in beta to a limited group of users and are gradually being rolled out. If you're interested in gaining access or learning more, stay tuned for updates!
-:::
Auto-exposures helps users understand how their models are used in downstream analytics tools to inform investments and reduce incidents — ultimately building trust and confidence in data products. It imports and auto-generates exposures based on Tableau dashboards, with user-defined curation.
diff --git a/website/docs/docs/collaborate/explore-projects.md b/website/docs/docs/collaborate/explore-projects.md
index 3af5e9886f8..9e27c2afa47 100644
--- a/website/docs/docs/collaborate/explore-projects.md
+++ b/website/docs/docs/collaborate/explore-projects.md
@@ -20,9 +20,9 @@ import ExplorerCourse from '/snippets/_explorer-course-link.md';
- You have at least one successful job run in the deployment environment. Note that [CI jobs](/docs/deploy/ci-jobs) do not update dbt Explorer.
- You are on the dbt Explorer page. To do this, select **Explore** from the navigation in dbt Cloud.
-## Overview page
+## Overview page
-Navigate the dbt Explorer overview page to access your project's resources and metadata, available in beta. The page includes the following sections:
+Navigate the dbt Explorer overview page to access your project's resources and metadata. The page includes the following sections:
- **Search bar** — [Search](#search-resources) for resources in your project by keyword. You can also use filters to refine your search results.
- **Sidebar** — Use the left sidebar to access model [performance](/docs/collaborate/model-performance), [project recommendations](/docs/collaborate/project-recommendations) in the **Project details** section. Browse your project's [resources, file tree, and database](#browse-with-the-sidebar) in the lower section of the sidebar.
@@ -96,7 +96,7 @@ To explore the lineage graphs of tests and macros, view [their resource details
### Example of full lineage graph
-Example of exploring the `order_items` model in the project's lineage graph:
+Example of exploring a model in the project's lineage graph:
@@ -162,12 +162,64 @@ Under the **Models** option, you can filter on model properties (access or m
+
+
+Trust signal icons offer a quick, at-a-glance view of data health when browsing your models in dbt Explorer. These icons keep you informed of the status of your model's health using the indicators **Healthy**, **Caution**, **Degraded**, and **Unknown**. For accurate health data, ensure the resource is up-to-date and has had a recent job run.
+
+Each trust signal icon reflects key data health components, such as test success status, missing resource descriptions, absence of builds in 30-day windows, and more.
+
+To access trust signals:
+- Use the search function or click on **Models** or **Sources** under the **Resource** tab.
+- View the icons under the **Health** column.
+- Hover over or click the trust signal to see detailed information.
+- For sources, the trust signal also indicates the source freshness status.
+
+
+
+
+
+
+
### Example of keyword search
-Example of results from searching on the keyword `item` and applying the filters models, description, and code:
+Example of results from searching on the keyword `customers` and applying the **models**, **description**, and **code** filters. Trust signals are visible to the right of the model name in the search results.
-
## Browse with the sidebar
From the sidebar, you can browse your project's resources, its file tree, and the database.
@@ -201,6 +253,7 @@ In the upper right corner of the resource details page, you can:
+- **Trust signal icon** — Icons offering a quick, at-a-glance view of data health. These icons indicate whether a model is Healthy, Caution, Degraded, or Unknown. Hover over an icon to view detailed information about the model's health.
- **Status bar** (below the page title) — Information on the last time the model ran, whether the run was successful, how the data is materialized, number of rows, and the size of the model.
- **General** tab includes:
- **Lineage** graph — The model’s lineage graph that you can interact with. The graph includes one upstream node and one downstream node from the model. Click the Expand icon in the graph's upper right corner to view the model in full lineage graph mode.
diff --git a/website/docs/docs/dbt-versions/release-notes.md b/website/docs/docs/dbt-versions/release-notes.md
index 7c2614b2c10..11fdfd4dedf 100644
--- a/website/docs/docs/dbt-versions/release-notes.md
+++ b/website/docs/docs/dbt-versions/release-notes.md
@@ -18,8 +18,14 @@ Release notes are grouped by month for both multi-tenant and virtual private clo
\* The official release date for this new format of release notes is May 15th, 2024. Historical release notes for prior dates may not reflect all available features released earlier this year or their tenancy availability.
+## October 2024
+
+- **New:** dbt Explorer now includes trust signal icons, which are currently available as a [Preview](/docs/dbt-versions/product-lifecycles#dbt-cloud). Trust signals offer a quick, at-a-glance view of data health when browsing your dbt models in Explorer. These icons indicate whether a model is **Healthy**, **Caution**, **Degraded**, or **Unknown**. For accurate health data, ensure the resource is up-to-date and has had a recent job run. Refer to [Trust signals](/docs/collaborate/explore-projects#trust-signals-for-resources) for more information.
+- **New:** Auto-exposures are now available in Preview in dbt Cloud. Auto-exposures help users understand how their models are used in downstream analytics tools to inform investments and reduce incidents. The feature imports and auto-generates exposures based on Tableau dashboards, with user-defined curation. To learn more, refer to [Auto-exposures](/docs/collaborate/auto-exposures).
+
## September 2024
+- **New**: Use the new recommended syntax for [defining `foreign_key` constraints](/reference/resource-properties/constraints) using `refs`, available in dbt Cloud Versionless. This will soon be released in dbt Core v1.9. The new syntax captures dependencies and works across different environments.
- **Enhancement**: You can now run [Semantic Layer commands](/docs/build/metricflow-commands) in the [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud). The supported commands are `dbt sl list`, `dbt sl list metrics`, `dbt sl list dimension-values`, `dbt sl list saved-queries`, `dbt sl query`, `dbt sl list dimensions`, `dbt sl list entities`, and `dbt sl validate`.
- **New**: Microsoft Excel, a dbt Semantic Layer integration, is now generally available. The integration allows you to connect to Microsoft Excel to query metrics and collaborate with your team. Available for [Excel Desktop](https://pages.store.office.com/addinsinstallpage.aspx?assetid=WA200007100&rs=en-US&correlationId=4132ecd1-425d-982d-efb4-de94ebc83f26) or [Excel Online](https://pages.store.office.com/addinsinstallpage.aspx?assetid=WA200007100&rs=en-US&correlationid=4132ecd1-425d-982d-efb4-de94ebc83f26&isWac=True). For more information, refer to [Microsoft Excel](/docs/cloud-integrations/semantic-layer/excel).
- **New**: [Data health tile](/docs/collaborate/data-tile) is now generally available in dbt Explorer. Data health tiles provide a quick at-a-glance view of your data quality, highlighting potential issues in your data. You can embed these tiles in your dashboards to quickly identify and address data quality issues in your dbt project.
diff --git a/website/docs/guides/adapter-creation.md b/website/docs/guides/adapter-creation.md
index 066d27a7aaa..278e2a9fe14 100644
--- a/website/docs/guides/adapter-creation.md
+++ b/website/docs/guides/adapter-creation.md
@@ -558,7 +558,7 @@ See [this GitHub discussion](https://github.com/dbt-labs/dbt-core/discussions/54
### Behavior change flags
-Starting in `dbt-adapters==1.5.0` and `dbt-core==1.8.7`, adapter maintainers can implement their own behavior change flags. Refer to [Behavior changes](https://docs.getdbt.com/reference/global-configs/behavior-changes)for more information.
+Starting in `dbt-adapters==1.5.0` and `dbt-core==1.8.7`, adapter maintainers can implement their own behavior change flags. Refer to [Behavior changes](https://docs.getdbt.com/reference/global-configs/behavior-changes) for more information.
Behavior Flags are not intended to be long-living feature flags. They should be implemented with the expectation that the behavior will be the default within an expected period of time. To implement a behavior change flag, you must provide a name for the flag, a default setting (`True` / `False`), an optional source, and a description and/or a link to the flag's documentation on docs.getdbt.com.
diff --git a/website/docs/reference/model-properties.md b/website/docs/reference/model-properties.md
index 7576fc350f8..9ec0c667360 100644
--- a/website/docs/reference/model-properties.md
+++ b/website/docs/reference/model-properties.md
@@ -2,9 +2,9 @@
title: Model properties
---
-Models properties can be declared in `.yml` files in your `models/` directory (as defined by the [`model-paths` config](/reference/project-configs/model-paths)).
+Model properties can be declared in `.yml` files in your `models/` directory (as defined by the [`model-paths` config](/reference/project-configs/model-paths)).
-You can name these files `whatever_you_want.yml`, and nest them arbitrarily deeply in subfolders within the `models/` directory. The [MetricFlow time spine](/docs/build/metricflow-time-spine) is a model property that tells dbt and MetricFlow how to use specific columns by defining their properties.
+You can name these files `whatever_you_want.yml`, and nest them arbitrarily deeply in subfolders within the `models/` directory.
@@ -38,9 +38,15 @@ models:
-
- ... # declare additional data tests
[tags](/reference/resource-configs/tags): []
+
+ # only required in conjunction with time_spine key
+ granularity: <[any supported time granularity](/docs/build/dimensions?dimension=time_gran)>
- name: ... # declare properties of additional columns
+ [time_spine](/docs/build/metricflow-time-spine):
+      standard_granularity_column: <column_name>
+
[versions](/reference/resource-properties/versions):
- [v](/reference/resource-properties/versions#v): # required
[defined_in](/reference/resource-properties/versions#defined-in):
diff --git a/website/docs/reference/resource-properties/constraints.md b/website/docs/reference/resource-properties/constraints.md
index ff52a1fbcf4..948fe223d68 100644
--- a/website/docs/reference/resource-properties/constraints.md
+++ b/website/docs/reference/resource-properties/constraints.md
@@ -21,16 +21,61 @@ The structure of a constraint is:
- `type` (required): one of `not_null`, `unique`, `primary_key`, `foreign_key`, `check`, `custom`
- `expression`: Free text input to qualify the constraint. Required for certain constraint types, and optional for others.
- `name` (optional): Human-friendly name for this constraint. Supported by some data platforms.
-- `columns` (model-level only): List of column names to apply the constraint over
+- `columns` (model-level only): List of column names to apply the constraint over.
-
+
+
+Foreign key constraints accept two additional inputs:
+- `to`: A relation input, likely `ref()`, indicating the referenced table.
+- `to_columns`: A list of column(s) in that table containing the corresponding primary or unique key.
-When using `foreign_key`, you need to specify the referenced table's schema manually. Use `{{ target.schema }}` in the `expression` field to automatically pass the schema used by the target environment. Note that later versions of dbt will have more efficient ways of handling this.
+This syntax for defining foreign keys uses `ref()`, meaning it captures dependencies and works across different environments. It's available in [dbt Cloud Versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) and versions of dbt Core starting with v1.9.
-For example: `expression: "{{ target.schema }}.customers(customer_id)"`
+
+
+```yml
+models:
+  - name: <model_name>
+
+ # required
+ config:
+ contract: {enforced: true}
+
+ # model-level constraints
+ constraints:
+ - type: primary_key
+ columns: [first_column, second_column, ...]
+ - type: foreign_key # multi_column
+ columns: [first_column, second_column, ...]
+ to: "{{ ref('other_model_name') }}"
+        to_columns: [other_model_first_column, other_model_second_column, ...]
+ - type: check
+ columns: [first_column, second_column, ...]
+ expression: "first_column != second_column"
+ name: human_friendly_name
+ - type: ...
+
+ columns:
+ - name: first_column
+ data_type: string
+
+ # column-level constraints
+ constraints:
+ - type: not_null
+ - type: unique
+ - type: foreign_key
+ to: "{{ ref('other_model_name') }}"
+            to_columns: [other_model_column]
+ - type: ...
+```
+
+
+
+In older versions of dbt Core, when defining a `foreign_key` constraint, you need to manually specify the referenced table in the `expression` field. You can use `{{ target }}` variables to make this expression environment-aware, but the dependency between this model and the referenced table is not captured. Starting in dbt Core v1.9, you can specify the referenced table using the `ref()` function.
+
```yml
@@ -39,44 +84,43 @@ models:
# required
config:
- contract:
- enforced: true
+ contract: {enforced: true}
# model-level constraints
constraints:
- type: primary_key
- columns: [FIRST_COLUMN, SECOND_COLUMN, ...]
- - type: FOREIGN_KEY # multi_column
- columns: [FIRST_COLUMN, SECOND_COLUMN, ...]
- expression: "OTHER_MODEL_SCHEMA.OTHER_MODEL_NAME (OTHER_MODEL_FIRST_COLUMN, OTHER_MODEL_SECOND_COLUMN, ...)"
+ columns: [first_column, second_column, ...]
+ - type: foreign_key # multi_column
+ columns: [first_column, second_column, ...]
+ expression: "{{ target.schema }}.other_model_name (other_model_first_column, other_model_second_column, ...)"
- type: check
- columns: [FIRST_COLUMN, SECOND_COLUMN, ...]
- expression: "FIRST_COLUMN != SECOND_COLUMN"
- name: HUMAN_FRIENDLY_NAME
+ columns: [first_column, second_column, ...]
+ expression: "first_column != second_column"
+ name: human_friendly_name
- type: ...
columns:
- - name: FIRST_COLUMN
- data_type: DATA_TYPE
+ - name: first_column
+ data_type: string
# column-level constraints
constraints:
- type: not_null
- type: unique
- type: foreign_key
- expression: OTHER_MODEL_SCHEMA.OTHER_MODEL_NAME (OTHER_MODEL_COLUMN)
+ expression: "{{ target.schema }}.other_model_name (other_model_column)"
- type: ...
```
-
+
## Platform-specific support
In transactional databases, it is possible to define "constraints" on the allowed values of certain columns, stricter than just the data type of those values. For example, Postgres supports and enforces all the constraints in the ANSI SQL standard (`not null`, `unique`, `primary key`, `foreign key`), plus a flexible row-level `check` constraint that evaluates to a boolean expression.
-Most analytical data platforms support and enforce a `not null` constraint, but they either do not support or do not enforce the rest. It is sometimes still desirable to add an "informational" constraint, knowing it is _not_ enforced, for the purpose of integrating with legacy data catalog or entity-relation diagram tools ([dbt-core#3295](https://github.com/dbt-labs/dbt-core/issues/3295)).
+Most analytical data platforms support and enforce a `not null` constraint, but they either do not support or do not enforce the rest. It is sometimes still desirable to add an "informational" constraint, knowing it is _not_ enforced, for the purpose of integrating with legacy data catalog or entity-relation diagram tools ([dbt-core#3295](https://github.com/dbt-labs/dbt-core/issues/3295)). Some data platforms can optionally use primary or foreign key constraints for query optimization if you specify an additional keyword.
To that end, there are two optional fields you can specify on any constraint:
- `warn_unenforced: False` to skip warning on constraints that are supported, but not enforced, by this data platform. The constraint will be included in templated DDL.
@@ -244,7 +288,7 @@ select
Snowflake supports four types of constraints: `unique`, `not null`, `primary key`, and `foreign key`.
It is important to note that only the `not null` (and the `not null` property of `primary key`) are actually checked at present.
-The rest of the constraints are purely metadata, not verified when inserting data.
+The rest of the constraints are purely metadata, not verified when inserting data. Although Snowflake does not validate `unique`, `primary`, or `foreign_key` constraints, you may optionally instruct Snowflake to use them for query optimization by specifying [`rely`](https://docs.snowflake.com/en/user-guide/join-elimination) in the constraint `expression` field.
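+
+As a hedged sketch (the model and column names are illustrative, and this assumes your platform accepts `rely` appended through the `expression` field), a column-level foreign key with `rely` might look like:
+
+```yml
+models:
+  - name: orders
+    config:
+      contract: {enforced: true}
+    columns:
+      - name: customer_id
+        data_type: integer
+        constraints:
+          - type: foreign_key
+            expression: "{{ target.schema }}.customers (customer_id) rely"
+```
+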
Currently, Snowflake doesn't support the `check` syntax and dbt will skip the `check` config and raise a warning message if it is set on some models in the dbt project.
diff --git a/website/static/img/docs/collaborate/dbt-explorer/example-keyword-search.png b/website/static/img/docs/collaborate/dbt-explorer/example-keyword-search.png
index 1e98008f46d..de32348b4b0 100644
Binary files a/website/static/img/docs/collaborate/dbt-explorer/example-keyword-search.png and b/website/static/img/docs/collaborate/dbt-explorer/example-keyword-search.png differ
diff --git a/website/static/img/docs/collaborate/dbt-explorer/trust-signal-caution.png b/website/static/img/docs/collaborate/dbt-explorer/trust-signal-caution.png
new file mode 100644
index 00000000000..0842bd25ae2
Binary files /dev/null and b/website/static/img/docs/collaborate/dbt-explorer/trust-signal-caution.png differ
diff --git a/website/static/img/docs/collaborate/dbt-explorer/trust-signal-health.jpg b/website/static/img/docs/collaborate/dbt-explorer/trust-signal-health.jpg
new file mode 100644
index 00000000000..3630a095245
Binary files /dev/null and b/website/static/img/docs/collaborate/dbt-explorer/trust-signal-health.jpg differ
diff --git a/website/static/img/docs/collaborate/dbt-explorer/trust-signal-healthy.png b/website/static/img/docs/collaborate/dbt-explorer/trust-signal-healthy.png
new file mode 100644
index 00000000000..2de1cf99cf2
Binary files /dev/null and b/website/static/img/docs/collaborate/dbt-explorer/trust-signal-healthy.png differ
diff --git a/website/static/img/docs/collaborate/dbt-explorer/trust-signal-unknown.png b/website/static/img/docs/collaborate/dbt-explorer/trust-signal-unknown.png
new file mode 100644
index 00000000000..9f2636e5087
Binary files /dev/null and b/website/static/img/docs/collaborate/dbt-explorer/trust-signal-unknown.png differ
diff --git a/website/static/img/docs/collaborate/dbt-explorer/trust-signals-degraded.jpg b/website/static/img/docs/collaborate/dbt-explorer/trust-signals-degraded.jpg
new file mode 100644
index 00000000000..30aa51d68ef
Binary files /dev/null and b/website/static/img/docs/collaborate/dbt-explorer/trust-signals-degraded.jpg differ