Merge branch 'current' into partner_integration_guide
amychen1776 authored Dec 19, 2023
2 parents 326af16 + 1b3f543 commit 5733e92
Showing 15 changed files with 244 additions and 98 deletions.
4 changes: 4 additions & 0 deletions website/dbt-versions.js
@@ -177,6 +177,10 @@ exports.versionedPages = [
{
"page": "docs/build/saved-queries",
"firstVersion": "1.7",
},
{
"page": "reference/resource-configs/on_configuration_change",
"firstVersion": "1.6",
}
]

4 changes: 2 additions & 2 deletions website/docs/docs/build/jinja-macros.md
@@ -71,8 +71,8 @@ group by 1

You can recognize Jinja based on the delimiters the language uses, which we refer to as "curlies":
- **Expressions `{{ ... }}`**: Expressions are used when you want to output a string. You can use expressions to reference [variables](/reference/dbt-jinja-functions/var) and call [macros](/docs/build/jinja-macros#macros).
- **Statements `{% ... %}`**: Statements are used for control flow, for example, to set up `for` loops and `if` statements, or to define macros.
- **Comments `{# ... #}`**: Jinja comments are used to prevent the text within the comment from compiling.
- **Statements `{% ... %}`**: Statements don't output a string. They are used for control flow, for example, to set up `for` loops and `if` statements, to [set](https://jinja.palletsprojects.com/en/3.1.x/templates/#assignments) or [modify](https://jinja.palletsprojects.com/en/3.1.x/templates/#expression-statement) variables, or to define macros.
- **Comments `{# ... #}`**: Jinja comments are used to prevent the text within the comment from executing or outputting a string.
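
For example, a minimal model sketch that uses all three delimiter types (the `raw_payments` relation and its columns are hypothetical):

```sql
{# This comment compiles to nothing -- it never reaches the rendered SQL #}

{%- set payment_methods = ['credit_card', 'bank_transfer'] -%}  {# a statement: sets a variable, outputs nothing #}

select
    order_id,
    {%- for method in payment_methods %}
    sum(case when payment_method = '{{ method }}' then amount end) as {{ method }}_amount
    {%- if not loop.last %},{%- endif %}
    {%- endfor %}
from {{ ref('raw_payments') }}
group by 1
```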

When used in a dbt model, your Jinja needs to compile to a valid query. To check what SQL your Jinja compiles to:
* **Using dbt Cloud:** Click the compile button to see the compiled SQL in the Compiled SQL pane
47 changes: 25 additions & 22 deletions website/docs/docs/build/materializations.md
@@ -109,27 +109,7 @@ When using the `table` materialization, your model is rebuilt as a <Term id="tab

### Materialized View

The `materialized view` materialization allows the creation and maintenance of materialized views
in the target database. This materialization makes use of the `on_configuration_change` config, which
aligns with the incremental nature of the namesake database object. This setting tells dbt to attempt to
make configuration changes directly to the object when possible, as opposed to completely recreating
the object to implement the updated configuration. Using `dbt-postgres` as an example, indexes can
be dropped and created on the materialized view without the need to recreate the materialized view itself.

The `on_configuration_change` config has three settings:
- `apply` (default) &mdash; attempt to update the existing database object if possible, avoiding a complete rebuild
- *Note:* if any individual configuration change requires a full refresh, a full refresh will be performed in lieu of individual alter statements
- `continue` &mdash; allow runs to continue while also providing a warning that the object was left untouched
- *Note:* this could result in downstream failures as those models may depend on these unimplemented changes
- `fail` &mdash; force the entire run to fail if a change is detected

Materialized views are implemented following this "drop through" life cycle:
1. If an object does not exist, create a materialized view
2. If an object exists, other than a materialized view, that object is dropped and replaced with a materialized view
3. If `--full-refresh` is supplied, replace the materialized view regardless of changes and the `on_configuration_change` setting
4. If there are no configuration changes, refresh the materialized view
5. At this point there are configuration changes, proceed according to the `on_configuration_change` setting

The `materialized view` materialization allows the creation and maintenance of materialized views in the target database.
Materialized views are a combination of a view and a table, and serve use cases similar to incremental models.

* **Pros:**
@@ -145,7 +125,30 @@ less configuration options available, see your database platform's docs for more
* **Advice:**
* Consider materialized views for use cases where incremental models are sufficient, but you would like the data platform to manage the incremental logic and refresh.

**Note:** `dbt-snowflake` _does not_ support materialized views, it uses Dynamic Tables instead. For details, refer to [Snowflake specific configurations](/reference/resource-configs/snowflake-configs#dynamic-tables).
#### Configuration Change Monitoring

This materialization makes use of the [`on_configuration_change`](/reference/resource-configs/on_configuration_change)
config, which aligns with the incremental nature of the namesake database object. This setting tells dbt to attempt to
make configuration changes directly to the object when possible, as opposed to completely recreating
the object to implement the updated configuration. Using `dbt-postgres` as an example, indexes can
be dropped and created on the materialized view without the need to recreate the materialized view itself.
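
For instance, a minimal sketch of setting this config from `dbt_project.yml` (the project and folder names are hypothetical):

```yaml
# dbt_project.yml
models:
  my_project:          # hypothetical project name
    reporting:         # hypothetical folder of materialized view models
      +materialized: materialized_view
      +on_configuration_change: apply   # or: continue | fail
```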

#### Scheduled Refreshes

In the context of a `dbt run` command, materialized views should be thought of as similar to views.
For example, a `dbt run` command is only needed if there is the potential for a change in configuration or SQL;
it's effectively a deploy action.
By contrast, a `dbt run` command is needed for a table in the same scenarios *and* when the data in the table needs to be updated.
This also holds true for incremental and snapshot models, whose underlying relations are tables.
In those cases, the scheduling mechanism is either dbt Cloud or your local scheduler;
there is no built-in functionality to automatically refresh the data behind a table.
However, most platforms (Postgres excluded) provide functionality to automatically refresh a materialized view.
As a result, materialized views work much like incremental models, with the benefit of not needing to run dbt to refresh the data.
This assumes, of course, that auto-refresh is turned on and configured in the model.
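
For example, with `dbt-bigquery`'s auto-refresh parameters, a model could opt in to platform-managed refreshes (the model name and values are illustrative):

```yaml
# models/marts/properties.yml -- hypothetical file and model name
models:
  - name: my_materialized_view
    config:
      materialized: materialized_view
      on_configuration_change: apply
      enable_refresh: true
      refresh_interval_minutes: 30
```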

:::info
`dbt-snowflake` _does not_ support materialized views; it uses Dynamic Tables instead. For details, refer to [Snowflake specific configurations](/reference/resource-configs/snowflake-configs#dynamic-tables).
:::

## Python materializations

2 changes: 1 addition & 1 deletion website/docs/docs/build/metrics-overview.md
@@ -50,7 +50,7 @@ This page explains the different supported metric types you can add to your dbt
### Cumulative metrics
[Cumulative metrics](/docs/build/cumulative) aggregate a measure over a given window. If no window is specified, the window would accumulate the measure over all time. **Note**m, you will need to create the [time spine model](/docs/build/metricflow-time-spine) before you add cumulative metrics.
[Cumulative metrics](/docs/build/cumulative) aggregate a measure over a given window. If no window is specified, the window would accumulate the measure over all time. **Note**, you will need to create the [time spine model](/docs/build/metricflow-time-spine) before you add cumulative metrics.
```yaml
# Cumulative metrics aggregate a measure over a given window. The window is considered infinite if no window parameter is passed (accumulate the measure over all time)
@@ -44,7 +44,7 @@ To improve your experience using dbt Cloud, we suggest that you turn off ad bloc

## dbt Cloud IDE features

The dbt Cloud IDE comes with [features](/docs/cloud/dbt-cloud-ide/ide-user-interface) that make it easier for you to develop, build, compile, run, and test data models.
The dbt Cloud IDE comes with features that make it easier for you to develop, build, compile, run, and test data models.

To understand how to navigate the IDE and its user interface elements, refer to the [IDE user interface](/docs/cloud/dbt-cloud-ide/ide-user-interface) page.

30 changes: 20 additions & 10 deletions website/docs/docs/dbt-cloud-apis/sl-jdbc.md
@@ -165,17 +165,17 @@ select * from {{
## Querying the API for metric values
To query metric values, here are the following parameters that are available:
To query metric values, the following parameters are available. Your query must include _either_ a `metrics` **or** a `group_by` parameter to be valid.
| Parameter | Description | Example | Type |
| --------- | -----------| ------------ | -------------------- |
| `metrics` | The metric name as defined in your dbt metric configuration | `metrics=['revenue']` | Required |
| `group_by` | Dimension names or entities to group by. We require a reference to the entity of the dimension (other than for the primary time dimension), which is pre-appended to the front of the dimension name with a double underscore. | `group_by=['user__country', 'metric_time']` | Optional |
| `grain` | A parameter specific to any time dimension and changes the grain of the data from the default for the metric. | `group_by=[Dimension('metric_time')` <br/> `grain('week\|day\|month\|quarter\|year')]` | Optional |
| `where` | A where clause that allows you to filter on dimensions and entities using parameters. This takes a filter list OR string. Inputs come with `Dimension`, and `Entity` objects. Granularity is required if the `Dimension` is a time dimension | `"{{ where=Dimension('customer__country') }} = 'US')"` | Optional |
| `limit` | Limit the data returned | `limit=10` | Optional |
|`order` | Order the data returned by a particular field | `order_by=['order_gross_profit']`, use `-` for descending, or full object notation if the object is operated on: `order_by=[Metric('order_gross_profit').descending(True)`] | Optional |
| `compile` | If true, returns generated SQL for the data platform but does not execute | `compile=True` | Optional |
| Parameter | Description | Example |
| --------- | -----------| ------------ |
| `metrics` | The metric name as defined in your dbt metric configuration | `metrics=['revenue']` |
| `group_by` | Dimension names or entities to group by. We require a reference to the entity of the dimension (other than for the primary time dimension), which is prepended to the dimension name with a double underscore. | `group_by=['user__country', 'metric_time']` |
| `grain` | A parameter specific to any time dimension and changes the grain of the data from the default for the metric. | `group_by=[Dimension('metric_time')` <br/> `grain('week\|day\|month\|quarter\|year')]` |
| `where` | A where clause that allows you to filter on dimensions and entities using parameters. This takes a filter list OR string. Inputs come with `Dimension`, and `Entity` objects. Granularity is required if the `Dimension` is a time dimension | `"{{ where=Dimension('customer__country') }} = 'US')"` |
| `limit` | Limit the data returned | `limit=10` |
| `order` | Order the data returned by a particular field | `order_by=['order_gross_profit']`, use `-` for descending, or full object notation if the object is operated on: `order_by=[Metric('order_gross_profit').descending(True)]` |
| `compile` | If true, returns generated SQL for the data platform but does not execute | `compile=True` |
@@ -248,6 +248,16 @@ select * from {{
}}
```
### Query only a dimension
In this case, you'll get the full list of dimension values for the chosen dimension.
```bash
select * from {{
semantic_layer.query(group_by=['customer__customer_type'])
}}
```
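
Several of these parameters can also be combined in a single query, for example (the metric name `food_order_amount` is illustrative):

```bash
select * from {{
    semantic_layer.query(metrics=['food_order_amount'],
                         group_by=['metric_time', 'customer__customer_type'],
                         order_by=['metric_time'],
                         limit=10)
    }}
```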
### Query with where filters
Where filters in the API allow for a filter list or string. We recommend using the filter list for production applications as this format will realize all benefits from the <Term id="predicate-pushdown" /> where possible.
11 changes: 7 additions & 4 deletions website/docs/guides/sl-partner-integration-guide.md
@@ -39,11 +39,14 @@ We recommend you provide users with separate input fields with these components

### Exposing metadata to dbt Labs

When building an integration, we recommend you expose certain metadata in the request for analytics purposes. Among other items, it is helpful to have the following:
When building an integration, we recommend you expose certain metadata in the request for analytics and troubleshooting purposes.

Please send us the following header with every query:

`'X-dbt-partner-source': 'Your-Application-Name'`

Additionally, it would be helpful if you also included the email and username of the person generating the query from your application.
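
A rough sketch of what that looks like on the wire; the host, auth scheme, and token below are placeholders, not dbt's actual values:

```bash
# Illustrative only: attach the partner-source header to every query your integration sends.
# Host, auth scheme, and token are placeholders -- follow dbt's API docs for the real values.
curl "https://<your-semantic-layer-host>/api/graphql" \
  -H "X-dbt-partner-source: Your-Application-Name" \
  -H "Authorization: Bearer <service-token>" \
  -H "Content-Type: application/json" \
  -d '{"query": "..."}'
```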

- Your application's name (such as 'Tableau')
- The email of the person querying your application
- The version of dbt they are on.


## Use best practices when exposing metrics
2 changes: 1 addition & 1 deletion website/docs/reference/commands/init.md
@@ -19,7 +19,7 @@ Then, it will:

<VersionBlock firstVersion="1.7">

When using `dbt init` to initialize your project, include the `--profile` flag to specify an existing `profiles.yml` as the `profile:` key to use instead of creating a new one. For example, `dbt init --profile`.
When using `dbt init` to initialize your project, include the `--profile` flag to specify an existing `profiles.yml` as the `profile:` key to use instead of creating a new one. For example, `dbt init --profile profile_name`.
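
A quick sketch, assuming a profile named `jaffle_shop` already exists in `profiles.yml`:

```bash
# Scaffold a new project that reuses the existing "jaffle_shop" profile
# instead of prompting to create a new one
dbt init --profile jaffle_shop
```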



34 changes: 17 additions & 17 deletions website/docs/reference/resource-configs/bigquery-configs.md
@@ -725,18 +725,18 @@ The `grant_access_to` config is not thread-safe when multiple views need to be a
The BigQuery adapter supports [materialized views](https://cloud.google.com/bigquery/docs/materialized-views-intro)
with the following configuration parameters:

| Parameter | Type | Required | Default | Change Monitoring Support |
|-------------------------------------------------------------|------------------------|----------|---------|---------------------------|
| `on_configuration_change` | `<string>` | no | `apply` | n/a |
| [`cluster_by`](#clustering-clause) | `[<string>]` | no | `none` | drop/create |
| [`partition_by`](#partition-clause) | `{<dictionary>}` | no | `none` | drop/create |
| [`enable_refresh`](#auto-refresh) | `<boolean>` | no | `true` | alter |
| [`refresh_interval_minutes`](#auto-refresh) | `<float>` | no | `30` | alter |
| [`max_staleness`](#auto-refresh) (in Preview) | `<interval>` | no | `none` | alter |
| [`description`](/reference/resource-properties/description) | `<string>` | no | `none` | alter |
| [`labels`](#specifying-labels) | `{<string>: <string>}` | no | `none` | alter |
| [`hours_to_expiration`](#controlling-table-expiration) | `<integer>` | no | `none` | alter |
| [`kms_key_name`](#using-kms-encryption) | `<string>` | no | `none` | alter |
| Parameter | Type | Required | Default | Change Monitoring Support |
|----------------------------------------------------------------------------------|------------------------|----------|---------|---------------------------|
| [`on_configuration_change`](/reference/resource-configs/on_configuration_change) | `<string>` | no | `apply` | n/a |
| [`cluster_by`](#clustering-clause) | `[<string>]` | no | `none` | drop/create |
| [`partition_by`](#partition-clause) | `{<dictionary>}` | no | `none` | drop/create |
| [`enable_refresh`](#auto-refresh) | `<boolean>` | no | `true` | alter |
| [`refresh_interval_minutes`](#auto-refresh) | `<float>` | no | `30` | alter |
| [`max_staleness`](#auto-refresh) (in Preview) | `<interval>` | no | `none` | alter |
| [`description`](/reference/resource-properties/description) | `<string>` | no | `none` | alter |
| [`labels`](#specifying-labels) | `{<string>: <string>}` | no | `none` | alter |
| [`hours_to_expiration`](#controlling-table-expiration) | `<integer>` | no | `none` | alter |
| [`kms_key_name`](#using-kms-encryption) | `<string>` | no | `none` | alter |

<Tabs
groupId="config-languages"
@@ -757,7 +757,7 @@ with the following configuration parameters:
models:
[<resource-path>](/reference/resource-configs/resource-path):
[+](/reference/resource-configs/plus-prefix)[materialized](/reference/resource-configs/materialized): materialized_view
[+](/reference/resource-configs/plus-prefix)on_configuration_change: apply | continue | fail
[+](/reference/resource-configs/plus-prefix)[on_configuration_change](/reference/resource-configs/on_configuration_change): apply | continue | fail
[+](/reference/resource-configs/plus-prefix)[cluster_by](#clustering-clause): <field-name> | [<field-name>]
[+](/reference/resource-configs/plus-prefix)[partition_by](#partition-clause):
- field: <field-name>
@@ -794,7 +794,7 @@ models:
- name: [<model-name>]
config:
[materialized](/reference/resource-configs/materialized): materialized_view
on_configuration_change: apply | continue | fail
[on_configuration_change](/reference/resource-configs/on_configuration_change): apply | continue | fail
[cluster_by](#clustering-clause): <field-name> | [<field-name>]
[partition_by](#partition-clause):
- field: <field-name>
@@ -827,7 +827,7 @@ models:
```jinja
{{ config(
[materialized](/reference/resource-configs/materialized)='materialized_view',
on_configuration_change="apply" | "continue" | "fail",
[on_configuration_change](/reference/resource-configs/on_configuration_change)="apply" | "continue" | "fail",
[cluster_by](#clustering-clause)="<field-name>" | ["<field-name>"],
[partition_by](#partition-clause)={
"field": "<field-name>",
@@ -868,7 +868,7 @@ models:
Many of these parameters correspond to their table counterparts and have been linked above.
The set of parameters unique to materialized views covers [auto-refresh functionality](#auto-refresh).

Find more information about these parameters in the BigQuery docs:
Learn more about these parameters in BigQuery's docs:
- [CREATE MATERIALIZED VIEW statement](https://cloud.google.com/bigquery/docs/reference/standard-sql/data-definition-language#create_materialized_view_statement)
- [materialized_view_option_list](https://cloud.google.com/bigquery/docs/reference/standard-sql/data-definition-language#materialized_view_option_list)

@@ -886,7 +886,7 @@ BigQuery only officially supports the configuration of the frequency (the "once
however, there is a feature in preview that allows for the configuration of the staleness (the "5 minutes" refresh).
dbt will monitor these parameters for changes and apply them using an `ALTER` statement.
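
A brief sketch of tuning auto-refresh in a model's config block (the values and the `orders` ref are illustrative):

```sql
{{ config(
    materialized='materialized_view',
    enable_refresh=True,
    refresh_interval_minutes=60
) }}

-- hypothetical upstream model
select order_date, sum(amount) as total_amount
from {{ ref('orders') }}
group by 1
```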

Find more information about these parameters in the BigQuery docs:
Learn more about these parameters in BigQuery's docs:
- [materialized_view_option_list](https://cloud.google.com/bigquery/docs/reference/standard-sql/data-definition-language#materialized_view_option_list)
- [max_staleness](https://cloud.google.com/bigquery/docs/materialized-views-create#max_staleness)
