Commit aba8b9f

This branch was auto-updated!

github-actions[bot] authored Oct 1, 2024
2 parents e8ace06 + 2d9838c
Showing 19 changed files with 256 additions and 139 deletions.
3 changes: 0 additions & 3 deletions website/docs/docs/build/data-tests.md
@@ -70,8 +70,6 @@ The name of this test is the name of the file: `assert_total_payment_amount_is_p

Singular data tests are easy to write—so easy that you may find yourself writing the same basic structure over and over, only changing the name of a column or model. By that point, the test isn't so singular! In that case, we recommend...



## Generic data tests
Certain data tests are generic: they can be reused over and over again. A generic data test is defined in a `test` block, which contains a parametrized query and accepts arguments. It might look like:
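A minimal sketch of such a block, modeled on dbt's built-in `not_null` test (the file path and macro body here are illustrative):

```sql
-- tests/generic/not_null.sql (illustrative)
{% test not_null(model, column_name) %}

-- Return every row where the given column is null;
-- the test passes when this query returns zero rows.
select *
from {{ model }}
where {{ column_name }} is null

{% endtest %}
```

Because `model` and `column_name` are parameters, the same block can be applied to any column of any model from a properties file.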

@@ -304,7 +302,6 @@ data_tests:

</File>

To suppress warnings about the rename, add `TestsConfigDeprecation` to the `silence` block of the `warn_error_options` flag in `dbt_project.yml`, [as described in the Warnings documentation](https://docs.getdbt.com/reference/global-configs/warnings).
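A sketch of that configuration (other project settings omitted; flag placement follows the warnings documentation):

```yaml
# dbt_project.yml -- a sketch; the rest of the project config is omitted
flags:
  warn_error_options:
    silence:
      - TestsConfigDeprecation
```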

</VersionBlock>

13 changes: 12 additions & 1 deletion website/docs/docs/build/documentation.md
@@ -101,7 +101,18 @@ The events in this table are recorded by [Snowplow](http://github.com/snowplow/s
In the above example, a docs block named `table_events` is defined with some descriptive markdown contents. There is nothing significant about the name `table_events` — docs blocks can be named however you like, as long as the name only contains alphanumeric and underscore characters and does not start with a numeric character.
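For instance, another block that follows those naming rules might look like this (the name and contents are illustrative):

```md
{% docs orders_summary %}

Daily counts of orders by status, sourced from the orders fact table.

{% enddocs %}
```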

### Placement
Docs blocks should be placed in files with a `.md` file extension. By default, dbt will search in all resource paths for docs blocks (i.e. the combined list of [model-paths](/reference/project-configs/model-paths), [seed-paths](/reference/project-configs/seed-paths), [analysis-paths](/reference/project-configs/analysis-paths), [macro-paths](/reference/project-configs/macro-paths) and [snapshot-paths](/reference/project-configs/snapshot-paths)) — you can adjust this behavior using the [docs-paths](/reference/project-configs/docs-paths) config.

<VersionBlock firstVersion="1.9">

Docs blocks should be placed in files with a `.md` file extension. By default, dbt will search in all resource paths for docs blocks (for example, the combined list of [model-paths](/reference/project-configs/model-paths), [seed-paths](/reference/project-configs/seed-paths), [analysis-paths](/reference/project-configs/analysis-paths), [test-paths](/reference/project-configs/test-paths), [macro-paths](/reference/project-configs/macro-paths), and [snapshot-paths](/reference/project-configs/snapshot-paths)) &mdash; you can adjust this behavior using the [docs-paths](/reference/project-configs/docs-paths) config.

</VersionBlock>

<VersionBlock lastVersion="1.8">

Docs blocks should be placed in files with a `.md` file extension. By default, dbt will search in all resource paths for docs blocks (for example, the combined list of [model-paths](/reference/project-configs/model-paths), [seed-paths](/reference/project-configs/seed-paths), [analysis-paths](/reference/project-configs/analysis-paths), [macro-paths](/reference/project-configs/macro-paths), and [snapshot-paths](/reference/project-configs/snapshot-paths)) &mdash; you can adjust this behavior using the [docs-paths](/reference/project-configs/docs-paths) config.

</VersionBlock>
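For example, to limit the search to a dedicated folder, a minimal `docs-paths` override might look like this (the folder name is illustrative):

```yml
# dbt_project.yml -- search only the docs/ folder for docs blocks
docs-paths: ["docs"]
```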


### Usage
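A sketch of referencing a docs block from a properties file with the `doc` function (the model and block names are illustrative):

```yml
# models/schema.yml -- pull a docs block into a description
version: 2

models:
  - name: events
    description: '{{ doc("table_events") }}'
```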
36 changes: 13 additions & 23 deletions website/docs/docs/build/metricflow-commands.md
@@ -59,7 +59,6 @@ The following table lists the commands compatible with the dbt Cloud IDE and dbt

| <div style={{width:'250px'}}>Command</div> | <div style={{width:'100px'}}>Description</div> | dbt Cloud IDE | dbt Cloud CLI |
|---------|-------------|---------------|---------------|
| [`list`](#list) | Retrieves metadata values. |||
| [`list metrics`](#list-metrics) | Lists metrics with dimensions. |||
| [`list dimensions`](#list) | Lists unique dimensions for metrics. |||
| [`list dimension-values`](#list-dimension-values) | List dimensions with metrics. |||
@@ -94,7 +93,6 @@ Check out the following video for a short demo of how to query or preview

Use the `mf` prefix before the command name to execute them in dbt Core. For example, to list all metrics, run `mf list metrics`.

- [`list`](#list) &mdash; Retrieves metadata values.
- [`list metrics`](#list-metrics) &mdash; Lists metrics with dimensions.
- [`list dimensions`](#list) &mdash; Lists unique dimensions for metrics.
- [`list dimension-values`](#list-dimension-values) &mdash; List dimensions with metrics.
@@ -107,17 +105,7 @@ Use the `mf` prefix before the command name to execute them in dbt Core. For exa
</TabItem>
</Tabs>

### List

This command retrieves metadata values related to [Metrics](/docs/build/metrics-overview), [Dimensions](/docs/build/dimensions), and [Entities](/docs/build/entities).


### List metrics

```bash
dbt sl list # In dbt Cloud
mf list # In dbt Core
```
This command lists the metrics with their available dimensions:

```bash
@@ -350,13 +338,13 @@ mf query --metrics order_total,users_active --group-by metric_time # In dbt Core
<TabItem value="eg2" label="Dimensions">
You can include multiple dimensions in a query. For example, you can group by the `is_food_order` dimension to confirm if orders were for food or not.
You can include multiple dimensions in a query. For example, you can group by the `is_food_order` dimension to confirm if orders were for food or not. Note that when you query a dimension, you need to specify the primary entity for that dimension. In the following example, the primary entity is `order_id`.
**Query**
```bash
dbt sl query --metrics order_total --group-by metric_time,is_food_order # In dbt Cloud
dbt sl query --metrics order_total --group-by order_id__is_food_order # In dbt Cloud
mf query --metrics order_total --group-by metric_time,is_food_order # In dbt Core
mf query --metrics order_total --group-by order_id__is_food_order # In dbt Core
```
**Result**
@@ -380,13 +368,15 @@ mf query --metrics order_total --group-by metric_time,is_food_order # In dbt Cor
You can add order and limit functions to filter and present the data in a readable format. The following query limits the data set to 10 records and orders them by `metric_time`, descending. Note that using the `-` prefix sorts the query in descending order; without the `-` prefix, the query is sorted in ascending order.
Note that when you query a dimension, you need to specify the primary entity for that dimension. In the following example, the primary entity is `order_id`.
**Query**
```bash
# In dbt Cloud
dbt sl query --metrics order_total --group-by metric_time,is_food_order --limit 10 --order-by -metric_time
dbt sl query --metrics order_total --group-by order_id__is_food_order --limit 10 --order-by -metric_time
# In dbt Core
mf query --metrics order_total --group-by metric_time,is_food_order --limit 10 --order-by -metric_time
mf query --metrics order_total --group-by order_id__is_food_order --limit 10 --order-by -metric_time
```
**Result**
@@ -406,15 +396,15 @@ mf query --metrics order_total --group-by metric_time,is_food_order --limit 10 -
<TabItem value="eg4" label="where clause">
You can further filter the data set by adding a `where` clause to your query. The following example shows you how to query the `order_total` metric, grouped by `metric_time` with multiple where statements (orders that are food orders and orders from the week starting on or after Feb 1st, 2024):
You can further filter the data set by adding a `where` clause to your query. The following example shows you how to query the `order_total` metric, grouped by `is_food_order` with multiple where statements (orders that are food orders and orders from the week starting on or after Feb 1st, 2024). Note that when you query a dimension, you need to specify the primary entity for that dimension. In the following example, the primary entity is `order_id`.
**Query**
```bash
# In dbt Cloud
dbt sl query --metrics order_total --group-by metric_time --where "{{ Dimension('order_id__is_food_order') }} = True and metric_time__week >= '2024-02-01'"
dbt sl query --metrics order_total --group-by order_id__is_food_order --where "{{ Dimension('order_id__is_food_order') }} = True and metric_time__week >= '2024-02-01'"
# In dbt Core
mf query --metrics order_total --group-by metric_time --where "{{ Dimension('order_id__is_food_order') }} = True and metric_time__week >= '2024-02-01'"
mf query --metrics order_total --group-by order_id__is_food_order --where "{{ Dimension('order_id__is_food_order') }} = True and metric_time__week >= '2024-02-01'"
```
**Result**
@@ -440,16 +430,16 @@ mf query --metrics order_total --group-by metric_time --where "{{ Dimension('ord
To filter by time, there are dedicated start and end time options. Using these options to filter by time allows MetricFlow to further optimize query performance by pushing down the where filter when appropriate.
Note that when you query a dimension, you need to specify the primary entity for that dimension. In the following example, the primary entity is `order_id`.
<!--
bash not supported in dbt Cloud yet
# In dbt Cloud
dbt sl query --metrics order_total --group-by metric_time,is_food_order --limit 10 --order-by -metric_time --where "is_food_order = True" --start-time '2017-08-22' --end-time '2017-08-27'
dbt sl query --metrics order_total --group-by order_id__is_food_order --limit 10 --order-by -metric_time --where "is_food_order = True" --start-time '2017-08-22' --end-time '2017-08-27'
-->
**Query**
```bash
# In dbt Core
mf query --metrics order_total --group-by metric_time,is_food_order --limit 10 --order-by -metric_time --where "is_food_order = True" --start-time '2017-08-22' --end-time '2017-08-27'
mf query --metrics order_total --group-by order_id__is_food_order --limit 10 --order-by -metric_time --where "is_food_order = True" --start-time '2017-08-22' --end-time '2017-08-27'
```
**Result**
10 changes: 7 additions & 3 deletions website/docs/docs/build/metricflow-time-spine.md
@@ -18,10 +18,14 @@ MetricFlow requires you to define a time-spine table as a model-level configurat
To see the generated SQL for the metric and dimension types that use time-spine joins, refer to the respective documentation or add the `compile=True` flag when querying the Semantic Layer to return the compiled SQL.

## Configuring time-spine in YAML

- The time spine is a special model that tells dbt and MetricFlow how to use specific columns by defining their properties.
- The [`models` key](/reference/model-properties) for the time spine must be in your `models/` directory.
- You only need to configure time-spine models that the Semantic Layer should recognize.
- At a minimum, define a time-spine table for a daily grain.
- You can optionally define a time-spine table for a different granularity, like hourly.
- Note that if you don’t have a date or calendar model in your project, you'll need to create one.

- If you're looking to specify the grain of a time dimension so that MetricFlow can transform the underlying column to the required granularity, refer to the [Time granularity documentation](/docs/build/dimensions?dimension=time_gran).

If you already have a date dimension or time-spine table in your dbt project, you can point MetricFlow to this table by updating the `model` configuration to use this table in the Semantic Layer. This is a model-level configuration that tells dbt to use the model for time range joins in the Semantic Layer.
@@ -40,7 +44,7 @@ If you don’t have a date dimension table, you can still create one by using th
<File name="models/_models.yml">

```yaml
models:
- name: time_spine_hourly
time_spine:
standard_granularity_column: date_hour # column for the standard grain of your table
@@ -56,7 +60,7 @@ models:
```
</File>
For an example project, refer to our [Jaffle shop](https://github.com/dbt-labs/jaffle-sl-template/blob/main/models/marts/_models.yml) example.
For an example project, refer to our [Jaffle shop](https://github.com/dbt-labs/jaffle-sl-template/blob/main/models/marts/_models.yml) example. Note that the [`models` key](/reference/model-properties) in the time spine configuration must be placed in your `models/` directory.

Now, break down the configuration above. It points to a model called `time_spine_hourly` and sets the time spine configurations under the `time_spine` key. The `standard_granularity_column` is the lowest grain of the table; in this case, it's hourly. It needs to reference a column defined under the `columns` key, in this case, `date_hour`. Use the `standard_granularity_column` as the join key for the time spine table when joining tables in MetricFlow. Here, the granularity of the `standard_granularity_column` is set at the column level, in this case, `hour`.
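For comparison, a daily-grain spine would follow the same shape (the model and column names are illustrative, assuming the same `time_spine` properties as above):

```yaml
models:
  - name: time_spine_daily
    time_spine:
      standard_granularity_column: date_day  # lowest grain of this table
    columns:
      - name: date_day
        granularity: day
```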

7 changes: 5 additions & 2 deletions website/docs/docs/cloud/about-cloud-develop-defer.md
@@ -51,7 +51,10 @@ The dbt Cloud CLI offers additional flexibility by letting you choose the source
<File name="dbt_cloud.yml">

```yml
defer-env-id: '123456'
context:
active-host: ...
active-project: ...
defer-env-id: '123456'
```
</File>
@@ -60,7 +63,7 @@ defer-env-id: '123456'
<File name="dbt_project.yml">
```yml
dbt_cloud:
dbt-cloud:
defer-env-id: '123456'
```
