diff --git a/website/docs/docs/build/conversion-metrics.md b/website/docs/docs/build/conversion-metrics.md
index 2238655fbe0..39b3d969b27 100644
--- a/website/docs/docs/build/conversion-metrics.md
+++ b/website/docs/docs/build/conversion-metrics.md
@@ -32,16 +32,20 @@ The specification for conversion metrics is as follows:
| `constant_properties` | List of constant properties. | List | Optional |
| `base_property` | The property from the base semantic model that you want to hold constant. | Entity or Dimension | Optional |
| `conversion_property` | The property from the conversion semantic model that you want to hold constant. | Entity or Dimension | Optional |
+| `fill_nulls_with` | Set the value in your metric definition instead of null (such as zero). | String | Optional |
+
+Refer to [additional settings](#additional-settings) to learn how to customize conversion metrics with settings for null values, calculation type, and constant properties.
The following code example displays the complete specification for conversion metrics and details how they're applied:
```yaml
metrics:
- name: The metric name # Required
- description: the metric description # Optional
+ description: The metric description # Optional
type: conversion # Required
label: # Required
type_params: # Required
+ fill_nulls_with: Set the value in your metric definition instead of null (such as zero) # Optional
conversion_type_params: # Required
entity: ENTITY # Required
calculation: CALCULATION_TYPE # Optional. default: conversion_rate. options: conversions(buys) or conversion_rate (buys/visits), and more to come.
@@ -89,6 +93,7 @@ Next, define a conversion metric as follows:
type: conversion
label: Visit to Buy Conversion Rate (7-day window)
type_params:
+ fill_nulls_with: 0
conversion_type_params:
base_measure: visits
conversion_measure: sellers
@@ -117,7 +122,7 @@ inner join (
select *, uuid_string() as uuid from buys -- Adds a uuid column to uniquely identify the different rows
) b
on
-v.user_id = b.user_id and v.ds <= b.ds and v.ds > b.ds - interval '7 day'
+v.user_id = b.user_id and v.ds <= b.ds and v.ds > b.ds - interval '7 days'
```
The dataset returns the following (note that there are two potential conversion events for the first visit):
@@ -147,7 +152,6 @@ inner join (
) b
on
v.user_id = b.user_id and v.ds <= b.ds and v.ds > b.ds - interval '7 day'
-
```
The dataset returns the following:
@@ -249,7 +253,7 @@ Use the following additional settings to customize your conversion metrics:
To return zero in the final data set, you can set the value of a null conversion event to zero instead of null. You can add the `fill_nulls_with` parameter to your conversion metric definition like this:
```yaml
-- name: vist_to_buy_conversion_rate_7_day_window
+- name: visit_to_buy_conversion_rate_7_day_window
description: "Conversion rate from viewing a page to making a purchase"
type: conversion
label: Visit to Seller Conversion Rate (7 day window)
@@ -345,7 +349,6 @@ on
and v.ds <= buy_source.ds
and v.ds > buy_source.ds - interval '7 day'
and buy_source.product_id = v.product_id --Joining on the constant property product_id
-
```
diff --git a/website/docs/docs/build/cumulative-metrics.md b/website/docs/docs/build/cumulative-metrics.md
index 94ed762d1f0..ec962969c9e 100644
--- a/website/docs/docs/build/cumulative-metrics.md
+++ b/website/docs/docs/build/cumulative-metrics.md
@@ -20,6 +20,7 @@ This metric is common for calculating things like weekly active users, or month-
| `measure` | The measure you are referencing. | Required |
| `window` | The accumulation window, such as 1 month, 7 days, 1 year. This can't be used with `grain_to_date`. | Optional |
| `grain_to_date` | Sets the accumulation grain, such as month will accumulate data for one month. Then restart at the beginning of the next. This can't be used with `window`. | Optional |
+| `fill_nulls_with` | Set the value in your metric definition instead of null (such as zero).| Optional |
The following displays the complete specification for cumulative metrics, along with an example:
@@ -30,6 +31,7 @@ metrics:
type: cumulative # Required
label: The value that will be displayed in downstream tools # Required
type_params: # Required
+ fill_nulls_with: Set the value in your metric definition instead of null (such as zero) # Optional
measure: The measure you are referencing # Required
window: The accumulation window, such as 1 month, 7 days, 1 year. # Optional. Cannot be used with grain_to_date
grain_to_date: Sets the accumulation grain, such as month will accumulate data for one month, then restart at the beginning of the next. # Optional. Cannot be used with window
@@ -37,6 +39,7 @@ metrics:
```
## Limitations
+
Cumulative metrics are currently under active development and have the following limitations:
- You are required to use [`metric_time` dimension](/docs/build/dimensions#time) when querying cumulative metrics. If you don't use `metric_time` in the query, the cumulative metric will return incorrect results because it won't perform the time spine join. This means you cannot reference time dimensions other than the `metric_time` in the query.
@@ -59,12 +62,14 @@ metrics:
description: The cumulative value of all orders
type: cumulative
type_params:
+ fill_nulls_with: 0
measure: order_total
- name: cumulative_order_total_l1m
label: Cumulative Order total (L1M)
description: Trailing 1 month cumulative order amount
type: cumulative
type_params:
+ fill_nulls_with: 0
measure: order_total
window: 1 month
- name: cumulative_order_total_mtd
@@ -72,6 +77,7 @@ metrics:
description: The month to date value of all orders
type: cumulative
type_params:
+ fill_nulls_with: 0
measure: order_total
grain_to_date: month
```
@@ -201,16 +207,16 @@ The current method connects the metric table to a timespine table using the prim
``` sql
select
- count(distinct distinct_users) as weekly_active_users
- , metric_time
+ count(distinct distinct_users) as weekly_active_users,
+ metric_time
from (
select
- subq_3.distinct_users as distinct_users
- , subq_3.metric_time as metric_time
+ subq_3.distinct_users as distinct_users,
+ subq_3.metric_time as metric_time
from (
select
- subq_2.distinct_users as distinct_users
- , subq_1.metric_time as metric_time
+ subq_2.distinct_users as distinct_users,
+ subq_1.metric_time as metric_time
from (
select
metric_time
@@ -223,8 +229,8 @@ from (
) subq_1
inner join (
select
- distinct_users as distinct_users
- , date_trunc('day', ds) as metric_time
+ distinct_users as distinct_users,
+ date_trunc('day', ds) as metric_time
from demo_schema.transactions transactions_src_426
where (
(date_trunc('day', ds)) >= cast('1999-12-26' as timestamp)
@@ -241,6 +247,7 @@ from (
) subq_3
)
group by
- metric_time
-limit 100
+ metric_time
+limit 100;
```
diff --git a/website/docs/docs/build/derived-metrics.md b/website/docs/docs/build/derived-metrics.md
index 7f01736d2b3..35adb12cb1a 100644
--- a/website/docs/docs/build/derived-metrics.md
+++ b/website/docs/docs/build/derived-metrics.md
@@ -21,6 +21,7 @@ In MetricFlow, derived metrics are metrics created by defining an expression usi
| `metrics` | The list of metrics used in the derived metrics. | Required |
| `alias` | Optional alias for the metric that you can use in the expr. | Optional |
| `filter` | Optional filter to apply to the metric. | Optional |
+| `fill_nulls_with` | Set the value in your metric definition instead of null (such as zero). | Optional |
| `offset_window` | Set the period for the offset window, such as 1 month. This will return the value of the metric one month from the metric time. | Optional |
The following displays the complete specification for derived metrics, along with an example.
@@ -32,6 +33,7 @@ metrics:
type: derived # Required
label: The value that will be displayed in downstream tools #Required
type_params: # Required
+ fill_nulls_with: Set the value in your metric definition instead of null (such as zero) # Optional
expr: the derived expression # Required
metrics: # The list of metrics used in the derived metrics # Required
- name: the name of the metrics. must reference a metric you have already defined # Required
@@ -49,6 +51,7 @@ metrics:
type: derived
label: Order Gross Profit
type_params:
+ fill_nulls_with: 0
expr: revenue - cost
metrics:
- name: order_total
@@ -60,6 +63,7 @@ metrics:
description: "The gross profit for each food order."
type: derived
type_params:
+ fill_nulls_with: 0
expr: revenue - cost
metrics:
- name: order_total
@@ -96,6 +100,7 @@ The following example displays how you can calculate monthly revenue growth usin
description: Percentage of customers that are active now and those active 1 month ago
label: customer_retention
type_params:
+ fill_nulls_with: 0
expr: (active_customers/ active_customers_prev_month)
metrics:
- name: active_customers
@@ -115,6 +120,7 @@ You can query any granularity and offset window combination. The following examp
type: derived
label: d7 Bookings Change
type_params:
+ fill_nulls_with: 0
expr: bookings - bookings_7_days_ago
metrics:
- name: bookings
@@ -126,10 +132,10 @@ You can query any granularity and offset window combination. The following examp
When you run the query `dbt sl query --metrics d7_booking_change --group-by metric_time__month` for the metric, here's how it's calculated. For dbt Core, you can use the `mf query` prefix.
-1. We retrieve the raw, unaggregated dataset with the specified measures and dimensions at the smallest level of detail, which is currently 'day'.
-2. Then, we perform an offset join on the daily dataset, followed by performing a date trunc and aggregation to the requested granularity.
+1. Retrieve the raw, unaggregated dataset with the specified measures and dimensions at the smallest level of detail, which is currently 'day'.
+2. Then, perform an offset join on the daily dataset, followed by a date trunc and aggregation to the requested granularity.
For example, to calculate `d7_booking_change` for July 2017:
- - First, we sum up all the booking values for each day in July to calculate the bookings metric.
+ - First, sum up all the booking values for each day in July to calculate the bookings metric.
- The following table displays the range of days that make up this monthly aggregation.
| | Orders | Metric_time |
@@ -139,7 +145,7 @@ When you run the query `dbt sl query --metrics d7_booking_change --group-by met
| | 78 | 2017-07-01 |
| Total | 7438 | 2017-07-01 |
-3. Next, we calculate July's bookings with a 7-day offset. The following table displays the range of days that make up this monthly aggregation. Note that the month begins 7 days later (offset by 7 days) on 2017-07-24.
+3. Calculate July's bookings with a 7-day offset. The following table displays the range of days that make up this monthly aggregation. Note that the month begins 7 days later (offset by 7 days) on 2017-07-24.
| | Orders | Metric_time |
| - | ---- | -------- |
@@ -148,7 +154,7 @@ When you run the query `dbt sl query --metrics d7_booking_change --group-by met
| | 83 | 2017-06-24 |
| Total | 7252 | 2017-07-01 |
-4. Lastly, we calculate the derived metric and return the final result set:
+4. Lastly, calculate the derived metric and return the final result set:
```bash
bookings - bookings_7_days_ago would be compile as 7438 - 7252 = 186.
diff --git a/website/docs/docs/build/metrics-overview.md b/website/docs/docs/build/metrics-overview.md
index ea602d0953f..f6844c60498 100644
--- a/website/docs/docs/build/metrics-overview.md
+++ b/website/docs/docs/build/metrics-overview.md
@@ -9,7 +9,7 @@ pagination_next: "docs/build/cumulative"
Once you've created your semantic models, it's time to start adding metrics! Metrics can be defined in the same YAML files as your semantic models, or split into separate YAML files into any other subdirectories (provided that these subdirectories are also within the same dbt project repo)
-The keys for metrics definitions are:
+The keys for metrics definitions are:
| Parameter | Description | Type |
| --------- | ----------- | ---- |
@@ -22,7 +22,6 @@ The keys for metrics definitions are:
| `filter` | You can optionally add a filter string to any metric type, applying filters to dimensions, entities, or time dimensions during metric computation. Consider it as your WHERE clause. | Optional |
| `meta` | Additional metadata you want to add to your metric. | Optional |
-
Here's a complete example of the metrics spec configuration:
```yaml
@@ -39,14 +38,7 @@ metrics:
null
```
-This page explains the different supported metric types you can add to your dbt project.
-
+This page explains the different supported metric types you can add to your dbt project.
### Conversion metrics
@@ -55,10 +47,11 @@ This page explains the different supported metric types you can add to your dbt
```yaml
metrics:
- name: The metric name # Required
- description: the metric description # Optional
+ description: The metric description # Optional
type: conversion # Required
label: # Required
type_params: # Required
+ fill_nulls_with: Set the value in your metric definition instead of null (such as zero) # Optional
conversion_type_params: # Required
entity: ENTITY # Required
calculation: CALCULATION_TYPE # Optional. default: conversion_rate. options: conversions(buys) or conversion_rate (buys/visits), and more to come.
@@ -82,9 +75,10 @@ metrics:
- support@getdbt.com
type: cumulative
type_params:
+ fill_nulls_with: 0
measures:
- distinct_users
- #Omitting window will accumulate the measure over all time
+ # Omitting window will accumulate the measure over all time
window: 7 days
```
@@ -100,6 +94,7 @@ metrics:
type: derived
label: Order Gross Profit
type_params:
+ fill_nulls_with: 0
expr: revenue - cost
metrics:
- name: order_total
@@ -139,6 +134,7 @@ metrics:
# Define the metrics from the semantic manifest as numerator or denominator
type: ratio
type_params:
+ fill_nulls_with: 0
numerator: cancellations
denominator: transaction_amount
filter: | # add optional constraint string. This applies to both the numerator and denominator
@@ -157,6 +153,7 @@ metrics:
filter: | # add optional constraint string. This applies to both the numerator and denominator
{{ Dimension('customer__country') }} = 'MX'
```
+
### Simple metrics
[Simple metrics](/docs/build/simple) point directly to a measure. You may think of it as a function that takes only one measure as the input.
@@ -171,6 +168,7 @@ metrics:
- name: cancellations
type: simple
type_params:
+ fill_nulls_with: 0
measure: cancellations_usd # Specify the measure you are creating a proxy for.
filter: |
{{ Dimension('order__value')}} > 100 and {{Dimension('user__acquisition')}}
@@ -187,6 +185,7 @@ filter: |
filter: |
{{ TimeDimension('time_dimension', 'granularity') }}
```
+
### Further configuration
You can set more metadata for your metrics, which can be used by other tools later on. The way this metadata is used will vary based on the specific integration partner
diff --git a/website/docs/docs/build/ratio-metrics.md b/website/docs/docs/build/ratio-metrics.md
index 97efe0f55bf..5de4128c1f5 100644
--- a/website/docs/docs/build/ratio-metrics.md
+++ b/website/docs/docs/build/ratio-metrics.md
@@ -21,6 +21,7 @@ Ratio allows you to create a ratio between two metrics. You simply specify a num
| `denominator` | The name of the metric used for the denominator, or structure of properties. | Required |
| `filter` | Optional filter for the numerator or denominator. | Optional |
| `alias` | Optional alias for the numerator or denominator. | Optional |
+| `fill_nulls_with` | Set the value in your metric definition instead of null (such as zero). | Optional |
The following displays the complete specification for ratio metrics, along with an example.
@@ -31,6 +32,7 @@ metrics:
type: ratio # Required
label: The value that will be displayed in downstream tools #Required
type_params: # Required
+ fill_nulls_with: Set value instead of null (such as zero) # Optional
numerator: The name of the metric used for the numerator, or structure of properties # Required
name: Name of metric used for the numerator # Required
filter: Filter for the numerator # Optional
@@ -50,10 +52,11 @@ metrics:
label: Food Order Ratio
type: ratio
type_params:
+ fill_nulls_with: 0
numerator: food_orders
denominator: orders
-
```
+
## Ratio metrics using different semantic models
The system will simplify and turn the numerator and denominator in a ratio metric from different semantic models by computing their values in sub-queries. It will then join the result set based on common dimensions to calculate the final ratio. Here's an example of the SQL generated for such a ratio metric.
@@ -61,16 +64,16 @@ The system will simplify and turn the numerator and denominator in a ratio metri
```sql
select
- subq_15577.metric_time as metric_time
- , cast(subq_15577.mql_queries_created_test as double) / cast(nullif(subq_15582.distinct_query_users, 0) as double) as mql_queries_per_active_user
+ subq_15577.metric_time as metric_time,
+ cast(subq_15577.mql_queries_created_test as double) / cast(nullif(subq_15582.distinct_query_users, 0) as double) as mql_queries_per_active_user
from (
select
- metric_time
- , sum(mql_queries_created_test) as mql_queries_created_test
+ metric_time,
+ sum(mql_queries_created_test) as mql_queries_created_test
from (
select
- cast(query_created_at as date) as metric_time
- , case when query_status in ('PENDING','MODE') then 1 else 0 end as mql_queries_created_test
+ cast(query_created_at as date) as metric_time,
+ case when query_status in ('PENDING','MODE') then 1 else 0 end as mql_queries_created_test
from prod_dbt.mql_query_base mql_queries_test_src_2552
) subq_15576
group by
@@ -78,12 +81,12 @@ from (
) subq_15577
inner join (
select
- metric_time
- , count(distinct distinct_query_users) as distinct_query_users
+ metric_time,
+ count(distinct distinct_query_users) as distinct_query_users
from (
select
- cast(query_created_at as date) as metric_time
- , case when query_status in ('MODE','PENDING') then email else null end as distinct_query_users
+ cast(query_created_at as date) as metric_time,
+ case when query_status in ('MODE','PENDING') then email else null end as distinct_query_users
from prod_dbt.mql_query_base mql_queries_src_2585
) subq_15581
group by
@@ -115,6 +118,7 @@ metrics:
- support@getdbt.com
type: ratio
type_params:
+ fill_nulls_with: 0
numerator:
name: distinct_purchasers
filter: |
@@ -124,4 +128,7 @@ metrics:
name: distinct_purchasers
```
-Note the `filter` and `alias` parameters for the metric referenced in the numerator. Use the `filter` parameter to apply a filter to the metric it's attached to. The `alias` parameter is used to avoid naming conflicts in the rendered SQL queries when the same metric is used with different filters. If there are no naming conflicts, the `alias` parameter can be left out.
+Note the `filter` and `alias` parameters for the metric referenced in the numerator.
+- Use the `filter` parameter to apply a filter to the metric it's attached to.
+- Use the `alias` parameter to avoid naming conflicts in the rendered SQL queries when the same metric is used with different filters.
+- If there are no naming conflicts, you can leave out the `alias` parameter.
diff --git a/website/docs/docs/build/simple.md b/website/docs/docs/build/simple.md
index 1803e952a69..fafb770dd04 100644
--- a/website/docs/docs/build/simple.md
+++ b/website/docs/docs/build/simple.md
@@ -19,6 +19,7 @@ Simple metrics are metrics that directly reference a single measure, without any
| `label` | The value that will be displayed in downstream tools. | Required |
| `type_params` | The type parameters of the metric. | Required |
| `measure` | The measure you're referencing. | Required |
+| `fill_nulls_with` | Set the value in your metric definition instead of null (such as zero). | Optional |
The following displays the complete specification for simple metrics, along with an example.
@@ -28,9 +29,10 @@ metrics:
- name: The metric name # Required
description: the metric description # Optional
type: simple # Required
- label: The value that will be displayed in downstream tools #Required
+ label: The value that will be displayed in downstream tools # Required
type_params: # Required
measure: The measure you're referencing # Required
+ fill_nulls_with: Set value instead of null (such as zero) # Optional
```
@@ -50,13 +52,16 @@ If you've already defined the measure using the `create_metric: true` parameter,
type: simple # Pointers to a measure you created in a semantic model
label: Count of customers
type_params:
- measure: customers # The measure youre creating a proxy of.
+ fill_nulls_with: 0
+ measure: customers # The measure you're creating a proxy of.
- name: large_orders
description: "Order with order values over 20."
type: SIMPLE
label: Large Orders
type_params:
+ fill_nulls_with: 0
measure: orders
filter: | # For any metric you can optionally include a filter on dimension values
{{Dimension('customer__order_total_dim')}} >= 20
```
+
diff --git a/website/docs/docs/dbt-cloud-apis/sl-graphql.md b/website/docs/docs/dbt-cloud-apis/sl-graphql.md
index e1caf6c70b8..f26a19a1930 100644
--- a/website/docs/docs/dbt-cloud-apis/sl-graphql.md
+++ b/website/docs/docs/dbt-cloud-apis/sl-graphql.md
@@ -219,6 +219,8 @@ DimensionType = [CATEGORICAL, TIME]
### Querying
+When querying for data, _either_ a `groupBy` _or_ a `metrics` selection is required.
+
**Create Dimension Values query**
```graphql
@@ -443,22 +445,35 @@ mutation {
}
```
+**Query a categorical dimension on its own**
+
+```graphql
+mutation {
+ createQuery(
+ environmentId: 123456
+ groupBy: [{name: "customer__customer_type"}]
+ ) {
+ queryId
+ }
+}
+```
+
**Query with a where filter**
The `where` filter takes a list argument (or a string for a single input). Depending on the object you are filtering, there are a couple of parameters:
- - `Dimension()` — Used for any categorical or time dimensions. If used for a time dimension, granularity is required. For example, `Dimension('metric_time').grain('week')` or `Dimension('customer__country')`.
+ - `Dimension()` — Used for any categorical or time dimensions. For example, `Dimension('metric_time').grain('week')` or `Dimension('customer__country')`.
- `Entity()` — Used for entities like primary and foreign keys, such as `Entity('order_id')`.
-Note: If you prefer a more strongly typed `where` clause, you can optionally use `TimeDimension()` to separate out categorical dimensions from time ones. The `TimeDimension` input takes the time dimension name and also requires granularity. For example, `TimeDimension('metric_time', 'MONTH')`.
+Note: If you prefer a `where` clause with a more explicit path, you can optionally use `TimeDimension()` to separate categorical dimensions from time ones. The `TimeDimension` input takes the time dimension name and, optionally, the granularity level, such as `TimeDimension('metric_time', 'month')`.
```graphql
mutation {
createQuery(
environmentId: BigInt!
metrics:[{name: "order_total"}]
- groupBy:[{name: "customer__customer_type"}, {name: "metric_time", grain: MONTH}]
+ groupBy:[{name: "customer__customer_type"}, {name: "metric_time", grain: month}]
where:[{sql: "{{ Dimension('customer__customer_type') }} = 'new'"}, {sql:"{{ Dimension('metric_time').grain('month') }} > '2022-10-01'"}]
) {
queryId
@@ -466,6 +481,55 @@ mutation {
}
```
+For `TimeDimension()`, the grain is only required in the `WHERE` filter if the aggregation time dimensions for the measures and metrics associated with the where filter have different grains.
+
+For example, consider this semantic model and metric configuration, which contains two metrics that are aggregated across different time grains. This example shows a single semantic model, but the same goes for metrics across more than one semantic model.
+
+```yaml
+semantic_model:
+  name: my_model_source
+  defaults:
+    agg_time_dimension: created_month
+  measures:
+    - name: measure_0
+      agg: sum
+    - name: measure_1
+      agg: sum
+      agg_time_dimension: order_year
+  dimensions:
+    - name: created_month
+      type: time
+      type_params:
+        time_granularity: month
+    - name: order_year
+      type: time
+      type_params:
+        time_granularity: year
+
+metrics:
+  - name: metric_0
+    description: A metric with a month grain.
+    type: simple
+    type_params:
+      measure: measure_0
+  - name: metric_1
+    description: A metric with a year grain.
+    type: simple
+    type_params:
+      measure: measure_1
+```
+
+Assuming the user is querying `metric_0` and `metric_1` together, a valid filter would be:
+
+ * `"{{ TimeDimension('metric_time', 'year') }} > '2020-01-01'"`
+
+Invalid filters would be:
+
+ * `"{{ TimeDimension('metric_time') }} > '2020-01-01'"` — metrics in the query are defined based on measures with different grains.
+
+ * `"{{ TimeDimension('metric_time', 'month') }} > '2020-01-01'"` — `metric_1` is not available at a month grain.
+
**Query with Order**
```graphql
diff --git a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md
index 0927f8acc02..2e928db6af2 100644
--- a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md
+++ b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md
@@ -92,9 +92,9 @@ select * from {{
-Use this query to fetch dimension values for one or multiple metrics and single dimension.
+Use this query to fetch dimension values for one or multiple metrics and a single dimension.
-Note, `metrics` is a required argument that lists one or multiple metrics in it, and a single dimension.
+Note, `metrics` is a required argument that lists one or multiple metrics, and `group_by` takes a single dimension.
```bash
select * from {{
@@ -105,9 +105,9 @@ semantic_layer.dimension_values(metrics=['food_order_amount'], group_by=['custom
-Use this query to fetch queryable granularities for a list of metrics. This API request allows you to only show the time granularities that make sense for the primary time dimension of the metrics (such as `metric_time`), but if you want queryable granularities for other time dimensions, you can use the `dimensions()` call, and find the column queryable_granularities.
+You can use this query to fetch queryable granularities for a list of metrics. This API request only returns the time granularities that make sense for the primary time dimension of the metrics (such as `metric_time`). If you want queryable granularities for other time dimensions, use the `dimensions()` call and find the `queryable_granularities` column.
-Note, `metrics` is a required argument that lists one or multiple metrics in it.
+Note, `metrics` is a required argument that lists one or multiple metrics.
```bash
select * from {{
@@ -124,7 +124,7 @@ select * from {{
Use this query to fetch available metrics given dimensions. This command is essentially the opposite of getting dimensions given a list of metrics.
-Note, `group_by` is a required argument that lists one or multiple dimensions in it.
+Note, `group_by` is a required argument that lists one or multiple dimensions.
```bash
select * from {{
@@ -137,7 +137,7 @@ select * from {{
-Use this example query to fetch available granularities for all time dimesensions (the similar queryable granularities API call only returns granularities for the primary time dimensions for metrics). The following call is a derivative of the `dimensions()` call and specifically selects the granularities field.
+You can use this example query to fetch available granularities for all time dimensions (the similar queryable granularities API call only returns granularities for the primary time dimensions for metrics). The following call is a derivative of the `dimensions()` call and specifically selects the `QUERYABLE_GRANULARITIES` field.
```bash
select NAME, QUERYABLE_GRANULARITIES from {{
@@ -179,8 +179,6 @@ To query metric values, here are the following parameters that are available. Yo
|`order` | Order the data returned by a particular field | `order_by=['order_gross_profit']`, use `-` for descending, or full object notation if the object is operated on: `order_by=[Metric('order_gross_profit').descending(True)`] |
| `compile` | If true, returns generated SQL for the data platform but does not execute | `compile=True` |
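+
+For example, a compile-only request might look like this sketch (it reuses the `food_order_amount` metric and the `metric_time` grouping from the other examples on this page; adjust the names to match your project):
+
+```bash
+select * from {{
+semantic_layer.query(metrics=['food_order_amount'],
+group_by=[Dimension('metric_time').grain('month')],
+compile=True)
+}}
+```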
-
-
## Note on time dimensions and `metric_time`
You will notice that in the list of dimensions for all metrics, there is a dimension called `metric_time`. `Metric_time` is a reserved keyword for the measure-specific aggregation time dimensions. For any time-series metric, the `metric_time` keyword should always be available for use in queries. This is a common dimension across *all* metrics in a semantic graph.
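+
+For instance, a minimal sketch that groups a metric by `metric_time` (it reuses the `food_order_amount` metric from the query examples later on this page):
+
+```bash
+select * from {{
+semantic_layer.query(metrics=['food_order_amount'],
+group_by=[Dimension('metric_time')])
+}}
+```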
@@ -266,11 +264,62 @@ Where filters in API allow for a filter list or string. We recommend using the f
Where Filters have a few objects that you can use:
-- `Dimension()` - Used for any categorical or time dimensions. If used for a time dimension, granularity is required - `Dimension('metric_time').grain('week')` or `Dimension('customer__country')`
+- `Dimension()` — Used for any categorical or time dimensions. For example, `Dimension('metric_time').grain('week')` or `Dimension('customer__country')`.
+
+- `TimeDimension()` — Used as a more explicit definition for time dimensions; it optionally takes a granularity, such as `TimeDimension('metric_time', 'month')`.
+
+- `Entity()` — Used for entities like primary and foreign keys, such as `Entity('order_id')`.
+
+
+For `TimeDimension()`, the grain is only required in the `WHERE` filter if the aggregation time dimensions for the measures and metrics associated with the where filter have different grains.
+
+For example, consider this semantic model and metric configuration, which contains two metrics that are aggregated across different time grains. This example shows a single semantic model, but the same goes for metrics across more than one semantic model.
+
+```yaml
+semantic_model:
+  name: my_model_source
+  defaults:
+    agg_time_dimension: created_month
+  measures:
+    - name: measure_0
+      agg: sum
+    - name: measure_1
+      agg: sum
+      agg_time_dimension: order_year
+  dimensions:
+    - name: created_month
+      type: time
+      type_params:
+        time_granularity: month
+    - name: order_year
+      type: time
+      type_params:
+        time_granularity: year
+
+metrics:
+  - name: metric_0
+    description: A metric with a month grain.
+    type: simple
+    type_params:
+      measure: measure_0
+  - name: metric_1
+    description: A metric with a year grain.
+    type: simple
+    type_params:
+      measure: measure_1
+```
+
+Assuming the user is querying `metric_0` and `metric_1` together in a single request, a valid `WHERE` filter would be:
+
+ * `"{{ TimeDimension('metric_time', 'year') }} > '2020-01-01'"`
+
+Invalid filters would be:
-- `Entity()` - Used for entities like primary and foreign keys - `Entity('order_id')`
+ * `"{{ TimeDimension('metric_time') }} > '2020-01-01'"` — metrics in the query are defined based on measures with different grains.
-Note: If you prefer a more explicit path to create the `where` clause, you can optionally use the `TimeDimension` feature. This helps separate out categorical dimensions from time-related ones. The `TimeDimesion` input takes the time dimension name and also requires granularity, like this: `TimeDimension('metric_time', 'MONTH')`.
+ * `"{{ TimeDimension('metric_time', 'month') }} > '2020-01-01'"` — `metric_1` is not available at a month grain.
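+
+As a sketch of how the valid filter above plugs into a full query for `metric_0` and `metric_1` (the yearly grouping is an assumption consistent with the example configuration):
+
+```bash
+select * from {{
+semantic_layer.query(metrics=['metric_0', 'metric_1'],
+group_by=[Dimension('metric_time').grain('year')],
+where=["{{ TimeDimension('metric_time', 'year') }} > '2020-01-01'"])
+}}
+```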
- Use the following example to query using a `where` filter with the string format:
@@ -295,7 +344,7 @@ where=["{{ Dimension('metric_time').grain('month') }} >= '2017-03-09'", "{{ Dime
### Query with a limit
-Use the following example to query using a `limit` or `order_by` clauses:
+Use the following example to query using a `limit` or `order_by` clause:
```bash
select * from {{
@@ -303,10 +352,11 @@ semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
group_by=[Dimension('metric_time')],
limit=10)
}}
-```
+```
+
### Query with Order By Examples
-Order By can take a basic string that's a Dimension, Metric, or Entity and this will default to ascending order
+Order By can take a basic string that's a Dimension, Metric, or Entity, and this will default to ascending order.
```bash
select * from {{
@@ -317,7 +367,7 @@ semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
}}
```
-For descending order, you can add a `-` sign in front of the object. However, you can only use this short hand notation if you aren't operating on the object or using the full object notation.
+For descending order, you can add a `-` sign in front of the object. However, you can only use this short-hand notation if you aren't operating on the object or using the full object notation.
```bash
select * from {{
@@ -326,8 +376,9 @@ semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
limit=10,
order_by=[-'order_gross_profit'])
}}
-```
-If you are ordering by an object that's been operated on (e.g., change granularity), or you are using the full object notation, descending order must look like:
+```
+
+If you are ordering by an object that's been operated on (for example, you changed the granularity of the time dimension), or you are using the full object notation, descending order must look like:
```bash
select * from {{
@@ -366,14 +417,24 @@ semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
-- **Why do some dimensions use different syntax, like `metric_time` versus `[Dimension('metric_time')`?**
- When you select a dimension on its own, such as `metric_time` you can use the shorthand method which doesn't need the “Dimension” syntax. However, when you perform operations on the dimension, such as adding granularity, the object syntax `[Dimension('metric_time')` is required.
+
+When you select a dimension on its own, such as `metric_time`, you can use the shorthand method, which doesn't need the `Dimension` syntax.
+
+However, when you perform operations on the dimension, such as adding granularity, the object syntax `Dimension('metric_time')` is required.
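+
+A short sketch of the difference, reusing the `food_order_amount` metric from the examples above (substitute your own metric):
+
+```bash
+-- Shorthand: the dimension is selected on its own
+select * from {{
+semantic_layer.query(metrics=['food_order_amount'],
+group_by=['metric_time'])
+}}
+
+-- Object syntax: required when operating on the dimension, such as changing its granularity
+select * from {{
+semantic_layer.query(metrics=['food_order_amount'],
+group_by=[Dimension('metric_time').grain('month')])
+}}
+```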
+
+
+
+
+The double underscore `"__"` syntax indicates a mapping from an entity to a dimension, as well as where the dimension is located. For example, `user__country` means someone is looking at the `country` dimension from the `user` table.
+
+
+
+
+The default output follows the format `{time_dimension_name}__{granularity_level}`.
-- **What does the double underscore `"__"` syntax in dimensions mean?**
- The double underscore `"__"` syntax indicates a mapping from an entity to a dimension, as well as where the dimension is located. For example, `user__country` means someone is looking at the `country` dimension from the `user` table.
+So for example, if the `time_dimension_name` is `ds` and the granularity level is yearly, the output is `ds__year`.
-- **What is the default output when adding granularity?**
- The default output follows the format `{time_dimension_name}__{granularity_level}`. So for example, if the time dimension name is `ds` and the granularity level is yearly, the output is `ds__year`.
+
## Related docs
diff --git a/website/docs/reference/global-configs/print-output.md b/website/docs/reference/global-configs/print-output.md
index 112b92b546f..78de635f2dd 100644
--- a/website/docs/reference/global-configs/print-output.md
+++ b/website/docs/reference/global-configs/print-output.md
@@ -8,35 +8,17 @@ sidebar: "Print output"
-By default, dbt includes `print()` messages in standard out (stdout). You can use the `NO_PRINT` config to prevent these messages from showing up in stdout.
-
-
-
-```yaml
-config:
- no_print: true
-```
-
-
+By default, dbt includes `print()` messages in standard out (stdout). You can set the `DBT_NO_PRINT` environment variable to `true` to prevent these messages from showing up in stdout.
-By default, dbt includes `print()` messages in standard out (stdout). You can use the `PRINT` config to prevent these messages from showing up in stdout.
-
-
-
-```yaml
-config:
- print: false
-```
-
-
+By default, dbt includes `print()` messages in standard out (stdout). You can set the `DBT_PRINT` environment variable to `false` to prevent these messages from showing up in stdout.
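+
+For example, one way to suppress these messages for a single invocation (a sketch assuming a POSIX shell; it mirrors the `--no-print` flag shown below):
+
+```bash
+# Suppress print() messages for this run only
+DBT_PRINT=false dbt run
+```
+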
:::warning Syntax deprecation
-The original `NO_PRINT` syntax has been deprecated, starting with dbt v1.5. Backward compatibility is supported but will be removed in an as-of-yet-undetermined future release.
+The original `DBT_NO_PRINT` environment variable has been deprecated, starting with dbt v1.5. Backward compatibility is supported but will be removed in an as-of-yet-undetermined future release.
:::
@@ -46,8 +28,6 @@ Supply `--no-print` flag to `dbt run` to suppress `print()` messages from showin
```text
dbt --no-print run
-...
-
```
### Printer width