diff --git a/website/docs/docs/build/incremental-models.md b/website/docs/docs/build/incremental-models.md
index 46788758ee6..2a247263159 100644
--- a/website/docs/docs/build/incremental-models.md
+++ b/website/docs/docs/build/incremental-models.md
@@ -174,7 +174,7 @@ dbt's incremental materialization works differently on different databases. Wher
On warehouses that do not support `merge` statements, a merge is implemented by first using a `delete` statement to delete records in the target table that are to be updated, and then an `insert` statement.
-Transaction management is used to ensure this is executed as a single unit of work.
+Transaction management, a process used in certain data platforms, ensures that a set of actions is treated as a single unit of work (or task). If any part of the unit of work fails, dbt will roll back open transactions and restore the database to a good state.
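+
+As a hedged sketch (not dbt's exact generated SQL, which varies by adapter; table names here are hypothetical), the fallback pattern looks roughly like this:
+
+```sql
+begin;
+
+-- remove the rows that are about to be replaced
+delete from analytics.fct_orders
+where id in (select id from analytics.fct_orders__dbt_tmp);
+
+-- insert the new and updated rows
+insert into analytics.fct_orders
+select * from analytics.fct_orders__dbt_tmp;
+
+-- if either statement fails, the transaction rolls back instead of committing
+commit;
+```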
## What if the columns of my incremental model change?
diff --git a/website/docs/docs/build/metrics.md b/website/docs/docs/build/metrics.md
deleted file mode 100644
index 7afcb41c2e4..00000000000
--- a/website/docs/docs/build/metrics.md
+++ /dev/null
@@ -1,696 +0,0 @@
----
-title: "Metrics"
-id: "metrics"
-description: "When you define metrics in dbt projects, you encode crucial business logic in tested, version-controlled code. The dbt metrics layer helps you standardize metrics within your organization."
-keywords:
- - dbt metrics layer
-tags: [Metrics]
----
-
-import DeprecationNotice from '/snippets/_sl-deprecation-notice.md';
-
-
-
-
-
-
-The dbt Semantic Layer has undergone a [significant revamp](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/), improving governance, introducing new APIs, and making it more efficient to define/query metrics. This revamp means the dbt_metrics package and the legacy Semantic Layer, available in dbt v1.5 or lower, are no longer supported and won't receive any code fixes.
-
-**What’s changed?**
-The dbt_metrics package has been [deprecated](https://docs.getdbt.com/blog/deprecating-dbt-metrics) and replaced with [MetricFlow](/docs/build/about-metricflow?version=1.6), a new framework for defining metrics in dbt. This means dbt_metrics is no longer supported after dbt v1.5 and won't receive any code fixes. We will also remove the dbt_metrics spec and docs when it's fully deprecated.
-
-**Who does this affect?**
-Anyone who uses the dbt_metrics package or is integrated with the legacy Semantic Layer. The new Semantic Layer is available to [Team or Enterprise](https://www.getdbt.com/pricing/) multi-tenant dbt Cloud plans [hosted in North America](/docs/cloud/about-cloud/regions-ip-addresses). You must be on dbt v1.6 or higher to access it. All users can define metrics using MetricFlow. Users on dbt Cloud Developer plans or dbt Core can only use it to define and test metrics locally, but can't dynamically query them with integrated tools.
-
-**What should you do?**
-If you've defined metrics using dbt_metrics or integrated with the legacy Semantic Layer, we **highly** recommend you [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud) to dbt v1.6 or higher to use MetricFlow or the new dbt Semantic Layer. To migrate to the new Semantic Layer, refer to the dedicated [migration guide](/guides/sl-migration) for more info.
-
-
-
-
-
-
-
-A metric is an aggregation over a table that supports zero or more dimensions. Some examples of metrics include:
-- active users
-- monthly recurring revenue (mrr)
-
-In v1.0, dbt supports metric definitions as a new node type. Like [exposures](exposures), metrics appear as nodes in the directed acyclic graph (DAG) and can be expressed in YAML files. Defining metrics in dbt projects encodes crucial business logic in tested, version-controlled code. Further, you can expose these metrics definitions to downstream tooling, which drives consistency and precision in metric reporting.
-
-### Benefits of defining metrics
-
-**Use metric specifications in downstream tools**
-dbt's compilation context can access metrics via the [`graph.metrics` variable](graph). The [manifest artifact](manifest-json) includes metrics for downstream metadata consumption.
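-
-As a hedged illustration (not from the original docs; `graph.metrics` is a dict keyed by unique id, and the attribute name is assumed), a macro could enumerate metric nodes at compile time:
-
-```sql
--- Illustrative Jinja-SQL sketch: emit a comment per metric node in the project.
-{% for metric in graph.metrics.values() %}
-    -- found metric: {{ metric.name }}
-{% endfor %}
-```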
-
-**See and select dependencies**
-As with Exposures, you can see everything that rolls up into a metric (`dbt ls -s +metric:*`), and visualize them in [dbt documentation](documentation). For more information, see "[The `metric:` selection method](node-selection/methods#the-metric-method)."
-
-
-
-## Defining a metric
-
-You can define metrics in `.yml` files nested under a `metrics:` key. Metric names must:
-- contain only letters, numbers, and underscores (no spaces or special characters)
-- begin with a letter
-- contain no more than 250 characters
-
-For a short human-friendly name with title casing, spaces, and special characters, use the `label` property.
-
-### Example definition
-
-
-
-
-
-```yaml
-# models/marts/product/schema.yml
-
-version: 2
-
-models:
- - name: dim_customers
- ...
-
-metrics:
- - name: rolling_new_customers
- label: New Customers
- model: ref('dim_customers')
- [description](description): "The 14 day rolling count of paying customers using the product"
-
- calculation_method: count_distinct
- expression: user_id
-
- timestamp: signup_date
- time_grains: [day, week, month, quarter, year]
-
- dimensions:
- - plan
- - country
-
- window:
- count: 14
- period: day
-
- filters:
- - field: is_paying
- operator: 'is'
- value: 'true'
- - field: lifetime_value
- operator: '>='
- value: '100'
- - field: company_name
- operator: '!='
- value: "'Acme, Inc'"
- - field: signup_date
- operator: '>='
- value: "'2020-01-01'"
-
- # general properties
- [config](resource-properties/config):
- enabled: true | false
- treat_null_values_as_zero: true | false
-
- [meta](resource-configs/meta): {team: Finance}
-```
-
-
-
-
-```yaml
-# models/marts/product/schema.yml
-
-version: 2
-
-models:
- - name: dim_customers
- ...
-
-metrics:
- - name: rolling_new_customers
- label: New Customers
- model: ref('dim_customers')
- description: "The 14 day rolling count of paying customers using the product"
-
- type: count_distinct
- sql: user_id
-
- timestamp: signup_date
- time_grains: [day, week, month, quarter, year, all_time]
-
- dimensions:
- - plan
- - country
-
- filters:
- - field: is_paying
- operator: 'is'
- value: 'true'
- - field: lifetime_value
- operator: '>='
- value: '100'
- - field: company_name
- operator: '!='
- value: "'Acme, Inc'"
- - field: signup_date
- operator: '>='
- value: "'2020-01-01'"
-
- meta: {team: Finance}
-```
-
-
-
-
-:::caution
-
-- You cannot define metrics on [ephemeral models](https://docs.getdbt.com/docs/build/materializations#ephemeral). To define a metric, the materialization must have a representation in the data warehouse.
-
-:::
-
-
-### Available properties
-Metrics can have many declared **properties**, which define aspects of your metric. More information on [properties and configs can be found here](https://docs.getdbt.com/reference/configs-and-properties).
-
-
-
-| Field | Description | Example | Required? |
-|-------------|-------------------------------------------------------------|---------------------------------|-----------|
-| name | A unique identifier for the metric | new_customers | yes |
-| model | The dbt model that powers this metric | dim_customers | yes (no for `derived` metrics)|
-| label | A short name / label for the metric | New Customers | yes |
-| description | Long form, human-readable description for the metric | The number of customers who.... | no |
-| calculation_method | The method of calculation (aggregation or derived) that is applied to the expression | count_distinct | yes |
-| expression | The expression to aggregate/calculate over | user_id, cast(user_id as int) |yes |
-| timestamp | The time-based component of the metric | signup_date | no |
-| time_grains | One or more "grains" at which the metric can be evaluated. For more information, see the "Calendar" section. | [day, week, month, quarter, year] | no |
-| dimensions | A list of dimensions to group or filter the metric by | [plan, country] | no |
-| window | A dictionary for aggregating over a window of time. Used for rolling metrics such as 14 day rolling average. Acceptable periods are: [`day`,`week`,`month`, `year`, `all_time`] | {count: 14, period: day} | no |
-| filters | A list of filters to apply before calculating the metric | See below | no |
-| config | [Optional configurations](https://github.com/dbt-labs/dbt_metrics#accepted-metric-configurations) for calculating this metric | {treat_null_values_as_zero: true} | no |
-| meta | Arbitrary key/value store | {team: Finance} | no |
-
-
-
-
-
-| Field | Description | Example | Required? |
-|-------------|-------------------------------------------------------------|---------------------------------|-----------|
-| name | A unique identifier for the metric | new_customers | yes |
-| model | The dbt model that powers this metric | dim_customers | yes (no for `derived` metrics)|
-| label | A short name / label for the metric | New Customers | yes |
-| description | Long form, human-readable description for the metric | The number of customers who.... | no |
-| type | The method of calculation (aggregation or derived) that is applied to the expression | count_distinct | yes |
-| sql | The expression to aggregate/calculate over | user_id, cast(user_id as int) | yes |
-| timestamp | The time-based component of the metric | signup_date | yes |
-| time_grains | One or more "grains" at which the metric can be evaluated | [day, week, month, quarter, year, all_time] | yes |
-| dimensions | A list of dimensions to group or filter the metric by | [plan, country] | no |
-| filters | A list of filters to apply before calculating the metric | See below | no |
-| meta | Arbitrary key/value store | {team: Finance} | no |
-
-
-
-
-### Available calculation methods
-
-
-
-The method of calculation (aggregation or derived) that is applied to the expression.
-
-
-
-
-The type of calculation (aggregation or expression) that is applied to the sql property.
-
-
-
-| Metric Calculation Method | Description |
-|----------------|----------------------------------------------------------------------------|
-| count | This metric type will apply the `count` aggregation to the specified field |
-| count_distinct | This metric type will apply the `count` aggregation to the specified field, with an additional distinct statement inside the aggregation |
-| sum | This metric type will apply the `sum` aggregation to the specified field |
-| average | This metric type will apply the `average` aggregation to the specified field |
-| min | This metric type will apply the `min` aggregation to the specified field |
-| max | This metric type will apply the `max` aggregation to the specified field |
-| median | This metric type will apply the `median` aggregation to the specified field, or an alternative `percentile_cont` aggregation if `median` is not available |
-| derived (formerly `expression`) | This metric type is defined as any _non-aggregating_ calculation of 1 or more metrics |
-
-
-
-### Derived Metrics
-In v1.2, support was added for `derived` metrics (previously named `expression`), which are defined as non-aggregating calculations of 1 or more metrics. An example of this would be `{{metric('total_revenue')}} / {{metric('count_of_customers')}}`.
-
- By defining these metrics, you are able to create metrics like:
-- ratios
-- subtractions
-- any arbitrary calculation
-
-As long as the two (or more) base metrics (metrics that comprise the `derived` metric) share the specified `time_grains` and `dimensions`, those attributes can be used in any downstream metrics macro.
-
-An example definition of a `derived` metric is:
-
-
-
-
-```yaml
-# models/marts/product/schema.yml
-version: 2
-
-models:
- - name: dim_customers
- ...
-
-metrics:
- - name: average_revenue_per_customer
- label: Average Revenue Per Customer
- description: "The average revenue received per customer"
-
- calculation_method: derived
- expression: "{{metric('total_revenue')}} / {{metric('count_of_customers')}}"
-
- timestamp: order_date
- time_grains: [day, week, month, quarter, year, all_time]
- dimensions:
- - had_discount
- - order_country
-
-```
-
-
-
-
-
-### Expression Metrics
-In v1.2, support was added for `expression` metrics, which are defined as non-aggregating calculations of 1 or more metrics. By defining these metrics, you are able to create metrics like:
-- ratios
-- subtractions
-- any arbitrary calculation
-
-As long as the two or more base metrics (the metrics that comprise the `expression` metric) share the specified `time_grains` and `dimensions`, those attributes can be used in any downstream metrics macro.
-
-An example definition of an `expression` metric is:
-
-
-
-
-```yaml
-# models/marts/product/schema.yml
-version: 2
-
-models:
- - name: dim_customers
- ...
-
-metrics:
- - name: average_revenue_per_customer
- label: Average Revenue Per Customer
- description: "The average revenue received per customer"
-
- type: expression
- sql: "{{metric('total_revenue')}} / {{metric('count_of_customers')}}"
-
- timestamp: order_date
- time_grains: [day, week, month, quarter, year, all_time]
- dimensions:
- - had_discount
- - order_country
-
-```
-
-
-### Filters
-Filters should be defined as a list of dictionaries that define predicates for the metric. Filters are combined using AND clauses. For more control, users can (and should) include the complex logic in the model powering the metric.
-
-All three properties (`field`, `operator`, `value`) are required for each defined filter.
-
-Note that `value` must be defined as a string in YAML, because it will be compiled into queries as part of a string. If your filter's value needs to be surrounded by quotes inside the query (e.g. text or dates), use `"'nested'"` quotes:
-
-```yml
- filters:
- - field: is_paying
- operator: 'is'
- value: 'true'
- - field: lifetime_value
- operator: '>='
- value: '100'
- - field: company_name
- operator: '!='
- value: "'Acme, Inc'"
- - field: signup_date
- operator: '>='
- value: "'2020-01-01'"
-```
-
-### Calendar
-The dbt_metrics package contains a [basic calendar table](https://github.com/dbt-labs/dbt_metrics/blob/main/models/dbt_metrics_default_calendar.sql) that is created as part of your `dbt run`. It contains dates between 2010-01-01 and 2029-12-31.
-
-If you want to use a custom calendar, you can replace the default with any table which meets the following requirements:
-- Contains a `date_day` column.
-- Contains the following columns: `date_week`, `date_month`, `date_quarter`, `date_year`, or equivalents.
-- Additional date columns need to be prefixed with `date_`, e.g. `date_4_5_4_month` for a 4-5-4 retail calendar date set. Dimensions can have any name (see following section).
-
-To do this, set the value of the `dbt_metrics_calendar_model` variable in your `dbt_project.yml` file:
-```yaml
-#dbt_project.yml
-config-version: 2
-[...]
-vars:
- dbt_metrics_calendar_model: my_custom_calendar
-```
-
-#### Dimensions from calendar tables
-You may want to aggregate metrics by a dimension in your custom calendar table, for example `is_weekend`. You can include this within the list of dimensions in the macro call without it needing to be defined in the metric definition.
-
-To do so, set a list variable at the project level called `custom_calendar_dimension_list`, as shown in the example below.
-
-```yaml
-#dbt_project.yml
-vars:
- custom_calendar_dimension_list: ["is_weekend"]
-```
-
-
-
-### Configuration
-
-Metric nodes now accept `config` dictionaries like other dbt resources. Specify Metric configs in the metric yml itself, or for groups of metrics in the `dbt_project.yml` file.
-
-
-
-
-
-
-
-
-```yml
-version: 2
-metrics:
- - name: config_metric
- label: Example Metric with Config
- model: ref('my_model')
- calculation_method: count
- timestamp: date_field
- time_grains: [day, week, month]
- config:
- enabled: true
-```
-
-
-
-
-
-
-
-
-```yml
-metrics:
- your_project_name:
- +enabled: true
-```
-
-
-
-
-
-
-
-
-#### Accepted Metric Configurations
-
-The following is the list of currently accepted metric configs:
-
-| Config | Type | Accepted Values | Default Value | Description |
-|--------|------|-----------------|---------------|-------------|
-| `enabled` | boolean | True/False | True | Enables or disables a metric node. When disabled, dbt will not consider it as part of your project. |
-| `treat_null_values_as_zero` | boolean | True/False | True | Controls the `coalesce` behavior for metrics. By default, when there are no observations for a metric, the output of the metric as well as [Period over Period](#secondary-calculations) secondary calculations will include a `coalesce({{ field }}, 0)` to return 0's rather than nulls. Setting this config to False instead returns `NULL` values. |
-
-
-
-## Querying Your Metric
-
-:::caution dbt_metrics is no longer supported
-The dbt_metrics package has been deprecated and replaced with [MetricFlow](/docs/build/about-metricflow?version=1.6), a new framework for defining metrics in dbt. This means dbt_metrics is no longer supported after dbt v1.5 and won't receive any code fixes.
-:::
-
-You can dynamically query metrics directly in dbt and verify them before running a job in the deployment environment. To query your defined metric, you must have the [dbt_metrics package](https://github.com/dbt-labs/dbt_metrics) installed. Information on how to [install packages can be found here](https://docs.getdbt.com/docs/build/packages#how-do-i-add-a-package-to-my-project).
-
-Use the following [metrics package](https://hub.getdbt.com/dbt-labs/metrics/latest/) installation code in your packages.yml file and run `dbt deps` to install the metrics package:
-
-
-
-```yml
-packages:
- - package: dbt-labs/metrics
- version: [">=1.3.0", "<1.4.0"]
-```
-
-
-
-
-
-```yml
-packages:
- - package: dbt-labs/metrics
- version: [">=0.3.0", "<0.4.0"]
-```
-
-
-
-Once the package has been installed with `dbt deps`, make sure to run the `dbt_metrics_default_calendar` model as this is required for macros used to query metrics. More information on this, and additional calendar functionality, can be found in the [project README](https://github.com/dbt-labs/dbt_metrics#calendar).
-
-### Querying metrics with `metrics.calculate`
-Use the `metrics.calculate` macro along with defined metrics to generate a SQL statement that runs the metric aggregation to return the correct metric dataset. Example below:
-
-
-
-```sql
-select *
-from {{ metrics.calculate(
- metric('new_customers'),
- grain='week',
- dimensions=['plan', 'country']
-) }}
-```
-
-
-
-### Supported inputs
-The example above doesn't display all the potential inputs you can provide to the macro.
-
-You may find some pieces of functionality, like secondary calculations, complicated to use. We recommend reviewing the [package README](https://github.com/dbt-labs/dbt_metrics) for more in-depth information about each of the inputs that are not covered in the table below.
-
-
-| Input | Example | Description | Required |
-| ----------- | ----------- | ----------- | -----------|
-| metric_list | `metric('some_metric')`, [`metric('some_metric')`, `metric('some_other_metric')`] | The metric(s) to be queried by the macro. If multiple metrics are required, provide them in list format. | Required |
-| grain | `'day'`, `'week'`, `'month'`, `'quarter'`, `'year'` | The time grain that the metric will be aggregated to in the returned dataset | Optional |
-| dimensions | [`'plan'`, `'country'`] | The dimensions you want the metric to be aggregated by in the returned dataset | Optional |
-| secondary_calculations | [`metrics.period_over_period( comparison_strategy="ratio", interval=1, alias="pop_1wk")`] | Performs the specified secondary calculation on the metric results. Examples include period over period calculations, rolling calculations, and period to date calculations. | Optional |
-| start_date | `'2022-01-01'` | Limits the date range of data used in the metric calculation by not querying data before this date | Optional |
-| end_date | `'2022-12-31'` | Limits the date range of data used in the metric calculation by not querying data after this date | Optional |
-| where | `plan='paying_customer'` | A sql statement, or series of sql statements, that alter the **final** CTE in the generated sql. Most often used to limit the data to specific values of dimensions provided | Optional |
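-
-As a hedged sketch combining several of these optional inputs (the metric and dimension names are hypothetical):
-
-```sql
-select *
-from {{ metrics.calculate(
-    [metric('new_customers'), metric('churned_customers')],
-    grain='month',
-    dimensions=['plan', 'country'],
-    start_date='2022-01-01',
-    end_date='2022-12-31',
-    where="plan='paying_customer'"
-) }}
-```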
-
-### Secondary Calculations
-Secondary calculations are window functions you can add to the metric calculation and perform on the primary metric or metrics.
-
-You can use them to compare values to an earlier period, calculate year-to-date sums, and return rolling averages. You can add custom secondary calculations into dbt projects - for more information on this, reference the [package README](https://github.com/dbt-labs/dbt_metrics#secondary-calculations).
-
-The supported Secondary Calculations are:
-
-#### Period over Period:
-
-The period over period secondary calculation compares the metric(s) in question across two points in time, returning either the difference or the ratio between them. The `interval` input, measured in units of the grain selected in the macro, determines the other point.
-
-| Input | Example | Description | Required |
-| -------------------------- | ----------- | ----------- | -----------|
-| `comparison_strategy` | `ratio` or `difference` | How to calculate the delta between the two periods | Yes |
-| `interval` | 1 | Integer - the number of time grains to look back | Yes |
-| `alias` | `week_over_week` | The column alias for the resulting calculation | No |
-| `metric_list` | `base_sum_metric` | List of metrics that the secondary calculation should be applied to. Default is all metrics selected | No |
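-
-As a hedged example (names assumed), a period over period calculation plugs into `metrics.calculate` through the `secondary_calculations` input; the other secondary calculations follow the same pattern:
-
-```sql
-select *
-from {{ metrics.calculate(
-    metric('new_customers'),
-    grain='week',
-    secondary_calculations=[
-        metrics.period_over_period(
-            comparison_strategy="ratio",
-            interval=1,
-            alias="wow_ratio"
-        )
-    ]
-) }}
-```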
-
-#### Period to Date:
-
-The period to date secondary calculation performs an aggregation on a defined period of time that is equal to or coarser than the grain selected. For example, you can display a `month_to_date` value alongside your weekly grained metric.
-
-| Input | Example | Description | Required |
-| -------------------------- | ----------- | ----------- | -----------|
-| `aggregate` | `max`, `average` | The aggregation to use in the window function. Options vary based on the primary aggregation and are enforced in [validate_aggregate_coherence()](https://github.com/dbt-labs/dbt_metrics/blob/main/macros/validation/validate_aggregate_coherence.sql). | Yes |
-| `period` | `"day"`, `"week"` | The time grain to aggregate to. One of [`"day"`, `"week"`, `"month"`, `"quarter"`, `"year"`]. Must be at equal or coarser (higher, more aggregated) granularity than the metric's grain (see [Time Grains](#time-grains) below). In example grain of `month`, the acceptable periods would be `month`, `quarter`, or `year`. | Yes |
-| `alias` | `month_to_date` | The column alias for the resulting calculation | No |
-| `metric_list` | `base_sum_metric` | List of metrics that the secondary calculation should be applied to. Default is all metrics selected | No |
-
-#### Rolling:
-
-
-
-The rolling secondary calculation performs an aggregation on a number of rows in the metric dataset. For example, if the user selects the `week` grain and sets a rolling secondary calculation to `4`, then the value returned will be a rolling 4-week calculation of whatever aggregation type was selected. If the `interval` input is not provided, then the rolling calculation will be unbounded on all preceding rows.
-
-| Input | Example | Description | Required |
-| -------------------------- | ----------- | ----------- | -----------|
-| `aggregate` | `max`, `average` | The aggregation to use in the window function. Options vary based on the primary aggregation and are enforced in [validate_aggregate_coherence()](https://github.com/dbt-labs/dbt_metrics/blob/main/macros/validation/validate_aggregate_coherence.sql). | Yes |
-| `interval` | 1 | Integer - the number of time grains to look back | No |
-| `alias` | `month_to_date` | The column alias for the resulting calculation | No |
-| `metric_list` | `base_sum_metric` | List of metrics that the secondary calculation should be applied to. Default is all metrics selected | No |
-
-
-
-
-The rolling secondary calculation performs an aggregation on a number of rows in the metric dataset. For example, if the user selects the `week` grain and sets a rolling secondary calculation to `4`, then the value returned will be a rolling 4-week calculation of whatever aggregation type was selected.
-
-| Input | Example | Description | Required |
-| -------------------------- | ----------- | ----------- | -----------|
-| `aggregate` | `max`, `average` | The aggregation to use in the window function. Options vary based on the primary aggregation and are enforced in [validate_aggregate_coherence()](https://github.com/dbt-labs/dbt_metrics/blob/main/macros/validation/validate_aggregate_coherence.sql). | Yes |
-| `interval` | 1 | Integer - the number of time grains to look back | Yes |
-| `alias` | `month_to_date` | The column alias for the resulting calculation | No |
-| `metric_list` | `base_sum_metric` | List of metrics that the secondary calculation should be applied to. Default is all metrics selected | No |
-
-
-
-
-#### Prior:
-The prior secondary calculation returns the value from a specified number of intervals before the row.
-
-| Input | Example | Description | Required |
-| -------------------------- | ----------- | ----------- | -----------|
-| `interval` | 1 | Integer - the number of time grains to look back | Yes |
-| `alias` | `2_weeks_prior` | The column alias for the resulting calculation | No |
-| `metric_list` | `base_sum_metric` | List of metrics that the secondary calculation should be applied to. Default is all metrics selected | No |
-
-
-
-### Developing metrics with `metrics.develop`
-
-
-
-There may be times you want to test what a metric might look like before defining it in your project. In these cases, use the `metrics.develop` macro, which allows you to provide metric(s) in a contained yml so you can simulate what a defined metric might look like in your project.
-
-```sql
-{% set my_metric_yml -%}
-{% raw %}
-
-metrics:
- # The name of the metric does not need to be develop_metric
- - name: develop_metric
- model: ref('fact_orders')
- label: Total Discount ($)
- timestamp: order_date
- time_grains: [day, week, month, quarter, year, all_time]
- calculation_method: average
- expression: discount_total
- dimensions:
- - had_discount
- - order_country
-
-{% endraw %}
-{%- endset %}
-
-select *
-from {{ metrics.develop(
- develop_yml=my_metric_yml,
- metric_list=['develop_metric'],
- grain='month'
- )
- }}
-```
-
-**Important caveat** - The metric list input for the `metrics.develop` macro takes in the metric names themselves, not the `metric('name')` statement that the `calculate` macro uses. Using the example above:
-
-- ✅ `['develop_metric']`
-- ❌ `[metric('develop_metric')]`
-
-
-
-
-
-There may be times you want to test what a metric might look like before defining it in your project. In these cases, use the `metrics.develop` macro, which allows you to provide a single metric in a contained yml so you can simulate what a defined metric might look like in your project.
-
-
-```sql
-{% set my_metric_yml -%}
-{% raw %}
-
-metrics:
- - name: develop_metric
- model: ref('fact_orders')
- label: Total Discount ($)
- timestamp: order_date
- time_grains: [day, week, month, quarter, year, all_time]
- type: average
- sql: discount_total
- dimensions:
- - had_discount
- - order_country
-
-{% endraw %}
-{%- endset %}
-
-select *
-from {{ metrics.develop(
- develop_yml=my_metric_yml,
- grain='month'
- )
- }}
-```
-
-
-
-#### Multiple/Derived Metrics with `metrics.develop`
-If you have a more complicated use case that you are interested in testing, the develop macro also supports this behavior. The only caveat is that you must include the `{% raw %}` tags around any provided metric yml that contains a derived metric. Example below:
-
-```sql
-{% set my_metric_yml -%}
-{% raw %}
-
-metrics:
- - name: develop_metric
- model: ref('fact_orders')
- label: Total Discount ($)
- timestamp: order_date
- time_grains: [day, week, month]
- calculation_method: average
- expression: discount_total
- dimensions:
- - had_discount
- - order_country
-
- - name: derived_metric
- label: Total Discount ($)
- timestamp: order_date
- time_grains: [day, week, month]
- calculation_method: derived
- expression: "{{ metric('develop_metric') }} - 1 "
- dimensions:
- - had_discount
- - order_country
-
- - name: some_other_metric_not_using
- label: Total Discount ($)
- timestamp: order_date
- time_grains: [day, week, month]
- calculation_method: derived
- expression: "{{ metric('derived_metric') }} - 1 "
- dimensions:
- - had_discount
- - order_country
-
-{% endraw %}
-{%- endset %}
-
-select *
-from {{ metrics.develop(
- develop_yml=my_metric_yml,
- metric_list=['derived_metric'],
- grain='month'
- )
- }}
-```
-
-The above example will return a dataset that contains the metric provided in the metric list (`derived_metric`) and the parent metric (`develop_metric`). It will not contain `some_other_metric_not_using` as it is not designated in the metric list or a parent of the metrics included.
-
-**Important caveat** - You _must_ wrap the `expression` property for `derived` metrics in double quotes to render it. For example, `expression: "{{ metric('develop_metric') }} - 1 "`.
-
-
-
-
-
-
-
diff --git a/website/docs/docs/build/projects.md b/website/docs/docs/build/projects.md
index c5e08177dee..45b623dc550 100644
--- a/website/docs/docs/build/projects.md
+++ b/website/docs/docs/build/projects.md
@@ -19,7 +19,7 @@ At a minimum, all a project needs is the `dbt_project.yml` project configuration
| [docs](/docs/collaborate/documentation) | Docs for your project that you can build. |
| [sources](/docs/build/sources) | A way to name and describe the data loaded into your warehouse by your Extract and Load tools. |
| [exposures](/docs/build/exposures) | A way to define and describe a downstream use of your project. |
-| [metrics](/docs/build/metrics) | A way for you to define metrics for your project. |
+| [metrics](/docs/build/build-metrics-intro) | A way for you to define metrics for your project. |
| [groups](/docs/build/groups) | Groups enable collaborative node organization in restricted collections. |
| [analysis](/docs/build/analyses) | A way to organize analytical SQL queries in your project such as the general ledger from your QuickBooks. |
diff --git a/website/docs/docs/cloud/cloud-cli-installation.md b/website/docs/docs/cloud/cloud-cli-installation.md
index 8d2196696aa..7d459cdd91d 100644
--- a/website/docs/docs/cloud/cloud-cli-installation.md
+++ b/website/docs/docs/cloud/cloud-cli-installation.md
@@ -258,16 +258,17 @@ To use these extensions, such as dbt-power-user, with the dbt Cloud CLI, you can
## FAQs
-
+
-What's the difference between the dbt Cloud CLI and dbt Core?
-The dbt Cloud CLI and dbt Core, an open-source project, are both command line tools that enable you to run dbt commands. The key distinction is the dbt Cloud CLI is tailored for dbt Cloud's infrastructure and integrates with all its features.
+The dbt Cloud CLI and dbt Core, an open-source project, are both command line tools that enable you to run dbt commands.
-
+The key distinction is the dbt Cloud CLI is tailored for dbt Cloud's infrastructure and integrates with all its features.
-
-How do I run both the dbt Cloud CLI and dbt Core?
-For compatibility, both the dbt Cloud CLI and dbt Core are invoked by running `dbt`. This can create path conflicts if your operating system selects one over the other based on your $PATH environment variable (settings).
+
+
+
+
+For compatibility, both the dbt Cloud CLI and dbt Core are invoked by running `dbt`. This can create path conflicts if your operating system selects one over the other based on your $PATH environment variable (settings).
If you have dbt Core installed locally, either:
@@ -276,10 +277,11 @@ If you have dbt Core installed locally, either:
3. (Advanced users) Install natively, but modify the $PATH environment variable to correctly point to the dbt Cloud CLI binary to use both dbt Cloud CLI and dbt Core together.
You can always uninstall the dbt Cloud CLI to return to using dbt Core.
-
-
-How to create an alias?
+
+
+
+
To create an alias for the dbt Cloud CLI:
1. Open your shell's profile configuration file. Depending on your shell and system, this could be `~/.bashrc`, `~/.bash_profile`, `~/.zshrc`, or another file.
@@ -297,9 +299,12 @@ As an example, in bash you would run: source ~/.bashrc
This alias will allow you to use the `dbt-cloud` command to invoke the dbt Cloud CLI while having dbt Core installed natively.
-
-
-Why am I receiving a `Session occupied` error?
+
+
+
+
+
+If you've run a dbt command and receive a `Session occupied` error, you can reattach to your existing session with `dbt reattach` and then press `Control-C` and choose to cancel the invocation.
-
+
+
diff --git a/website/docs/docs/collaborate/govern/model-contracts.md b/website/docs/docs/collaborate/govern/model-contracts.md
index 8e7598f8e3b..e3ea1e8c70c 100644
--- a/website/docs/docs/collaborate/govern/model-contracts.md
+++ b/website/docs/docs/collaborate/govern/model-contracts.md
@@ -91,7 +91,7 @@ When building a model with a defined contract, dbt will do two things differentl
Select the adapter-specific tab for more information on [constraint](/reference/resource-properties/constraints) support across platforms. Constraints fall into three categories based on support and platform enforcement:
- **Supported and enforced** — The model won't build if it violates the constraint.
-- **Supported and not enforced** — The platform supports specifying the type of constraint, but a model can still build even if building the model violates the constraint. This constraint exists for metadata purposes only. This is common for modern cloud data warehouses and less common for legacy databases.
+- **Supported and not enforced** — The platform supports specifying the type of constraint, but a model can still build even if building the model violates the constraint. This constraint exists for metadata purposes only. This approach is more typical in cloud data warehouses than in transactional databases, where strict rule enforcement is more common.
- **Not supported and not enforced** — You can't specify the type of constraint for the platform.
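
As a hedged illustration (generic DDL, not tied to a specific adapter), a "supported and not enforced" constraint is recorded by the platform but won't block violating rows:

```sql
-- On a platform that supports but does not enforce constraints, this primary
-- key is stored as metadata only: duplicate order_id values would still load.
create table dim_orders (
    order_id integer primary key,
    ordered_at timestamp
);
```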
diff --git a/website/docs/docs/dbt-cloud-apis/schema-discovery-job-metric.mdx b/website/docs/docs/dbt-cloud-apis/schema-discovery-job-metric.mdx
deleted file mode 100644
index 3a8a52a19cb..00000000000
--- a/website/docs/docs/dbt-cloud-apis/schema-discovery-job-metric.mdx
+++ /dev/null
@@ -1,58 +0,0 @@
----
-title: "Metric object schema"
-sidebar_label: "Metric"
-id: "discovery-schema-job-metric"
----
-
-import { NodeArgsTable, SchemaTable } from "./schema";
-
-The metric object allows you to query information about [metrics](/docs/build/metrics).
-
-### Arguments
-
-When querying for a `metric`, the following arguments are available.
-
-
-
-Below we show some illustrative example queries and outline the schema (all possible fields you can query) of the metric object.
-
-### Example query
-
-The example query below outputs information about a metric. You can also add any field from the model endpoint (the example simply selects name). This includes schema, database, uniqueId, columns, and more. For details, refer to [Model object schema](/docs/dbt-cloud-apis/discovery-schema-job-model).
-
-
-```graphql
-{
- job(id: 123) {
- metric(uniqueId: "metric.jaffle_shop.new_customers") {
- uniqueId
- name
- packageName
- tags
- label
- runId
- description
- type
- sql
- timestamp
- timeGrains
- dimensions
- meta
- resourceType
- filters {
- field
- operator
- value
- }
- model {
- name
- }
- }
- }
-}
-```
-
-### Fields
-When querying for a `metric`, the following fields are available:
-
-
diff --git a/website/docs/docs/dbt-cloud-apis/schema-discovery-job-metrics.mdx b/website/docs/docs/dbt-cloud-apis/schema-discovery-job-metrics.mdx
deleted file mode 100644
index 174dd5b676a..00000000000
--- a/website/docs/docs/dbt-cloud-apis/schema-discovery-job-metrics.mdx
+++ /dev/null
@@ -1,60 +0,0 @@
----
-title: "Metrics object schema"
-sidebar_label: "Metrics"
-id: "discovery-schema-job-metrics"
----
-
-import { NodeArgsTable, SchemaTable } from "./schema";
-
-The metrics object allows you to query information about [metrics](/docs/build/metrics).
-
-
-### Arguments
-
-When querying for `metrics`, the following arguments are available.
-
-
-
-Below we show some illustrative example queries and outline the schema (all possible fields you can query) of the metrics object.
-
-### Example query
-
-The example query returns information about all metrics for the given job.
-
-```graphql
-{
- job(id: 123) {
- metrics {
- uniqueId
- name
- packageName
- tags
- label
- runId
- description
- type
- sql
- timestamp
- timeGrains
- dimensions
- meta
- resourceType
- filters {
- field
- operator
- value
- }
- model {
- name
- }
- }
- }
-}
-```
-
-### Fields
-The metrics object can access the _same fields_ as the [metric node](/docs/dbt-cloud-apis/discovery-schema-job-metric). The difference is that the metrics object can output a list, so instead of querying fields for one specific metric, you can query those fields for all metrics in a run.
-
-When querying for `metrics`, the following fields are available:
-
-
diff --git a/website/docs/docs/dbt-cloud-apis/sl-api-overview.md b/website/docs/docs/dbt-cloud-apis/sl-api-overview.md
index 3ddbf76d152..6644d3e4b8b 100644
--- a/website/docs/docs/dbt-cloud-apis/sl-api-overview.md
+++ b/website/docs/docs/dbt-cloud-apis/sl-api-overview.md
@@ -9,10 +9,10 @@ pagination_next: "docs/dbt-cloud-apis/sl-jdbc"
-import LegacyInfo from '/snippets/_legacy-sl-callout.md';
-
-
+import DeprecationNotice from '/snippets/_sl-deprecation-notice.md';
+
+
The rapid growth of different tools in the modern data stack has helped data professionals address the diverse needs of different teams. The downside of this growth is the fragmentation of business logic across teams, tools, and workloads.
@@ -57,5 +57,3 @@ plan="dbt Cloud Team or Enterprise"
icon="dbt-bit"/>
-
-
diff --git a/website/docs/docs/dbt-cloud-apis/sl-graphql.md b/website/docs/docs/dbt-cloud-apis/sl-graphql.md
index b7d13d0d453..3555b211f4f 100644
--- a/website/docs/docs/dbt-cloud-apis/sl-graphql.md
+++ b/website/docs/docs/dbt-cloud-apis/sl-graphql.md
@@ -7,10 +7,10 @@ tags: [Semantic Layer, APIs]
-import LegacyInfo from '/snippets/_legacy-sl-callout.md';
-
-
+import DeprecationNotice from '/snippets/_sl-deprecation-notice.md';
+
+
diff --git a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md
index aba309566f8..45b012c67c6 100644
--- a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md
+++ b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md
@@ -7,10 +7,10 @@ tags: [Semantic Layer, API]
-import LegacyInfo from '/snippets/_legacy-sl-callout.md';
-
-
+import DeprecationNotice from '/snippets/_sl-deprecation-notice.md';
+
+
The dbt Semantic Layer Java Database Connectivity (JDBC) API enables users to query metrics and dimensions using the JDBC protocol, while also providing standard metadata functionality.
diff --git a/website/docs/docs/dbt-cloud-apis/sl-manifest.md b/website/docs/docs/dbt-cloud-apis/sl-manifest.md
index 6ecac495869..eefa0bfc15e 100644
--- a/website/docs/docs/dbt-cloud-apis/sl-manifest.md
+++ b/website/docs/docs/dbt-cloud-apis/sl-manifest.md
@@ -9,10 +9,10 @@ pagination_next: null
-import LegacyInfo from '/snippets/_legacy-sl-callout.md';
-
-
+import DeprecationNotice from '/snippets/_sl-deprecation-notice.md';
+
+
dbt creates an [artifact](/reference/artifacts/dbt-artifacts) file called the _Semantic Manifest_ (`semantic_manifest.json`), which MetricFlow requires to build and run metric queries properly for the dbt Semantic Layer. This artifact contains comprehensive information about your dbt Semantic Layer. It is an internal file that acts as the integration point with MetricFlow.
@@ -97,4 +97,3 @@ Top-level keys for the semantic manifest are:
- [dbt Semantic Layer API](/docs/dbt-cloud-apis/sl-api-overview)
- [About dbt artifacts](/reference/artifacts/dbt-artifacts)
-
diff --git a/website/docs/docs/dbt-versions/core-upgrade/04-upgrading-to-v1.4.md b/website/docs/docs/dbt-versions/core-upgrade/04-upgrading-to-v1.4.md
index a946bdf369b..240f0b86de3 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/04-upgrading-to-v1.4.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/04-upgrading-to-v1.4.md
@@ -48,7 +48,7 @@ For more detailed information and to ask any questions, please visit [dbt-core/d
- [**Events and structured logging**](/reference/events-logging): dbt's event system got a makeover. Expect more consistency in the availability and structure of information, backed by type-safe event schemas.
- [**Python support**](/faqs/Core/install-python-compatibility): Python 3.11 was released in October 2022. It is officially supported in dbt-core v1.4, although full support depends also on the adapter plugin for your data platform. According to the Python maintainers, "Python 3.11 is between 10-60% faster than Python 3.10." We encourage you to try [`dbt parse`](/reference/commands/parse) with dbt Core v1.4 + Python 3.11, and compare the timing with dbt Core v1.3 + Python 3.10. Let us know what you find!
-- [**Metrics**](/docs/build/metrics): `time_grain` is optional, to provide better ergonomics around metrics that aren't time-bound.
+- [**Metrics**](/docs/build/build-metrics-intro): `time_grain` is optional, to provide better ergonomics around metrics that aren't time-bound.
- **dbt-Jinja context:** The [local_md5](/reference/dbt-jinja-functions/local_md5) context method will calculate an [MD5 hash](https://en.wikipedia.org/wiki/MD5) for use _within_ dbt. (Not to be confused with SQL md5!)
- [**Exposures**](/docs/build/exposures) can now depend on `metrics`.
- [**"Tarball" packages**](/docs/build/packages#internally-hosted-tarball-URL): Some organizations have security requirements to pull resources only from internal services. To address the need to install packages from hosted environments (such as Artifactory or cloud storage buckets), it's possible to specify any accessible URL where a compressed dbt package can be downloaded.
diff --git a/website/docs/docs/dbt-versions/core-upgrade/05-upgrading-to-v1.3.md b/website/docs/docs/dbt-versions/core-upgrade/05-upgrading-to-v1.3.md
index d9d97f17dc5..5a381b16928 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/05-upgrading-to-v1.3.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/05-upgrading-to-v1.3.md
@@ -49,7 +49,7 @@ GitHub discussion with details: [dbt-labs/dbt-core#6011](https://github.com/dbt-
## New and changed documentation
- **[Python models](/docs/build/python-models)** are natively supported in `dbt-core` for the first time, on data warehouses that support Python runtimes.
-- Updates made to **[Metrics](/docs/build/metrics)** reflect their new syntax for definition, as well as additional properties that are now available.
+- Updates made to **[Metrics](/docs/build/build-metrics-intro)** reflect their new syntax for definition, as well as additional properties that are now available.
- Plus, a few related updates to **[exposure properties](/reference/exposure-properties)**: `config`, `label`, and `name` validation.
- **[Custom `node_color`](/reference/resource-configs/docs.md)** in `dbt-docs`. For the first time, you can control the colors displayed in dbt's DAG. Want bronze, silver, and gold layers? It's at your fingertips.
diff --git a/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.2.md b/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.2.md
index 72a3e0c82ad..cd75e7f411b 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.2.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.2.md
@@ -34,7 +34,7 @@ See GitHub discussion [dbt-labs/dbt-core#5468](https://github.com/dbt-labs/dbt-c
## New and changed functionality
- **[Grants](/reference/resource-configs/grants)** are natively supported in `dbt-core` for the first time. That support extends to all standard materializations, and the most popular adapters. If you already use hooks to apply simple grants, we encourage you to use built-in `grants` to configure your models, seeds, and snapshots instead. This will enable you to [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) up your duplicated or boilerplate code.
-- **[Metrics](/docs/build/metrics)** now support an `expression` type (metrics-on-metrics), as well as a `metric()` function to use when referencing metrics from within models, macros, or `expression`-type metrics. For more information on how to use expression metrics, check out the [**`dbt_metrics` package**](https://github.com/dbt-labs/dbt_metrics)
+- **[Metrics](/docs/build/build-metrics-intro)** now support an `expression` type (metrics-on-metrics), as well as a `metric()` function to use when referencing metrics from within models, macros, or `expression`-type metrics. For more information on how to use expression metrics, check out the [**`dbt_metrics` package**](https://github.com/dbt-labs/dbt_metrics)
- **[dbt-Jinja functions](/reference/dbt-jinja-functions)** now include the [`itertools` Python module](/reference/dbt-jinja-functions/modules#itertools), as well as the [set](/reference/dbt-jinja-functions/set) and [zip](/reference/dbt-jinja-functions/zip) functions.
- **[Node selection](/reference/node-selection/syntax)** includes a [file selection method](/reference/node-selection/methods#the-file-method) (`-s model.sql`), and [yaml selector](/reference/node-selection/yaml-selectors) inheritance.
- **[Global configs](/reference/global-configs/about-global-configs)** now include CLI flag and environment variable settings for [`target-path`](/reference/project-configs/target-path) and [`log-path`](/reference/project-configs/log-path), which can be used to override the values set in `dbt_project.yml`
diff --git a/website/docs/docs/dbt-versions/core-upgrade/08-upgrading-to-v1.0.md b/website/docs/docs/dbt-versions/core-upgrade/08-upgrading-to-v1.0.md
index 0460186551d..0ea66980874 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/08-upgrading-to-v1.0.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/08-upgrading-to-v1.0.md
@@ -70,7 +70,7 @@ Several under-the-hood changes from past minor versions, tagged with deprecation
## New features and changed documentation
-- Add [metrics](/docs/build/metrics), a new node type
+- Add [metrics](/docs/build/build-metrics-intro), a new node type
- [Generic tests](/best-practices/writing-custom-generic-tests) can be defined in `tests/generic` (new), in addition to `macros/` (as before)
- [Parsing](/reference/parsing): partial parsing and static parsing have been turned on by default.
- [Global configs](/reference/global-configs/about-global-configs) have been standardized. Related updates to [global CLI flags](/reference/global-cli-flags) and [`profiles.yml`](/docs/core/connect-data-platform/profiles.yml).
diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/legacy-sl.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/legacy-sl.md
new file mode 100644
index 00000000000..0eecfea623e
--- /dev/null
+++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/legacy-sl.md
@@ -0,0 +1,39 @@
+---
+title: "Deprecation: dbt Metrics and the legacy dbt Semantic Layer is now deprecated"
+description: "December 2023: For users on dbt v1.5 and lower, dbt Metrics and the legacy dbt Semantic Layer has been deprecated. Use the migration guide to migrate to and access the latest dbt Semantic Layer. "
+sidebar_label: "Deprecation: dbt Metrics and Legacy dbt Semantic Layer"
+sidebar_position: 09
+date: 2023-12-15
+---
+
+dbt Labs has deprecated dbt Metrics and the legacy dbt Semantic Layer, both supported on dbt version 1.5 or lower. This change takes effect on December 15th, 2023.
+
+This deprecation means dbt Metrics and the legacy Semantic Layer are no longer supported. We also removed the feature from the dbt Cloud user interface and documentation site.
+
+### Why this change?
+
+The [re-released dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl), powered by MetricFlow, offers enhanced flexibility, performance, and user experience, marking a significant advancement for the dbt community.
+
+### Key changes and impact
+
+- **Deprecation date** — The legacy Semantic Layer and dbt Metrics will be officially deprecated on December 15th, 2023.
+- **Replacement** — [MetricFlow](/docs/build/build-metrics-intro) replaces dbt Metrics for defining semantic logic. The `dbt_metrics` package will no longer be supported post-deprecation.
+- **New feature** — Exports replaces the functionality of materializing data with `metrics.calculate` and will be available in dbt Cloud in December or January.
+
+
+### Breaking changes and recommendations
+
+- For users on dbt version 1.6 and lower with dbt Metrics and Snowflake proxy:
+ - **Impact**: Post-deprecation, queries using the proxy _will not_ run.
+ - **Action required:** _Immediate_ migration is necessary. Refer to the [dbt Semantic Layer migration guide](/guides/sl-migration?step=1)
+
+- For users on dbt version 1.6 and lower using dbt Metrics without Snowflake proxy:
+ - **Impact**: No immediate disruption, but the package will not receive updates or support after deprecation
+ - **Recommendation**: Plan migration to the re-released Semantic Layer for compatibility with dbt version 1.6 and higher.
+
+### Engage and support
+
+- Feedback and community support — Engage with the dbt Labs team and share feedback in the dbt Community Slack using channels like [#dbt-cloud-semantic-layer](https://getdbt.slack.com/archives/C046L0VTVR6) and [#dbt-metricflow](https://getdbt.slack.com/archives/C02CCBBBR1D). Or reach out to your dbt Cloud account representative.
+- Resources for upgrading — Refer to some additional info and resources to help you upgrade your dbt version:
+ - [Upgrade version in dbt Cloud](/docs/dbt-versions/upgrade-core-in-cloud)
+ - [Version migration guides](/docs/dbt-versions/core-upgrade)
diff --git a/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md b/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md
index be02fedb230..eb05fc75649 100644
--- a/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md
+++ b/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md
@@ -9,8 +9,13 @@ meta:
api_name: dbt Semantic Layer APIs
---
-
+
+import DeprecationNotice from '/snippets/_sl-deprecation-notice.md';
+
+
+
+
There are a number of data applications that seamlessly integrate with the dbt Semantic Layer, powered by MetricFlow, from business intelligence tools to notebooks, spreadsheets, data catalogs, and more. These integrations allow you to query and unlock valuable insights from your data ecosystem.
@@ -34,25 +39,3 @@ import AvailIntegrations from '/snippets/_sl-partner-links.md';
- [dbt Semantic Layer API query syntax](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata)
- [Hex dbt Semantic Layer cells](https://learn.hex.tech/docs/logic-cell-types/transform-cells/dbt-metrics-cells) to set up SQL cells in Hex.
- [Resolve 'Failed APN'](/faqs/Troubleshooting/sl-alpn-error) error when connecting to the dbt Semantic Layer.
-
-
-
-
-
-import DeprecationNotice from '/snippets/_sl-deprecation-notice.md';
-
-
-
-A wide variety of data applications across the modern data stack natively integrate with the dbt Semantic Layer and dbt metrics — from Business Intelligence tools to notebooks, data catalogs, and more.
-
-The dbt Semantic Layer integrations are capable of querying dbt metrics, importing definitions, surfacing the underlying data in partner tools, and leveraging the dbt Server.
-
-For information on the partner integrations, their documentation, and more — refer to the [dbt Semantic Layer integrations](https://www.getdbt.com/product/semantic-layer-integrations) page.
-
-
-
-## Related docs
-
-- [About the dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl)
-
-
diff --git a/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md b/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
index 8387e934d84..ccbef5a6639 100644
--- a/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
@@ -9,9 +9,13 @@ pagination_next: "docs/use-dbt-semantic-layer/quickstart-sl"
pagination_prev: null
---
-
+
+import DeprecationNotice from '/snippets/_sl-deprecation-notice.md';
+
+
+
The dbt Semantic Layer, powered by [MetricFlow](/docs/build/about-metricflow), simplifies the process of defining and using critical business metrics, like `revenue` in the modeling layer (your dbt project). By centralizing metric definitions, data teams can ensure consistent self-service access to these metrics in downstream data tools and applications. The dbt Semantic Layer eliminates duplicate coding by allowing data teams to define metrics on top of existing models and automatically handles data joins.
@@ -62,99 +66,3 @@ plan="dbt Cloud Team or Enterprise"
icon="dbt-bit"/>
-
-
-
-
-
-import DeprecationNotice from '/snippets/_sl-deprecation-notice.md';
-
-
-
-The dbt Semantic Layer allows your data teams to centrally define essential business metrics like `revenue`, `customer`, and `churn` in the modeling layer (your dbt project) for consistent self-service within downstream data tools like BI and metadata management solutions. The dbt Semantic Layer provides the flexibility to define metrics on top of your existing models and then query those metrics and models in your analysis tools of choice.
-
-Resulting in less duplicate coding for data teams and more consistency for data consumers.
-
-The dbt Semantic Layer has these main parts:
-
-- Define your metrics in version-controlled dbt project code using [MetricFlow](/docs/build/about-metricflow)
- * dbt_metrics is now deprecated
-- Import your metric definitions using the [Discovery API](/docs/dbt-cloud-apis/discovery-api)
-- Query your metric data with the dbt Proxy Server
-- Explore and analyze dbt metrics in downstream tools
-
-### What makes the dbt Semantic Layer different?
-
-The dbt Semantic Layer reduces code duplication and inconsistency regarding your business metrics. By moving metric definitions out of the BI layer and into the modeling layer, your data teams can feel confident that different business units are working from the same metric definitions, regardless of their tool of choice. If a metric definition changes in dbt, it’s refreshed everywhere it’s invoked and creates consistency across all applications. You can also use the dbt Semantic Layer to query models and use macros.
-
-
-## Prerequisites
-
-
-
-
-
-
-## Manage metrics
-
-:::info 📌
-
-New to dbt or metrics? Check out our [quickstart guide](/guides) to build your first dbt project! If you'd like to define your first metrics, try our [Jaffle Shop](https://github.com/dbt-labs/jaffle_shop_metrics) example project.
-
-:::
-
-If you're not sure whether to define a metric in dbt or not, ask yourself the following:
-
-> *Is this something our teams consistently need to report on?*
-
-An important business metric should be:
-
-- Well-defined (the definition is agreed upon throughout the entire organization)
-- Time-bound (able to be compared across time)
-
-A great example of this is **revenue**. It can be aggregated on multiple levels (weekly, monthly, and so on) and is key for the broader business to understand.
-
-- ✅ `Monthly recurring revenue` or `Weekly active users` or `Average order value`
-- ❌ `1-off experimental metric`
-
-
-### Design and define metrics
-
-You can design and define your metrics in `.yml` files nested under a metrics key in your dbt project. For more information, refer to these docs:
-
-- [dbt metrics](docs/build/metrics) for in-depth detail on attributes, filters, how to define and query your metrics, and [dbt-metrics package](https://github.com/dbt-labs/dbt_metrics)
-- [dbt Semantic Layer quickstart](/docs/use-dbt-semantic-layer/quickstart-semantic-layer) to get started
-
-## Related questions
-
-
- How do I migrate from the legacy Semantic Layer to the new one?
-
-
-If you're using the legacy Semantic Layer, we highly recommend you upgrade your dbt version to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated migration guide for more info.
-
-
-
-
- How are you storing my data?
-
-
The dbt Semantic Layer doesn't store, cache, or log your data. On each query to the Semantic Layer, the resulting data passes through dbt Cloud servers where it's never stored, cached, or logged. The data from your data platform gets routed through dbt Cloud servers to your connecting data tool.
-
-
-
- Is the dbt Semantic Layer open source?
-
-
Some components of the dbt Semantic Layer are open source, like dbt-core, the dbt_metrics package, and the BSL-licensed dbt-server. The dbt Proxy Server (which actually compiles the dbt code) and the Discovery API are not open source.
-
-During Public Preview, the dbt Semantic Layer is open to all dbt Cloud tiers — Developer, Team, and Enterprise.
-
-
-
-
- Is there a dbt Semantic Layer discussion hub?
-
-
-Yes absolutely! Join the [dbt Slack community](https://getdbt.slack.com) and the [#dbt-cloud-semantic-layer slack channel](https://getdbt.slack.com/archives/C046L0VTVR6) for all things related to the dbt Semantic Layer.
-
diff --git a/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md b/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
index 62437f4ecd6..665260ed9f4 100644
--- a/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
@@ -8,9 +8,6 @@ meta:
api_name: dbt Semantic Layer APIs
---
-
-
-
import CreateModel from '/snippets/_sl-create-semanticmodel.md';
import DefineMetrics from '/snippets/_sl-define-metrics.md';
import ConfigMetric from '/snippets/_sl-configure-metricflow.md';
@@ -18,6 +15,14 @@ import TestQuery from '/snippets/_sl-test-and-query-metrics.md';
import ConnectQueryAPI from '/snippets/_sl-connect-and-query-api.md';
import RunProdJob from '/snippets/_sl-run-prod-job.md';
+
+
+import DeprecationNotice from '/snippets/_sl-deprecation-notice.md';
+
+
+
+
+
The dbt Semantic Layer, powered by [MetricFlow](/docs/build/about-metricflow), simplifies defining and using critical business metrics. It centralizes metric definitions, eliminates duplicate coding, and ensures consistent self-service access to metrics in downstream tools.
@@ -94,268 +99,7 @@ import SlFaqs from '/snippets/_sl-faqs.md';
## Next steps
-- [Set up dbt Semantic Layer](docs/use-dbt-semantic-layer/setup-dbt-sl)
+- [Set up dbt Semantic Layer](/docs/use-dbt-semantic-layer/setup-sl)
- [Available integrations](/docs/use-dbt-semantic-layer/avail-sl-integrations)
- Demo on [how to define and query metrics with MetricFlow](https://www.loom.com/share/60a76f6034b0441788d73638808e92ac?sid=861a94ac-25eb-4fd8-a310-58e159950f5a)
- [Billing](/docs/cloud/billing)
-
-
-
-
-
-import DeprecationNotice from '/snippets/_sl-deprecation-notice.md';
-
-
-
-To try out the features of the dbt Semantic Layer, you first need to have a dbt project set up. This quickstart guide lays out the following steps and recommends a workflow that demonstrates some of its essential features:
-
-- Install dbt metrics package
- * Note: this package will be deprecated very soon, and we highly recommend you use the new [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl?version=1.6), available in dbt v1.6 or higher.
-- Define metrics
-- Query and run metrics
-- Configure the dbt Semantic Layer
-
-## Prerequisites
-
-To use the dbt Semantic Layer, you’ll need to meet the following prerequisites:
-
-
-
-
-
-
-:::info 📌
-
-New to dbt or metrics? Check out our [quickstart guide](/guides) to build your first dbt project! If you'd like to define your first metrics, try our [Jaffle Shop](https://github.com/dbt-labs/jaffle_shop_metrics) example project.
-
-:::
-
-## Installing dbt metrics package
-
-The dbt Semantic Layer supports the calculation of metrics by using the [dbt metrics package](https://hub.getdbt.com/dbt-labs/metrics/latest/). You can install the dbt metrics package in your dbt project by copying the code block below that matches your dbt version.
-
-
-
-```yml
-packages:
- - package: dbt-labs/metrics
- version: [">=1.3.0", "<1.4.0"]
-```
-
-
-
-
-
-```yml
-packages:
- - package: dbt-labs/metrics
- version: [">=0.3.0", "<0.4.0"]
-```
-
-
-
-
-1. Paste the dbt metrics package code in your `packages.yml` file.
-2. Run the [`dbt deps` command](/reference/commands/deps) to install the package.
-3. If the command completes successfully, you have installed the dbt metrics package.
-4. If you see errors during the `dbt deps` run, review the system logs for more information on how to resolve them. Make sure you use a dbt metrics package that’s compatible with your dbt environment version.
-
-
-
-## Design and define metrics
-
-Review our helpful metrics video below, which explains what metrics are, why they're important, and how you can get started:
-
-
-
-Now that you've organized your metrics folder and files, you can define your metrics in `.yml` files nested under a `metrics` key.
-
-1. Add the metric definitions found in the [Jaffle Shop](https://github.com/dbt-labs/jaffle_shop_metrics) example to your dbt project. For example, to add an expenses metric, you can define the following metric directly in your metrics folder:
-
-
-
-```yml
-version: 2
-
-metrics:
- - name: expenses
- label: Expenses
- model: ref('orders')
- description: "The total expenses of our jaffle business"
-
- calculation_method: sum
- expression: amount / 4
-
- timestamp: order_date
- time_grains: [day, week, month, year]
-
- dimensions:
- - customer_status
- - had_credit_card_payment
- - had_coupon_payment
- - had_bank_transfer_payment
- - had_gift_card_payment
-
- filters:
- - field: status
- operator: '='
- value: "'completed'"
-```
-
-
-
-
-```yml
-version: 2
-
-metrics:
- - name: expenses
- label: Expenses
- model: ref('orders')
- description: "The total expenses of our jaffle business"
-
- type: sum
- sql: amount / 4
-
- timestamp: order_date
- time_grains: [day, week, month, year]
-
- dimensions:
- - customer_status
- - had_credit_card_payment
- - had_coupon_payment
- - had_bank_transfer_payment
- - had_gift_card_payment
-
- filters:
- - field: status
- operator: '='
- value: "'completed'"
-```
-
-
-2. Click **Save** and then **Compile** the code.
-3. Commit and merge the code changes that contain the metric definitions.
-4. If you'd like to further design and define your own metrics, review the following documentation:
-
- - [dbt metrics](/docs/build/metrics) will provide you in-depth detail on attributes, properties, filters, and how to define and query metrics.
-
-## Develop and query metrics
-
-You can dynamically develop and query metrics directly in dbt and verify their accuracy _before_ running a job in the deployment environment by using the `metrics.calculate` and `metrics.develop` macros.
-
-To understand when and how to use these macros, review [dbt metrics](/docs/build/metrics), and make sure you install the [dbt_metrics package](https://github.com/dbt-labs/dbt_metrics) before using them.
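-
-As a minimal sketch (assuming the `expenses` metric defined earlier and the dbt-core v1.3-style dbt_metrics package), a dbt-SQL query that calculates a metric might look like the following:
-
-```sql
--- A sketch only; the metric name, grain, and dimension come from the example definition above.
-select *
-from {{ metrics.calculate(
-    metric('expenses'),
-    grain='month',
-    dimensions=['customer_status']
-) }}
-```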
-
-:::info 📌
-
-**Note:** You will need access to dbt Cloud and the dbt Semantic Layer from your integrated partner tool of choice.
-
-:::
-
-## Run your production job
-
-Once you’ve defined metrics in your dbt project, you can perform a job run in your deployment environment to materialize your metrics. At this time, the dbt Semantic Layer supports deployment environments only.
-
-1. Go to **Deploy** in the navigation and select **Jobs** to re-run the job with the most recent code in the deployment environment.
-2. Your metric should appear as a red node in the dbt Cloud IDE and dbt directed acyclic graphs (DAG).
-
-
-
-
-**What’s happening internally?**
-
-- Merging the code into your main branch allows dbt Cloud to pull those changes and build the definition in the manifest produced by the run.
-- Re-running the job in the deployment environment helps materialize the models, which the metrics depend on, in the data platform. It also makes sure that the manifest is up to date.
-- The dbt Discovery API pulls in the most recent manifest and allows your integration to extract metadata from it.
-
-## Set up dbt Semantic Layer
-
-
-
-
-## Troubleshooting
-
-If you're encountering issues when defining your metrics or setting up the dbt Semantic Layer, review the following answers to common questions and problems.
-
-
- How are you storing my data?
-
-
The dbt Semantic Layer doesn't store, cache, or log your data. On each query to the Semantic Layer, the resulting data passes through dbt Cloud servers where it is never stored, cached, or logged. The data from your data platform gets routed through dbt Cloud servers to your connecting data tool.
-
-
-
- Is the dbt Semantic Layer open source?
-
-
Some components of the dbt Semantic Layer are open source, like dbt-core, the dbt_metrics package, and the BSL-licensed dbt-server. The dbt Proxy Server (which actually compiles the dbt code) and the Discovery API are not open source.
-
-During Public Preview, the dbt Semantic Layer is open to all dbt Cloud tiers (Developer, Team, and Enterprise).
-
-- dbt Core users can define metrics in their dbt Core projects and calculate them using macros from the metrics package. To use the dbt Semantic Layer integrations, you will need to have a dbt Cloud account.
-- Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse pre-populated dbt metrics in external tools, which requires access to the Discovery API.
-- Team and Enterprise accounts will be able to set up the Semantic Layer and Discovery API in the integrated partner tool to import metric definitions.
-
-
-
-
-
- The dbt_metrics_calendar_table does not exist or is not authorized?
-
-
All metrics queries are dependent on either the `dbt_metrics_calendar_table` or a custom calendar set in the user's `dbt_project.yml`. If you have not created this model in the database, these queries will fail and you'll most likely see the following error message:
-
-
`Object DATABASE.SCHEMA.DBT_METRICS_DEFAULT_CALENDAR does not exist or not authorized.`
-
-
Fix:
-
-
- - If developing locally, run `dbt run --select dbt_metrics_default_calendar`.
- - If you are using this in production, make sure that you perform a full `dbt build` or `dbt run`. If you run only specific selections in your production job, you will not create this required model.
-
-
-
-
-
- Ephemeral Models - Object does not exist or is not authorized
-
-
Metrics cannot be defined on ephemeral models because we reference the underlying table in the query that generates the metric, so the table/view needs to exist in the database. If your table/view does not exist in your database, you might see this error message:
-
-
`Object 'DATABASE.SCHEMA.METRIC_MODEL_TABLE' does not exist or not authorized.`
-
-
Fix:
-
-- You will need to materialize the model that the metric is built on as a table, view, or incremental model, as shown in the sketch below.
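-
-A minimal sketch of this fix (the model and its upstream ref are illustrative, not part of the troubleshooting steps above):
-
-```sql
-{{ config(materialized='table') }}
-
--- Materializing the model as a table (or view, or incrementally) creates it in the database so metric queries can reference it.
-select * from {{ ref('stg_orders') }}
-```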
-
-
-
-
-
-
- Mismatched Versions - metric type is ‘’
-
-
- If you’re running `dbt_metrics` ≥v0.3.2 but have `dbt-core` version ≥1.3.0, you’ll likely see these error messages:
-
-
-- Error message 1: `The metric NAME also references ... but its type is ''. Only metrics of type expression can reference other metrics.`
-- Error message 2: `Unknown aggregation style: > in macro default__gen_primary_metric_aggregate (macros/sql_gen/gen_primary_metric_aggregate.sql)`
-
-You're experiencing this error because the `type` property of the metric spec was renamed to `calculation_method` in dbt-core v1.3.0. The package reflects that new name, so it doesn't find any `type` property when outdated code runs against it.
-
-
Fix:
-
-- Upgrade your dbt_metrics package to the version range that matches your dbt-core version (for example, `>=1.3.0, <1.4.0` for dbt-core v1.3), and rename `type` to `calculation_method` in your metric definitions.
-
-
-
-
-
-
-
-## Next steps
-
-Are you ready to define your own metrics and bring consistency to data consumers? Review the following documents to understand how to structure, define, and query metrics, and set up the dbt Semantic Layer:
-
-- [dbt metrics](/docs/build/metrics) for in-depth detail on attributes, properties, filters, and how to define and query metrics
-- [dbt Server repo](https://github.com/dbt-labs/dbt-server), which is a persisted HTTP server that wraps dbt core to handle RESTful API requests for dbt operations.
-
-
diff --git a/website/docs/docs/use-dbt-semantic-layer/setup-sl.md b/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
index 33f1f43f614..1016de1830a 100644
--- a/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
@@ -6,8 +6,13 @@ sidebar_label: "Set up your Semantic Layer"
tags: [Semantic Layer]
---
-
+
+
+import DeprecationNotice from '/snippets/_sl-deprecation-notice.md';
+
+
+
With the dbt Semantic Layer, you can centrally define business metrics, reduce code duplication and inconsistency, create self-service in downstream tools, and more. Configure the dbt Semantic Layer in dbt Cloud to connect with your integrated partner tool.
@@ -35,60 +40,7 @@ import SlSetUp from '/snippets/_new-sl-setup.md';
8. You’re done 🎉! The semantic layer is now enabled for your project.
-->
-
-
-
-
-import DeprecationNotice from '/snippets/_sl-deprecation-notice.md';
-
-
-
-With the dbt Semantic Layer, you can define business metrics, reduce code duplication and inconsistency, create self-service in downstream tools, and more. Configure the dbt Semantic Layer in dbt Cloud to connect with your integrated partner tool.
-
-## Prerequisites
-
-
-
-
-## Set up dbt Semantic Layer
-
-:::tip
-If you're using the legacy Semantic Layer, dbt Labs strongly recommends that you [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud) to dbt v1.6 or higher to use the latest dbt Semantic Layer. Refer to the dedicated [migration guide](/guides/sl-migration) for more info.
-
-:::
-
- * Team and Enterprise accounts can set up the Semantic Layer and [Discovery API](/docs/dbt-cloud-apis/discovery-api) in the integrated partner tool to import metric definitions.
- * Developer accounts can query the Proxy Server using SQL but won't be able to browse dbt metrics in external tools, which requires access to the Discovery API.
-
-
-1. Log in to your dbt Cloud account.
-2. Go to **Account Settings**, and then **Service Tokens** to create a new [service account API token](/docs/dbt-cloud-apis/service-tokens). Save your token somewhere safe.
-3. Assign permissions to service account tokens depending on the integration tool you choose. Refer to the [integration partner documentation](https://www.getdbt.com/product/semantic-layer-integrations) to determine the permission sets you need to assign.
-4. Go to **Deploy** > **Environments**, and select your **Deployment** environment.
-5. Click **Settings** on the top right side of the page.
-6. Click **Edit** on the top right side of the page.
-7. Select dbt version 1.2 or higher.
-8. Toggle the Semantic Layer **On**.
-9. Copy the full proxy server URL (like `https://eagle-hqya7.proxy.cloud.getdbt.com`) to connect to your [integrated partner tool](https://www.getdbt.com/product/semantic-layer-integrations).
-10. Use the URL in the data source configuration of the integrated partner tool.
-11. Use the data platform login credentials that make sense for how the data is consumed.
-
-:::info📌
-
-It is _not_ recommended that you use your dbt Cloud credentials due to elevated permissions. Instead, you can use your specific integration tool permissions.
-
-:::
-
-12. Set up the [Discovery API](/docs/dbt-cloud-apis/discovery-api) (Team and Enterprise accounts only) in the integrated partner tool to import the metric definitions. The [integrated partner tool](https://www.getdbt.com/product/semantic-layer-integrations) will treat the dbt Server as another data source (like a data platform). This requires:
-
-- The account ID, environment ID, and job ID (which is visible in the job URL)
-- An [API service token](/docs/dbt-cloud-apis/service-tokens) with job admin and metadata permissions
-- Add the items above to the relevant fields in your integration tool
-
-
-
-
## Related docs
diff --git a/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md b/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md
index 9aea2ab42b0..459fcfc487f 100644
--- a/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md
+++ b/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md
@@ -7,8 +7,13 @@ tags: [Semantic Layer]
pagination_next: null
---
+
-
+import DeprecationNotice from '/snippets/_sl-deprecation-notice.md';
+
+
+
+
The dbt Semantic Layer allows you to define metrics and use various interfaces to query them. The Semantic Layer does the heavy lifting to find where the queried data exists in your data platform and generates the SQL to make the request (including performing joins).
@@ -46,32 +51,3 @@ The following table compares the features available in dbt Cloud and source avai
import SlFaqs from '/snippets/_sl-faqs.md';
-
-
-
-
-
-import DeprecationNotice from '/snippets/_sl-deprecation-notice.md';
-
-
-
-## Product architecture
-
-The dbt Semantic Layer product architecture includes four primary components:
-
-| Components | Information | Developer plans | Team plans | Enterprise plans | License |
-| --- | --- | :---: | :---: | :---: | --- |
-| **[dbt project](/docs/build/metrics)** | Define models and metrics in dbt Core. *Note: the dbt_metrics package is deprecated and no longer supported.* | ✅ | ✅ | ✅ | Open source, Core |
-| **[dbt Server](https://github.com/dbt-labs/dbt-server)**| A persisted HTTP server that wraps dbt core to handle RESTful API requests for dbt operations. | ✅ | ✅ | ✅ | BSL |
-| **SQL Proxy** | Reverse-proxy that accepts dbt-SQL (SQL + Jinja, so you can query models and metrics and use macros), compiles the query into pure SQL, and executes the query against the data platform. | ✅ _* Available during Public Preview only_ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise) |
-| **[Discovery API](/docs/dbt-cloud-apis/discovery-api)** | Accesses metric definitions primarily via integrations and is the source of truth for objects defined in dbt projects (like models, macros, sources, metrics). The Discovery API is updated at the end of every dbt Cloud run. | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise) |
-
-
-
-dbt Semantic Layer integrations will:
-
-- Leverage the Discovery API to fetch a list of objects and their attributes, like metrics
-- Generate a dbt-SQL statement (see the sketch below)
-- Then query the SQL proxy to evaluate the results of this statement
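-
-As a sketch of such a statement (the metric name and grain here are illustrative):
-
-```sql
-select *
-from {{ metrics.calculate(metric('revenue'), grain='week') }}
-```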
-
-
diff --git a/website/docs/docs/use-dbt-semantic-layer/tableau.md b/website/docs/docs/use-dbt-semantic-layer/tableau.md
index 0f12a75f468..689df12ec6a 100644
--- a/website/docs/docs/use-dbt-semantic-layer/tableau.md
+++ b/website/docs/docs/use-dbt-semantic-layer/tableau.md
@@ -9,7 +9,6 @@ sidebar_label: "Tableau (beta)"
The Tableau integration with the dbt Semantic Layer is a [beta feature](/docs/dbt-versions/product-lifecycles#dbt-cloud).
:::
-
The Tableau integration allows you to use worksheets to query the Semantic Layer directly and produce your dashboards with trusted data.
This integration provides a live connection to the dbt Semantic Layer through Tableau Desktop or Tableau Server.
diff --git a/website/docs/guides/dbt-models-on-databricks.md b/website/docs/guides/dbt-models-on-databricks.md
index 283ef9b4ba4..be1bb62049e 100644
--- a/website/docs/guides/dbt-models-on-databricks.md
+++ b/website/docs/guides/dbt-models-on-databricks.md
@@ -104,7 +104,7 @@ When you delete a record from a Delta table, it is a soft delete. What this mean
Now onto the final layer — the gold marts that business stakeholders typically interact with from their preferred BI tool. The considerations here will be fairly similar to the silver layer, except that these marts are more likely to handle aggregations. Further, you will likely want to be even more intentional about Z-Ordering these tables, as SLAs tend to be lower with these direct stakeholder-facing tables.
-In addition, these tables are well suited for defining [dbt metrics](/docs/build/metrics) on to ensure simplicity and consistency across your key business KPIs! Using the [dbt_metrics package](https://hub.getdbt.com/dbt-labs/metrics/latest/), you can query the metrics inside of your own dbt project even. With the upcoming Semantic Layer Integration, you can also then query the metrics in any of the partner integrated tools.
+In addition, these tables are well suited for defining [metrics](/docs/build/build-metrics-intro) on top of, ensuring simplicity and consistency across your key business KPIs! Using [MetricFlow](https://github.com/dbt-labs/metricflow), you can even query the metrics inside your own dbt project. With the upcoming Semantic Layer integration, you can also query the metrics in any of the partner integrated tools.
### Filter rows in target and/or source
diff --git a/website/docs/guides/snowflake-qs.md b/website/docs/guides/snowflake-qs.md
index abb18276b97..5b4f9e3e2be 100644
--- a/website/docs/guides/snowflake-qs.md
+++ b/website/docs/guides/snowflake-qs.md
@@ -462,7 +462,7 @@ Sources make it possible to name and describe the data loaded into your warehous
5. Execute `dbt run`.
- The results of your `dbt run` will be exactly the same as the previous step. Your `stg_cusutomers` and `stg_orders`
+ The results of your `dbt run` will be exactly the same as the previous step. Your `stg_customers` and `stg_orders`
models will still query from the same raw data source in Snowflake. By using `source`, you can
test and document your raw data and also understand the lineage of your sources.
diff --git a/website/docs/reference/node-selection/methods.md b/website/docs/reference/node-selection/methods.md
index 2ffe0ea599e..61fd380e11b 100644
--- a/website/docs/reference/node-selection/methods.md
+++ b/website/docs/reference/node-selection/methods.md
@@ -244,7 +244,7 @@ dbt ls --select "+exposure:*" --resource-type source # list all sources upstr
### The "metric" method
-The `metric` method is used to select parent resources of a specified [metric](/docs/build/metrics). Use in conjunction with the `+` operator.
+The `metric` method is used to select parent resources of a specified [metric](/docs/build/build-metrics-intro). Use in conjunction with the `+` operator.
```bash
dbt build --select "+metric:weekly_active_users" # build all resources upstream of weekly_active_users metric
@@ -367,4 +367,4 @@ dbt list --select semantic_model:* # list all semantic models
dbt list --select +semantic_model:orders # list your semantic model named "orders" and all upstream resources
```
-
\ No newline at end of file
+
diff --git a/website/docs/sql-reference/aggregate-functions/sql-count.md b/website/docs/sql-reference/aggregate-functions/sql-count.md
index 42ece4b124f..d65c670df90 100644
--- a/website/docs/sql-reference/aggregate-functions/sql-count.md
+++ b/website/docs/sql-reference/aggregate-functions/sql-count.md
@@ -60,6 +60,6 @@ Some data warehouses, such as Snowflake and Google BigQuery, additionally suppor
We most commonly see queries using COUNT to:
- Perform initial data exploration on a dataset to understand dataset volume, primary key uniqueness, distribution of column values, and more.
- Calculate the counts of key business metrics (daily orders, customers created, etc.) in your data models or BI tool.
-- Define [dbt metrics](/docs/build/metrics) to aggregate key metrics.
+- Define [metrics](/docs/build/build-metrics-intro) to aggregate key metrics.
-This isn’t an extensive list of where your team may be using COUNT throughout your development work, dbt models, and BI tool logic, but it contains some common scenarios analytics engineers face day-to-day.
\ No newline at end of file
+This isn’t an extensive list of where your team may be using COUNT throughout your development work, dbt models, and BI tool logic, but it contains some common scenarios analytics engineers face day-to-day.
diff --git a/website/docs/sql-reference/aggregate-functions/sql-sum.md b/website/docs/sql-reference/aggregate-functions/sql-sum.md
index cb9235798d2..d6ca00c2daa 100644
--- a/website/docs/sql-reference/aggregate-functions/sql-sum.md
+++ b/website/docs/sql-reference/aggregate-functions/sql-sum.md
@@ -57,8 +57,8 @@ All modern data warehouses support the ability to use the SUM function (and foll
We most commonly see queries using SUM to:
- Calculate the cumulative sum of a metric across a customer/user id using a CASE WHEN statement (ex. `sum(case when order_array is not null then 1 else 0 end) as count_orders`)
-- Create [dbt metrics](/docs/build/metrics) for key business values, such as LTV
+- Create [dbt metrics](/docs/build/build-metrics-intro) for key business values, such as LTV
- Calculate the total of a field across a dimension (ex. total session time, total time spent per ticket) that you typically use in `fct_` or `dim_` models
- Summing clicks, spend, impressions, and other key ad reporting metrics in tables from ad platforms
-This isn’t an extensive list of where your team may be using SUM throughout your development work, dbt models, and BI tool logic, but it contains some common scenarios analytics engineers face day-to-day.
\ No newline at end of file
+This isn’t an extensive list of where your team may be using SUM throughout your development work, dbt models, and BI tool logic, but it contains some common scenarios analytics engineers face day-to-day.
diff --git a/website/docs/terms/dry.md b/website/docs/terms/dry.md
index b1649278cd2..ec1c9229567 100644
--- a/website/docs/terms/dry.md
+++ b/website/docs/terms/dry.md
@@ -42,8 +42,10 @@ Most teams have essential business logic that defines the successes and failures
By writing DRY definitions for key business logic and metrics that are referenced throughout a dbt project and/or BI (business intelligence) tool, data teams can create those single, unambiguous, and authoritative representations for their essential transformations. Gone are the days of 15 different definitions and values for churn, and in are the days of standardization and DRYness.
-:::note Experimental dbt Metrics!
-dbt v1.0 currently supports the use of experimental metrics, time series aggregations over a table that support zero or one dimensions. Using [dbt Metrics](/docs/build/metrics), data teams can define metric calculations, ownerships, and definitions in a YAML file that lives within their dbt project. dbt Metrics are in their experimental stage; if you’re interesting in learning more about dbt Metrics, please make sure to join the #dbt-metrics-and-server channel in the [dbt Community Slack](https://www.getdbt.com/community/join-the-community/).
+:::important dbt Semantic Layer, powered by MetricFlow
+
+The [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl), powered by [MetricFlow](/docs/build/about-metricflow), simplifies the process of defining and using critical business metrics, like revenue, in the modeling layer (your dbt project). By centralizing metric definitions, data teams can ensure consistent self-service access to these metrics in downstream data tools and applications. The dbt Semantic Layer eliminates duplicate coding by allowing data teams to define metrics on top of existing models and automatically handles data joins.
+
:::
## Tools to help you write DRY code
diff --git a/website/sidebars.js b/website/sidebars.js
index 8d7be07d491..e9f7bbbd4b7 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -281,7 +281,6 @@ const sidebarSettings = {
"docs/build/jinja-macros",
"docs/build/sources",
"docs/build/exposures",
- "docs/build/metrics",
"docs/build/groups",
"docs/build/analyses",
],
@@ -556,8 +555,6 @@ const sidebarSettings = {
"docs/dbt-cloud-apis/discovery-schema-job",
"docs/dbt-cloud-apis/discovery-schema-job-model",
"docs/dbt-cloud-apis/discovery-schema-job-models",
- "docs/dbt-cloud-apis/discovery-schema-job-metric",
- "docs/dbt-cloud-apis/discovery-schema-job-metrics",
"docs/dbt-cloud-apis/discovery-schema-job-source",
"docs/dbt-cloud-apis/discovery-schema-job-sources",
"docs/dbt-cloud-apis/discovery-schema-job-seed",
diff --git a/website/snippets/_new-sl-setup.md b/website/snippets/_new-sl-setup.md
index 18e75c3278d..a02481db33d 100644
--- a/website/snippets/_new-sl-setup.md
+++ b/website/snippets/_new-sl-setup.md
@@ -8,7 +8,7 @@ You can set up the dbt Semantic Layer in dbt Cloud at the environment and projec
- You must have a successful run in your new environment.
:::tip
-If you're using the legacy Semantic Layer, dbt Labs strongly recommends that you [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud) to dbt version 1.6 or newer to use the latest dbt Semantic Layer. Refer to the dedicated [migration guide](/guides/sl-migration) for details.
+If you've configured the legacy Semantic Layer, it has been deprecated, and dbt Labs strongly recommends that you [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud) to dbt version 1.6 or higher to use the latest dbt Semantic Layer. Refer to the dedicated [migration guide](/guides/sl-migration) for details.
:::
1. In dbt Cloud, create a new [deployment environment](/docs/deploy/deploy-environments#create-a-deployment-environment) or use an existing environment on dbt 1.6 or higher.
diff --git a/website/snippets/_sl-deprecation-notice.md b/website/snippets/_sl-deprecation-notice.md
index 19bf19c2d90..610b1574b7d 100644
--- a/website/snippets/_sl-deprecation-notice.md
+++ b/website/snippets/_sl-deprecation-notice.md
@@ -1,7 +1,5 @@
:::info Deprecation of dbt Metrics and the legacy dbt Semantic Layer
-For users of the dbt Semantic Layer on version 1.5 or lower — Support for dbt Metrics and the legacy dbt Semantic Layer ends on December 15th, 2023. To access the latest features, migrate to the updated version using the [dbt Semantic Layer migration guide](/guides/sl-migration).
-
-
-After December 15th, dbt Labs will no longer support these deprecated features, they will be removed from the dbt Cloud user interface, and their documentation removed from the docs site.
+dbt Labs has deprecated dbt Metrics and the legacy dbt Semantic Layer, both supported on dbt version 1.5 or lower. These changes went into effect on December 15th, 2023.
+To migrate and access [MetricFlow](/docs/build/build-metrics-intro) or the re-released dbt Semantic Layer, use the [dbt Semantic Layer migration guide](/guides/sl-migration) and [upgrade your version](/docs/dbt-versions/upgrade-core-in-cloud) in dbt Cloud.
:::
diff --git a/website/snippets/_sl-faqs.md b/website/snippets/_sl-faqs.md
index def8f3837f6..092929e1066 100644
--- a/website/snippets/_sl-faqs.md
+++ b/website/snippets/_sl-faqs.md
@@ -1,33 +1,57 @@
-- **Is the dbt Semantic Layer open source?**
- - The dbt Semantic Layer is proprietary; however, some components of the dbt Semantic Layer are open source, such as dbt-core and MetricFlow.
+
- dbt Cloud Developer or dbt Core users can define metrics in their project, including a local dbt Core project, using the dbt Cloud IDE, dbt Cloud CLI, or dbt Core CLI. However, to experience the universal dbt Semantic Layer and access those metrics using the API or downstream tools, users must be on a dbt Cloud [Team or Enterprise](https://www.getdbt.com/pricing/) plan.
+The dbt Semantic Layer is proprietary; however, some components of the dbt Semantic Layer are open source, such as dbt-core and MetricFlow.
- Refer to [Billing](https://docs.getdbt.com/docs/cloud/billing) for more information.
+dbt Cloud Developer or dbt Core users can define metrics in their project, including a local dbt Core project, using the dbt Cloud IDE, dbt Cloud CLI, or dbt Core CLI. However, to experience the universal dbt Semantic Layer and access those metrics using the API or downstream tools, users must be on a dbt Cloud [Team or Enterprise](https://www.getdbt.com/pricing/) plan.
-- **How can open-source users use the dbt Semantic Layer?**
- - The dbt Semantic Layer requires the use of the dbt Cloud-provided service for coordinating query requests. Open source users who don’t use dbt Cloud can currently work around the lack of a service layer. They can do this by running `mf query --explain` in the command line. This command generates SQL code, which they can then use in their current systems for running and managing queries.
+Refer to [Billing](https://docs.getdbt.com/docs/cloud/billing) for more information.
+
+
+
+
+
+The dbt Semantic Layer requires the use of the dbt Cloud-provided service for coordinating query requests. Open source users who don’t use dbt Cloud can currently work around the lack of a service layer. They can do this by running `mf query --explain` in the command line. This command generates SQL code, which they can then use in their current systems for running and managing queries.
- As we refine MetricFlow’s API layers, some users may find it easier to set up their own custom service layers for managing query requests. This is not currently recommended, as the API boundaries around MetricFlow are not sufficiently well-defined for broad-based community use.
+As we refine MetricFlow’s API layers, some users may find it easier to set up their own custom service layers for managing query requests. This is not currently recommended, as the API boundaries around MetricFlow are not sufficiently well-defined for broad-based community use.
+
+
-- **Why is my query limited to 100 rows in the dbt Cloud CLI?**
-- The default `limit` for queries issued from the dbt Cloud CLI is 100 rows. We set this default to prevent returning unnecessarily large data sets, as the dbt Cloud CLI is typically used to query the dbt Semantic Layer during development, not for production reporting or to access large data sets. For most workflows, you only need to return a subset of the data.
+
+
+The default `limit` for queries issued from the dbt Cloud CLI is 100 rows. We set this default to prevent returning unnecessarily large data sets, as the dbt Cloud CLI is typically used to query the dbt Semantic Layer during development, not for production reporting or to access large data sets. For most workflows, you only need to return a subset of the data.
- However, you can change this limit if needed by setting the `--limit` option in your query. For example, to return 1000 rows, you can run `dbt sl list metrics --limit 1000`.
+However, you can change this limit if needed by setting the `--limit` option in your query. For example, to return 1000 rows, you can run `dbt sl list metrics --limit 1000`.
+
+
-- **Can I reference MetricFlow queries inside dbt models?**
- - dbt relies on Jinja macros to compile SQL, while MetricFlow is Python-based and does direct SQL rendering targeting a specific dialect. MetricFlow does not support pass-through rendering of Jinja macros, so we can’t easily reference MetricFlow queries inside of dbt models.
+
+
+dbt relies on Jinja macros to compile SQL, while MetricFlow is Python-based and does direct SQL rendering targeting a specific dialect. MetricFlow does not support pass-through rendering of Jinja macros, so we can’t easily reference MetricFlow queries inside of dbt models.
- Beyond the technical challenges that could be overcome, we see Metrics as the leaf node of your DAG, and a place for users to consume metrics. If you need to do additional transformation on top of a metric, this is usually a sign that there is more modeling that needs to be done.
+Beyond the technical challenges that could be overcome, we see Metrics as the leaf node of your DAG, and a place for users to consume metrics. If you need to do additional transformation on top of a metric, this is usually a sign that there is more modeling that needs to be done.
+
+
+
+
+
+You can use the upcoming feature, Exports, which will allow you to create a [pre-defined](/docs/build/saved-queries) MetricFlow query as a table in your data platform. This feature will be available to dbt Cloud customers only. This is because MetricFlow is primarily for query rendering while dispatching the relevant query and performing any DDL is the domain of the service layer on top of MetricFlow.
+
+
+
+
+
+If you're using the legacy Semantic Layer, we highly recommend you [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud) to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated [migration guide](/guides/sl-migration) for more info.
+
+
+
+
+
+User data passes through the Semantic Layer on its way back from the warehouse. dbt Labs ensures security by authenticating through the customer's data warehouse. Currently, we don't cache data for the long term, but it might temporarily stay in the system for up to 10 minutes, usually less. In the future, we'll introduce a caching feature that allows us to cache data on our infrastructure for up to 24 hours.
-- **Can I create tables in my data platform using MetricFlow?**
- - You can use the upcoming feature, Exports, which will allow you to create a [pre-defined](/docs/build/saved-queries) MetricFlow query as a table in your data platform. This feature will be available to dbt Cloud customers only. This is because MetricFlow is primarily for query rendering while dispatching the relevant query and performing any DDL is the domain of the service layer on top of MetricFlow.
+
-- **How do I migrate from the legacy Semantic Layer to the new one?**
- - If you're using the legacy Semantic Layer, we highly recommend you [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud) to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated [migration guide](/guides/sl-migration) for more info.
+
-- **How are you storing my data?**
- - User data passes through the Semantic Layer on its way back from the warehouse. dbt Labs ensures security by authenticating through the customer's data warehouse. Currently, we don't cache data for the long term, but it might temporarily stay in the system for up to 10 minutes, usually less. In the future, we'll introduce a caching feature that allows us to cache data on our infrastructure for up to 24 hours.
+Yes absolutely! Join the [dbt Slack community](https://getdbt.slack.com) and [#dbt-cloud-semantic-layer slack channel](https://getdbt.slack.com/archives/C046L0VTVR6) for all things related to the dbt Semantic Layer.
-- **Is there a dbt Semantic Layer discussion hub?**
- - Yes absolutely! Join the [dbt Slack community](https://getdbt.slack.com) and [#dbt-cloud-semantic-layer slack channel](https://getdbt.slack.com/archives/C046L0VTVR6) for all things related to the dbt Semantic Layer.
+
diff --git a/website/snippets/_sl-test-and-query-metrics.md b/website/snippets/_sl-test-and-query-metrics.md
index 2e9490f089d..b0db4bb520d 100644
--- a/website/snippets/_sl-test-and-query-metrics.md
+++ b/website/snippets/_sl-test-and-query-metrics.md
@@ -65,4 +65,3 @@ To streamline your metric querying process, you can connect to the [dbt Semantic
-
diff --git a/website/snippets/_v2-sl-prerequisites.md b/website/snippets/_v2-sl-prerequisites.md
index eb8b5fc27e4..99d8a945db6 100644
--- a/website/snippets/_v2-sl-prerequisites.md
+++ b/website/snippets/_v2-sl-prerequisites.md
@@ -1,6 +1,3 @@
-
-
-
- Have a dbt Cloud Team or Enterprise account. Suitable for both Multi-tenant and Single-tenant deployment.
- Note: Single-tenant accounts should contact their account representative for necessary setup and enablement.
- Have both your production and development environments running [dbt version 1.6 or higher](/docs/dbt-versions/upgrade-core-in-cloud).
@@ -11,30 +8,3 @@
- dbt Core or Developer accounts can define metrics but won't be able to dynamically query them.
- Understand [MetricFlow's](/docs/build/about-metricflow) key concepts, which powers the latest dbt Semantic Layer.
- Note that the dbt Semantic Layer doesn't yet support SSH tunneling for [Postgres and Redshift](/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb) connections, [PrivateLink](/docs/cloud/secure/about-privatelink), or [Single sign-on (SSO)](/docs/cloud/manage-access/sso-overview).
-
-
-
-
-
-
-- Have a multi-tenant dbt Cloud instance, hosted in North America
-- Have both your production and development environments running dbt version 1.3 or higher
-- Use Snowflake data platform
-- Install the dbt metrics package version `>=1.3.0, <1.4.0` in your dbt project
- * **Note** — After installing the dbt metrics package and updating the `packages.yml` file, make sure you run at least one model.
-- Set up the Discovery API in the integrated tool to import metric definitions
- * Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse pre-populated dbt metrics in external tools, which requires access to the Discovery API
-
-
-
-
-
-- Have a multi-tenant dbt Cloud instance, hosted in North America
-- Have both your production and development environments running dbt version 1.2
-- Use Snowflake data platform
-- Install the dbt metrics package version `>=0.3.0, <0.4.0` in your dbt project
- * **Note** — After installing the dbt metrics package and updating the `packages.yml` file, make sure you run at least one model.
-- Set up the Discovery API in the integrated tool to import metric definitions
- * Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse pre-populated dbt metrics in external tools, which requires access to the Discovery API
-
-
diff --git a/website/snippets/sl-prerequisites.md b/website/snippets/sl-prerequisites.md
deleted file mode 100644
index 0c100c299b0..00000000000
--- a/website/snippets/sl-prerequisites.md
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-- Have a multi-tenant dbt Cloud instance, hosted in North America
-- Have both your production and development environments running dbt version 1.3 or higher
-- Use Snowflake data platform
-- Install the dbt metrics package version `>=1.3.0, <1.4.0` in your dbt project
- * **Note** — After installing the dbt metrics package and updating the `packages.yml` file, make sure you run at least one model.
-- Set up the Discovery API in the integrated tool to import metric definitions
- * Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse pre-populated dbt metrics in external tools, which requires access to the Discovery API
-- Recommended - Review the dbt metrics page
-
-
-
-
-
-- Have a multi-tenant dbt Cloud instance, hosted in North America
-- Have both your production and development environments running dbt version 1.3 or higher
-- Use Snowflake data platform
-- Install the dbt metrics package version `>=1.3.0, <1.4.0` in your dbt project
- * **Note** — After installing the dbt metrics package and updating the `packages.yml` file, make sure you run at least one model.
-- Set up the Discovery API in the integrated tool to import metric definitions
- * Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse pre-populated dbt metrics in external tools, which requires access to the Discovery API
-- Recommended - Review the dbt metrics page
-
-
-
-
-
-- Have a multi-tenant dbt Cloud instance, hosted in North America
-- Have both your production and development environments running dbt version 1.2
-- Use Snowflake data platform
-- Install the dbt metrics package version `>=0.3.0, <0.4.0` in your dbt project
- * **Note** — After installing the dbt metrics package and updating the `packages.yml` file, make sure you run at least one model.
-- Set up the Discovery API in the integrated tool to import metric definitions
- * Developer accounts will be able to query the Proxy Server using SQL, but will not be able to browse pre-populated dbt metrics in external tools, which requires access to the Discovery API
-- Recommended - Review the dbt metrics page
-
-
diff --git a/website/snippets/sl-public-preview-banner.md b/website/snippets/sl-public-preview-banner.md
deleted file mode 100644
index e97527d356d..00000000000
--- a/website/snippets/sl-public-preview-banner.md
+++ /dev/null
@@ -1,7 +0,0 @@
-:::info 📌
-
-The dbt Semantic Layer is currently available in Public Preview for multi-tenant dbt Cloud accounts hosted in North America. If you log in via https://cloud.getdbt.com/, you can access the Semantic Layer. If you log in with [another URL](/docs/cloud/about-cloud/regions-ip-addresses), the dbt Semantic Layer will be available in the future.
-
-For more info, review the [Prerequisites](/docs/use-dbt-semantic-layer/dbt-semantic-layer#prerequisites), [Public Preview](/docs/use-dbt-semantic-layer/quickstart-semantic-layer#public-preview), and [Product architecture](/docs/use-dbt-semantic-layer/dbt-semantic-layer#product-architecture) sections.
-
-:::
diff --git a/website/snippets/sl-set-up-steps.md b/website/snippets/sl-set-up-steps.md
deleted file mode 100644
index 295253fb994..00000000000
--- a/website/snippets/sl-set-up-steps.md
+++ /dev/null
@@ -1,30 +0,0 @@
-
-Before you continue with the following steps, you **must** have a multi-tenant dbt Cloud account hosted in North America.
- * Team and Enterprise accounts can set up the Semantic Layer and [Discovery API](/docs/dbt-cloud-apis/discovery-api) in the integrated partner tool to import metric definitions.
- * Developer accounts can query the Proxy Server using SQL but won't be able to browse dbt metrics in external tools, which requires access to the Discovery API.
-
-You can set up the dbt Semantic Layer in dbt Cloud at the environment level by following these steps:
-
-1. Log in to your dbt Cloud account
-2. Go to **Account Settings**, and then **Service Tokens** to create a new [service account API token](/docs/dbt-cloud-apis/service-tokens). Save your token somewhere safe.
-3. Assign permissions to service account tokens depending on the integration tool you choose. You can review the [integration partner documentation](https://www.getdbt.com/product/semantic-layer-integrations) to determine the permission sets you need to assign.
-4. Go to **Deploy** and then **Environments**, and select your **Deployment** environment.
-5. Click on **Settings** on the top right side of the page.
-6. Click **Edit** on the top right side of the page.
-7. Select dbt version 1.2 or higher.
-8. Toggle the Semantic Layer **On**.
-9. Copy the full proxy server URL (like `https://eagle-hqya7.proxy.cloud.getdbt.com`) to connect to your [integrated partner tool](https://www.getdbt.com/product/semantic-layer-integrations).
-10. Use the URL in the data source configuration of the integrated partner tool.
-11. Use the data platform login credentials that make sense for how the data is consumed.
-
-:::info📌
-
-Note - It is _not_ recommended that you use your dbt Cloud credentials due to elevated permissions. Instead, you can use your specific integration tool permissions.
-
-:::
-
-12. Set up the [Discovery API](/docs/dbt-cloud-apis/discovery-api) (Team and Enterprise accounts only) in the integrated partner tool to import the metric definitions. The [integrated partner tool](https://www.getdbt.com/product/semantic-layer-integrations) will treat the dbt Server as another data source (like a data platform). This requires:
-
-- The account ID, environment ID, and job ID (visible in the job URL)
-- An [API service token](/docs/dbt-cloud-apis/service-tokens) with job admin and metadata permissions
-- Add the items above to the relevant fields in your integration tool
diff --git a/website/src/components/detailsToggle/index.js b/website/src/components/detailsToggle/index.js
new file mode 100644
index 00000000000..90464328f8b
--- /dev/null
+++ b/website/src/components/detailsToggle/index.js
@@ -0,0 +1,57 @@
+import React, { useState, useEffect } from 'react';
+import styles from './styles.module.css';
+
+function detailsToggle({ children, alt_header = null }) {
+ const [isOn, setOn] = useState(false);
+ const [hoverActive, setHoverActive] = useState(true);
+ const [hoverTimeout, setHoverTimeout] = useState(null);
+
+  const handleToggleClick = () => {
+    setOn((current) => !current); // toggle the body open or closed on click
+    setHoverActive(isOn); // re-enable hover when closing; disable it while the toggle is click-opened
+  };
+
+ const handleMouseEnter = () => {
+ if (!hoverActive) return; // Ignore hover if disabled
+ const timeout = setTimeout(() => {
+ setOn(true);
+ }, 500); // 500ms delay
+ setHoverTimeout(timeout);
+ };
+
+  const handleMouseLeave = () => {
+    if (hoverActive && !isOn) {
+      clearTimeout(hoverTimeout); // cancel a pending hover-open if the cursor leaves before the delay fires
+      setOn(false);
+    }
+  };
+
+ useEffect(() => {
+ return () => clearTimeout(hoverTimeout);
+ }, [hoverTimeout]);
+
+  return (
+    <div onMouseEnter={handleMouseEnter} onMouseLeave={handleMouseLeave}>
+      {/* Clickable header row: toggle arrow, header text, and hover hint */}
+      <span className={styles.link} onClick={handleToggleClick}>
+        <span className={`${styles.toggle} ${isOn ? styles.toggleUpsideDown : ''}`}></span>
+        &nbsp;{alt_header}
+        {/* Visual disclaimer */}
+        <small className={styles.disclaimer}>Hover to view</small>
+      </span>
+      {/* Collapsible body, shown only while the toggle is on */}
+      <div style={{ display: isOn ? 'block' : 'none' }} className={styles.body}>
+        {children}
+      </div>
+    </div>
+  );
+}
+
+export default detailsToggle;
diff --git a/website/src/components/detailsToggle/styles.module.css b/website/src/components/detailsToggle/styles.module.css
new file mode 100644
index 00000000000..446d3197128
--- /dev/null
+++ b/website/src/components/detailsToggle/styles.module.css
@@ -0,0 +1,51 @@
+:local(.link) {
+ color: var(--ifm-link-color);
+ transition: background-color 0.3s; /* Smooth transition for background color */
+}
+
+:local(.link:hover), :local(.link:focus) {
+ text-decoration: underline;
+ cursor: pointer;
+}
+
+:local(.disclaimer) {
+ font-size: 0.8em;
+ color: #666;
+ margin-left: 10px; /* Adjust as needed */
+}
+
+:local(.toggle) {
+ background-image: var(--ifm-menu-link-sublist-icon);
+ background-size: 1.25rem 1.25rem;
+ background-position: center;
+ content: ' ';
+ display: inline-block;
+ height: 1.25rem;
+ width: 1.25rem;
+ vertical-align: middle;
+ transition: transform 0.3s; /* Smooth transition for toggle icon */
+}
+
+:local(.toggleUpsideDown) {
+  transform: rotateX(180deg);
+}
+
+/* hack for unswizzled FAQ arrows */
+:local(html[data-theme='dark'] .toggle) {
+ filter: invert(1);
+}
+
+:local(.body) {
+ margin-left: 2em;
+ margin-bottom: 10px;
+ padding: 20px;
+ background-color: #e3f8f8;
+}
+
+:local(html[data-theme='dark'] .body) {
+ background: #333b47;
+}
+
+:local(.body > p:last-child) {
+ margin-bottom: 0px;
+}
diff --git a/website/src/theme/MDXComponents/index.js b/website/src/theme/MDXComponents/index.js
index dead3375489..2a412e198f1 100644
--- a/website/src/theme/MDXComponents/index.js
+++ b/website/src/theme/MDXComponents/index.js
@@ -43,6 +43,7 @@ import CommunitySpotlightList from '@site/src/components/communitySpotlightList'
import dbtEditor from '@site/src/components/dbt-editor';
import Icon from '@site/src/components/icon';
import Lifecycle from '@site/src/components/lifeCycle';
+import detailsToggle from '@site/src/components/detailsToggle';
const MDXComponents = {
head: MDXHead,
@@ -92,5 +93,6 @@ const MDXComponents = {
dbtEditor: dbtEditor,
Icon: Icon,
Lifecycle: Lifecycle,
+ detailsToggle: detailsToggle,
};
export default MDXComponents;
diff --git a/website/vercel.json b/website/vercel.json
index 5cdc2656948..981738b21d1 100644
--- a/website/vercel.json
+++ b/website/vercel.json
@@ -2,6 +2,21 @@
"cleanUrls": true,
"trailingSlash": false,
"redirects": [
+ {
+ "source": "/docs/dbt-cloud-apis/discovery-schema-job-metric",
+ "destination": "/docs/dbt-cloud-apis/discovery-schema-environment",
+ "permanent": true
+ },
+ {
+ "source": "/docs/dbt-cloud-apis/discovery-schema-job-metrics",
+ "destination": "/docs/dbt-cloud-apis/discovery-schema-environment",
+ "permanent": true
+ },
+ {
+ "source": "/docs/build/metrics",
+ "destination": "/docs/build/build-metrics-intro",
+ "permanent": true
+ },
{
"source": "/reference/test-configs",
"destination": "/reference/data-test-configs",