-Note the use of the `window` function to select the `first` value. For `last` and `avg`, we would replace the `first_value()` function in the generated SQL with `last_value()` and `average` respectively.
+Note the use of a window function to select the `first` value. For `last` and `average`, we would replace the `first_value()` function in the generated SQL with `last_value()` and `avg()` respectively.
```sql
-- re-aggregate metric via the group by
@@ -328,7 +328,7 @@ metrics:
measure: order_total
cumulative_type_params:
grain_to_date: month # Resets at the beginning of each month
- period_agg: first # Optional. Defaults to first. Accepted values: first|last|avg
+ period_agg: first # Optional. Defaults to first. Accepted values: first|last|average
```
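+
+For illustration only (not the exact SQL MetricFlow generates; the table and column names are hypothetical), the `first` pattern sketched above amounts to:
+
+```sql
+-- pick the first daily value in each month, then re-aggregate via the group by
+select
+  date_month,
+  max(revenue_first_of_month) as revenue
+from (
+  select
+    date_month,
+    first_value(revenue) over (
+      partition by date_month
+      order by date_day
+    ) as revenue_first_of_month
+  from daily_revenue
+) subq
+group by date_month
+```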
diff --git a/website/docs/docs/build/packages.md b/website/docs/docs/build/packages.md
index 9576cb46ecf..7871e70239e 100644
--- a/website/docs/docs/build/packages.md
+++ b/website/docs/docs/build/packages.md
@@ -2,7 +2,7 @@
title: "Packages"
id: "packages"
description: "Discover how dbt packages help modularize code and transform data efficiently. Learn about git packages, hub packages, private packages, and advanced package configurations."
-keywords: [dbt packages, dbt package, dbt private package, dbt data transformation, dbt libraries, how to add a package dbt project]
+keywords: [dbt package, private package, dbt private package, dbt data transformation, dbt clone, add dbt package]
---
@@ -190,11 +190,11 @@ packages:
If you're using dbt Cloud, the SSH key method will not work, but you can use the [HTTPS Git Token Method](https://docs.getdbt.com/docs/build/packages#git-token-method).
-#### Git Token Method
+#### Git token method
This method allows the user to clone via HTTPS by passing in a git token via an environment variable. Be careful of the expiration date of any token you use, as an expired token could cause a scheduled run to fail. Additionally, user tokens can create a challenge if the user ever loses access to a specific repo.
-:::info dbt Cloud Usage
+:::info dbt Cloud usage
If you are using dbt Cloud, you must adhere to the naming conventions for environment variables. Environment variables in dbt Cloud must be prefixed with either `DBT_` or `DBT_ENV_SECRET_`. Environment variable keys are uppercase and case sensitive. When referencing `{{env_var('DBT_KEY')}}` in your project's code, the key must match exactly the variable defined in dbt Cloud's UI.
:::
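+
+For example, a minimal sketch of passing a git token through an environment variable in `packages.yml` (the variable name and repo URL are placeholders):
+
+```yaml
+packages:
+  # The token is read from an environment variable at parse time,
+  # so it never needs to be committed to source control
+  - git: "https://{{env_var('DBT_ENV_SECRET_GIT_CREDENTIAL')}}@github.com/example-org/private-repo.git"
+```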
diff --git a/website/docs/docs/cloud/billing.md b/website/docs/docs/cloud/billing.md
index 5d2edad2c39..ad0834c6c98 100644
--- a/website/docs/docs/cloud/billing.md
+++ b/website/docs/docs/cloud/billing.md
@@ -89,13 +89,17 @@ Viewing usage in the product is restricted to specific roles:
For an account-level view of usage, if you have access to the **Billing** and **Usage** pages, you can see an estimate of the usage for the month. In the Billing page of the **Account Settings**, you can see how your account tracks against its usage. You can also see which projects are building the most models.
-
+
-As a Team and Developer plan user, you can see how the account is tracking against the included models built. As an Enterprise plan user, you can see how much you have drawn down from your annual commit and how much remains.
+As a Team and Developer plan user, you can see how the account is tracking against the included models built. As an Enterprise plan user, you can see how much you have drawn down from your annual commit and how much remains.
-On each Project Home page, any user with access to that project can see how many models are built each month. From there, additional details on top jobs by models built can be found on each Environment page.
+On each **Project Home** page, any user with access to that project can see how many models are built each month. From there, additional details on top jobs by models built can be found on each **Environment** page.
-In addition, you can look at the Job Details page's Insights tab to show how many models are being built per month for that particular job and which models are taking the longest to build.
+
+
+In addition, you can look at the **Job Details** page's **Insights** tab to show how many models are being built per month for that particular job and which models are taking the longest to build.
+
+
Usage information is available to customers on consumption-based plans, and some usage visualizations might not be visible to customers on legacy plans. Any usage data shown in dbt Cloud is only an estimate of your usage, and there could be a delay in showing usage data in the product. Your final usage for the month will be visible on your monthly statements (statements applicable to Team and Enterprise plans).
diff --git a/website/docs/docs/cloud/connect-data-platform/about-connections.md b/website/docs/docs/cloud/connect-data-platform/about-connections.md
index 1a2b2a7c5cf..87a5a61062f 100644
--- a/website/docs/docs/cloud/connect-data-platform/about-connections.md
+++ b/website/docs/docs/cloud/connect-data-platform/about-connections.md
@@ -10,7 +10,7 @@ dbt Cloud can connect with a variety of data platform providers including:
- [AlloyDB](/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb)
- [Amazon Redshift](/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb)
- [Apache Spark](/docs/cloud/connect-data-platform/connect-apache-spark)
-- [Azure Synapse Analytics (Preview)](/docs/cloud/connect-data-platform/connect-azure-synapse-analytics)
+- [Azure Synapse Analytics](/docs/cloud/connect-data-platform/connect-azure-synapse-analytics)
- [Databricks](/docs/cloud/connect-data-platform/connect-databricks)
- [Google BigQuery](/docs/cloud/connect-data-platform/connect-bigquery)
- [Microsoft Fabric](/docs/cloud/connect-data-platform/connect-microsoft-fabric)
diff --git a/website/docs/docs/cloud/connect-data-platform/connect-azure-synapse-analytics.md b/website/docs/docs/cloud/connect-data-platform/connect-azure-synapse-analytics.md
index d7b2d3ae47d..d19aa40df4d 100644
--- a/website/docs/docs/cloud/connect-data-platform/connect-azure-synapse-analytics.md
+++ b/website/docs/docs/cloud/connect-data-platform/connect-azure-synapse-analytics.md
@@ -4,7 +4,7 @@ description: "Configure Azure Synapse Analytics connection."
sidebar_label: "Connect Azure Synapse Analytics"
---
-# Connect Azure Synapse Analytics (Preview)
+# Connect Azure Synapse Analytics
## Supported authentication methods
The supported authentication methods are:
diff --git a/website/docs/docs/cloud/git/setup-azure.md b/website/docs/docs/cloud/git/setup-azure.md
index 3438a42772a..564bee2239a 100644
--- a/website/docs/docs/cloud/git/setup-azure.md
+++ b/website/docs/docs/cloud/git/setup-azure.md
@@ -68,7 +68,7 @@ A Microsoft Entra ID admin needs to add another redirect URI to your Entra ID ap
2. Select the link next to **Redirect URIs**
3. Click **Add URI** and add the URI, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan:
-`https://YOUR_ACCESS_URL/complete/microsoft_entra_id_service_user`
+`https://YOUR_ACCESS_URL/complete/azure_active_directory_service_user`
4. Click **Save**.
@@ -374,7 +374,12 @@ A dbt Cloud account admin with access to the service user's Azure DevOps account
Once connected, dbt Cloud displays the email address of the service user so you know which user's permissions are enabling headless actions in deployment environments. To change which account is connected, disconnect the profile in dbt Cloud, sign into the alternative Azure DevOps service account, and re-link the account in dbt Cloud.
:::info Personal Access Tokens (PATs)
-dbt Cloud generates temporary access tokens called Full-scoped PATs for service users to access APIs related to their dbt Cloud project. These tokens are only valid for a short period of 5 minutes and become invalid after they are used to make an API call.
+dbt Cloud leverages the service user to generate temporary access tokens called [PATs](https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?toc=%2Fazure%2Fdevops%2Fmarketplace-extensibility%2Ftoc.json&view=azure-devops&tabs=Windows).
-The Azure DevOps Administrator can limit the creation of full-scoped PATs by enabling a policy that restricts users to a custom-defined set of scopes. By default, this policy is set to **off**, but enabling it will cause project setup to fail with an error. After disabling this policy and successfully setting up your project, if you wish to use finer-scoped permissions, some features such as webhooks for CI may be lost, so we recommend the service user has full-scoped PATs. To exclude the dbt Cloud service user from the global PAT policy, add them to the allow list as part of your security policy.
+These tokens are limited in scope, are only valid for 5 minutes, and become invalid after a single API call.
+
+These tokens are limited to the following [scopes](https://learn.microsoft.com/en-us/azure/devops/integrate/get-started/authentication/oauth?view=azure-devops):
+- `vso.code_full`: Grants full access to source code and version control metadata (commits, branches, and so on). Also grants the ability to create and manage code repositories, create and manage pull requests and code reviews, and receive notifications about version control events with service hooks. Also includes limited support for Client OM APIs.
+- `vso.project`: Grants the ability to read projects and teams.
+- `vso.build_execute`: Grants the ability to access build artifacts, including build results, definitions, and requests, and the ability to queue a build, update build properties, and the ability to receive notifications about build events with service hooks.
:::
diff --git a/website/docs/docs/collaborate/govern/project-dependencies.md b/website/docs/docs/collaborate/govern/project-dependencies.md
index 83a2b966ee1..b8c5aa5a74f 100644
--- a/website/docs/docs/collaborate/govern/project-dependencies.md
+++ b/website/docs/docs/collaborate/govern/project-dependencies.md
@@ -10,15 +10,21 @@ keyword: dbt mesh, project dependencies, ref, cross project ref, project depende
For a long time, dbt has supported code reuse and extension by installing other projects as [packages](/docs/build/packages). When you install another project as a package, you are pulling in its full source code, and adding it to your own. This enables you to call macros and run models defined in that other project.
-While this is a great way to reuse code, share utility macros, and establish a starting point for common transformations, it's not a great way to enable collaboration across teams and at scale, especially at larger organizations.
+While this is a great way to reuse code, share utility macros, and establish a starting point for common transformations, it's not a great way to enable collaboration across teams and at scale, especially in larger organizations.
This year, dbt Labs is introducing an expanded notion of `dependencies` across multiple dbt projects:
- **Packages** — Familiar and pre-existing type of dependency. You take this dependency by installing the package's full source code (like a software library).
- **Projects** — A _new_ way to take a dependency on another project. Using a metadata service that runs behind the scenes, dbt Cloud resolves references on-the-fly to public models defined in other projects. You don't need to parse or run those upstream models yourself. Instead, you treat your dependency on those models as an API that returns a dataset. The maintainer of the public model is responsible for guaranteeing its quality and stability.
## Prerequisites
-- Available in [dbt Cloud Enterprise](https://www.getdbt.com/pricing). If you have an Enterprise account, you can unlock these features by designating a [public model](/docs/collaborate/govern/model-access) and adding a [cross-project ref](#how-to-write-cross-project-ref).
-- Set your development and deployment [environments](/docs/dbt-cloud-environments) to use [dbt version](/docs/dbt-versions/core) 1.6 or later. You can also opt [Keep on latest version](/docs/dbt-versions/upgrade-dbt-version-in-cloud#keep-on-latest-version) to always use the latest version of dbt.
+- Available in [dbt Cloud Enterprise](https://www.getdbt.com/pricing). If you have an Enterprise account, you can unlock these features by designating a [public model](/docs/collaborate/govern/model-access) and adding a [cross-project ref](#how-to-write-cross-project-ref).
+- Use a supported version of dbt (v1.6, v1.7, or go versionless with "[Keep on latest version](/docs/dbt-versions/upgrade-dbt-version-in-cloud#keep-on-latest-version)") for both the upstream ("producer") project and the downstream ("consumer") project.
+- Define models in an upstream ("producer") project that are configured with [`access: public`](/reference/resource-configs/access) (see the sketch after this list). You need at least one successful job run after defining their `access`.
+- Define a deployment environment in the upstream ("producer") project [that is set to be your Production environment](/docs/deploy/deploy-environments#set-as-production-environment), and ensure it has at least one successful job run in that environment.
+- Each project `name` must be unique in your dbt Cloud account. For example, if you have a dbt project (codebase) for the `jaffle_marketing` team, you should not create separate projects for `Jaffle Marketing - Dev` and `Jaffle Marketing - Prod`. That isolation should instead be handled at the environment level.
+ - We are adding support for environment-level permissions and data warehouse connections; please contact your dbt Labs account team for beta access.
+- The `dbt_project.yml` file is case-sensitive, which means the project name must exactly match the name in your `dependencies.yml`. For example, if your project name is `jaffle_marketing`, you should use `jaffle_marketing` (not `JAFFLE_MARKETING`) in all related files.
+
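+For reference, a minimal sketch of marking a producer model as public in its properties YAML (the file path and model name `monthly_revenue` are illustrative):
+
+```yaml
+# models/_properties.yml in the producer project (hypothetical path)
+models:
+  - name: monthly_revenue
+    access: public
+```
+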
import UseCaseInfo from '/snippets/_packages_or_dependencies.md';
@@ -27,14 +33,6 @@ import UseCaseInfo from '/snippets/_packages_or_dependencies.md';
Refer to the [FAQs](#faqs) for more info.
-## Prerequisites
-
-In order to add project dependencies and resolve cross-project `ref`, you must:
-- Use a supported version of dbt (v1.6, v1.7, or go versionless with "Keep on latest version") for both the upstream ("producer") project and the downstream ("consumer") project.
-- Define models in an upstream ("producer") project that are configured with [`access: public`](/reference/resource-configs/access). You need at least one successful job run after defining their `access`.
-- Define a deployment environment in the upstream ("producer") project [that is set to be your Production environment](/docs/deploy/deploy-environments#set-as-production-environment), and ensure it has at least one successful job run in that environment.
-- Each project `name` must be unique in your dbt Cloud account. For example, if you have a dbt project (codebase) for the `jaffle_marketing` team, you should not create separate projects for `Jaffle Marketing - Dev` and `Jaffle Marketing - Prod`. That isolation should instead be handled at the environment level. To that end, we are working on adding support for environment-level permissions and data warehouse connections; reach out to your dbt Labs account team for beta access in May/June 2024.
-
## Example
As an example, let's say you work on the Marketing team at the Jaffle Shop. The name of your team's project is `jaffle_marketing`:
@@ -48,7 +46,7 @@ name: jaffle_marketing
As part of your modeling of marketing data, you need to take a dependency on two other projects:
-- `dbt_utils` as a [package](#packages-use-case): A collection of utility macros that you can use while writing the SQL for your own models. This package is, open-source public, and maintained by dbt Labs.
+- `dbt_utils` as a [package](#packages-use-case): A collection of utility macros you can use while writing the SQL for your own models. This package is open source, public, and maintained by dbt Labs.
- `jaffle_finance` as a [project use-case](#projects-use-case): Data models about the Jaffle Shop's revenue. This project is private and maintained by your colleagues on the Finance team. You want to select from some of this project's final models, as a starting point for your own work.
@@ -59,7 +57,7 @@ packages:
version: 1.1.1
projects:
- - name: jaffle_finance # matches the 'name' in their 'dbt_project.yml'
+ - name: jaffle_finance # case sensitive and matches the 'name' in the 'dbt_project.yml'
```
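+
+With that dependency declared, a model in `jaffle_marketing` can select from a public model in `jaffle_finance` using the two-argument form of `ref` (the model name `monthly_revenue` here is illustrative):
+
+```sql
+with finance_revenue as (
+    -- cross-project ref: the first argument is the producer project's name
+    select * from {{ ref('jaffle_finance', 'monthly_revenue') }}
+)
+
+select * from finance_revenue
+```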
diff --git a/website/docs/docs/dbt-cloud-apis/project-state.md b/website/docs/docs/dbt-cloud-apis/project-state.md
index 71b367cb5ad..1007f31effe 100644
--- a/website/docs/docs/dbt-cloud-apis/project-state.md
+++ b/website/docs/docs/dbt-cloud-apis/project-state.md
@@ -59,18 +59,24 @@ Most Discovery API use cases will favor the _applied state_ since it pertains to
## Affected states by node type
+The following table lists dbt node types and shows which states (applied, definition, or both) the Discovery API tracks for each.
+
| Node | Executed in DAG | Created by execution | Exists in database | Lineage | States |
|-----------------------------------------------|------------------|----------------------|--------------------|-----------------------|----------------------|
-| [Model](/docs/build/models) | Yes | Yes | Yes | Upstream & downstream | Applied & definition |
-| [Source](/docs/build/sources) | Yes | No | Yes | Downstream | Applied & definition |
-| [Seed](/docs/build/seeds) | Yes | Yes | Yes | Downstream | Applied & definition |
-| [Snapshot](/docs/build/snapshots) | Yes | Yes | Yes | Upstream & downstream | Applied & definition |
+| [Analysis](/docs/build/analyses) | No | No | No | Upstream | Definition |
| [Data test](/docs/build/data-tests) | Yes | Yes | No | Upstream | Applied & definition |
| [Exposure](/docs/build/exposures) | No | No | No | Upstream | Definition |
-| [Metric](/docs/build/metrics-overview) | No | No | No | Upstream & downstream | Definition |
-| [Semantic model](/docs/build/semantic-models) | No | No | No | Upstream & downstream | Definition |
| [Group](/docs/build/groups) | No | No | No | Downstream | Definition |
| [Macro](/docs/build/jinja-macros) | Yes | No | No | N/A | Definition |
+| [Metric](/docs/build/metrics-overview) | No | No | No | Upstream & downstream | Definition |
+| [Model](/docs/build/models) | Yes | Yes | Yes | Upstream & downstream | Applied & definition |
+| [Saved queries](/docs/build/saved-queries) (not in API) | N/A | N/A | N/A | N/A | N/A |
+| [Seed](/docs/build/seeds) | Yes | Yes | Yes | Downstream | Applied & definition |
+| [Semantic model](/docs/build/semantic-models) | No | No | No | Upstream & downstream | Definition |
+| [Snapshot](/docs/build/snapshots) | Yes | Yes | Yes | Upstream & downstream | Applied & definition |
+| [Source](/docs/build/sources) | Yes | No | Yes | Downstream | Applied & definition |
+| [Unit tests](/docs/build/unit-tests) | Yes | Yes | No | Downstream | Definition |
+
## Caveats about state/metadata updates
diff --git a/website/docs/docs/get-started-dbt.md b/website/docs/docs/get-started-dbt.md
index b040fb2bb24..1aba57962fd 100644
--- a/website/docs/docs/get-started-dbt.md
+++ b/website/docs/docs/get-started-dbt.md
@@ -23,7 +23,7 @@ Learn more about [dbt Cloud features](/docs/cloud/about-cloud/dbt-cloud-feature
diff --git a/website/docs/guides/azure-synapse-analytics-qs.md b/website/docs/guides/azure-synapse-analytics-qs.md
index 052127a9bd8..ea70030d351 100644
--- a/website/docs/guides/azure-synapse-analytics-qs.md
+++ b/website/docs/guides/azure-synapse-analytics-qs.md
@@ -1,5 +1,5 @@
---
-title: "Quickstart for dbt Cloud and Azure Synapse Analytics (Preview)"
+title: "Quickstart for dbt Cloud and Azure Synapse Analytics"
id: "azure-synapse-analytics"
level: 'Beginner'
icon: 'azure-synapse-analytics'
@@ -304,4 +304,4 @@ Later, you can connect your business intelligence (BI) tools to these views and
-
\ No newline at end of file
+
diff --git a/website/docs/guides/mesh-qs.md b/website/docs/guides/mesh-qs.md
index 529dc4b450c..dc9be8b6af4 100644
--- a/website/docs/guides/mesh-qs.md
+++ b/website/docs/guides/mesh-qs.md
@@ -204,7 +204,7 @@ Now that you've set up the foundational project, let's start building the data a
c.last_name,
co.first_order_date,
-- Note that we've used a macro for this so that the appropriate DATEDIFF syntax is used for each respective data platform
- {{ dbt_utils.datediff('first_order_date', 'order_date', 'day') }} as days_as_customer_at_purchase
+ {{ datediff('first_order_date', 'order_date', 'day') }} as days_as_customer_at_purchase
from orders o
left join customers c using (customer_id)
left join customer_orders co using (customer_id)
@@ -306,9 +306,10 @@ In this section, you will set up the downstream project, "Jaffle | Finance", and
1. If you’ve also started with a new git repo, click **Initialize dbt project** under the **Version control** section.
2. Delete the `models/example` folder
-3. Navigate to the `dbt_project.yml` file and remove lines 39-42 (the `my_new_project` model reference).
-4. In the **File Explorer**, hover over the project directory, click the **...** and Select **Create file**.
-5. Name the file `dependencies.yml`.
+3. Navigate to the `dbt_project.yml` file and rename the project (line 5) from `my_new_project` to `finance`.
+4. Navigate to the `dbt_project.yml` file and remove lines 39-42 (the `my_new_project` model reference).
+5. In the **File Explorer**, hover over the project directory, click the **...** and select **Create file**.
+6. Name the file `dependencies.yml`.
@@ -464,7 +465,7 @@ In this section, you will set up model versions by the Data Analytics team as th
- The `is_return` column
- The two model `versions`
- A `latest_version` to indicate which model is the latest (and should be used by default, unless specified otherwise)
- - A `deprecation_date` to version 1 as well to indicate
+ - A `deprecation_date` to version 1 as well to indicate when the model will be deprecated.
4. It should now read as follows:
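+
+As a rough sketch of the shape such a spec takes (not the guide's exact snippet, which this diff doesn't show; the model and column names are illustrative):
+
+```yaml
+models:
+  - name: fct_orders
+    latest_version: 2
+    columns:
+      - name: is_return
+        data_type: boolean
+    versions:
+      - v: 2
+      - v: 1
+        # version 1 is scheduled for deprecation
+        deprecation_date: 2024-06-01
+```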
diff --git a/website/docs/reference/programmatic-invocations.md b/website/docs/reference/programmatic-invocations.md
index ed9102075df..09e41b1789f 100644
--- a/website/docs/reference/programmatic-invocations.md
+++ b/website/docs/reference/programmatic-invocations.md
@@ -89,6 +89,24 @@ res = dbt.invoke(cli_args)
Register `callbacks` on dbt's `EventManager`, to access structured events and enable custom logging. The current behavior of callbacks is to block subsequent steps from proceeding; this functionality is not guaranteed in future versions.
+
+
+```python
+from dbt.cli.main import dbtRunner
+from dbt_common.events.base_types import EventMsg
+
+def print_version_callback(event: EventMsg):
+ if event.info.name == "MainReportVersion":
+ print(f"We are thrilled to be running dbt{event.data.version}")
+
+dbt = dbtRunner(callbacks=[print_version_callback])
+dbt.invoke(["list"])
+```
+
+
+
+
+
```python
from dbt.cli.main import dbtRunner
from dbt.events.base_types import EventMsg
@@ -101,6 +119,8 @@ dbt = dbtRunner(callbacks=[print_version_callback])
dbt.invoke(["list"])
```
+
+
### Overriding parameters
Pass in parameters as keyword arguments, instead of a list of CLI-style strings. At present, dbt will not do any validation or type coercion on your inputs. The subcommand must be specified, in a list, as the first positional argument.
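+
+As a quick illustration of the keyword style (the selector value `my_model` is hypothetical):
+
+```python
+from dbt.cli.main import dbtRunner
+
+dbt = dbtRunner()
+
+# subcommand as a list in the first positional argument;
+# everything else as keyword arguments rather than CLI-style strings
+res = dbt.invoke(["run"], select=["my_model"], fail_fast=True)
+```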
diff --git a/website/docs/reference/resource-configs/redshift-configs.md b/website/docs/reference/resource-configs/redshift-configs.md
index 7c66be66f92..78b288083fa 100644
--- a/website/docs/reference/resource-configs/redshift-configs.md
+++ b/website/docs/reference/resource-configs/redshift-configs.md
@@ -228,7 +228,7 @@ Redshift supports [backup](https://docs.aws.amazon.com/redshift/latest/mgmt/work
This parameter identifies if the materialized view should be backed up as part of the cluster snapshot.
By default, a materialized view will be backed up during a cluster snapshot.
dbt cannot monitor this parameter as it is not queryable within Redshift.
-If the value is changed, the materialized view will need to go through a `--full-refresh` in order to set it.
+If the value changes, the materialized view will need to go through a `--full-refresh` to set it.
Learn more about these parameters in Redshift's [docs](https://docs.aws.amazon.com/redshift/latest/dg/materialized-view-create-sql-command.html#mv_CREATE_MATERIALIZED_VIEW-parameters).
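+
+As a sketch, assuming the `backup` materialized view config key described above (the referenced model is illustrative), the parameter would be set in a model's config block like so:
+
+```sql
+{{ config(
+    materialized='materialized_view',
+    backup=False
+) }}
+
+select * from {{ ref('stg_orders') }}
+```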
@@ -246,7 +246,7 @@ Find more information about materialized view limitations in Redshift's [docs](h
#### Changing materialization from "materialized_view" to "table" or "view"
Swapping a materialized view to a table or view is not supported.
-You must manually drop the existing materialized view in the data warehouse prior to calling `dbt run`.
+You must manually drop the existing materialized view in the data warehouse before calling `dbt run`.
Normally, re-running with the `--full-refresh` flag would resolve this, but not in this case.
This would only need to be done once as the existing object would then be a materialized view.
@@ -262,85 +262,8 @@ The workaround is to execute `DROP MATERIALIZED VIEW my_mv CASCADE` on the data
## Unit test limitations
-Redshift doesn't support Unit tests when the SQL in the common table expression (CTE) contains functions such as `LISTAGG`, `MEDIAN`, `PERCENTILE_CONT`, etc. These functions must be executed against a user-created table. dbt combines given rows to be part of the CTE, which Redshift does not support. For unit tests to function properly in this scenario, creating temporary tables for the unit tests to reference is a good workaround.
+Redshift doesn't support [unit tests](/docs/build/unit-tests) when the SQL in the common table expression (CTE) contains functions such as `LISTAGG`, `MEDIAN`, `PERCENTILE_CONT`, and so on. These functions must be executed against a user-created table, but dbt inlines the given fixture rows into a CTE, which Redshift does not support for these functions.
-The following query illustrates the limitation:
-
-```sql
-
-create temporary table "test_tmpxxxxx" as (
- with test_fixture as (
- select
- cast(1000 as integer) as id,
- cast('menu1' as character varying(500)) as name,
- cast( 1 as integer) as quantity
- union all
- select
- cast(1001 as integer) as id,
- cast('menu2' as character varying(500)) as name,
- cast( 1 as integer) as quantity
- union all
- select
- cast(1003 as integer) as id,
- cast('menu1' as character varying(500)) as name,
- cast( 1 as integer) as quantity
- ),
- agg as (
- SELECT
- LISTAGG(name || ' x ' || quantity, ',') AS option_name_list,
- id
- FROM test_fixture
- GROUP BY id
- )
- select * from agg
-);
-
-```
-This query results in the error:
-
-```bash
-
-[XX000] ERROR: One or more of the used functions must be applied on at least one user created tables. Examples of user table only functions are LISTAGG, MEDIAN, PERCENTILE_CONT, etc
-
-```
-
-However, the following query works as expected:
-
-```sql
-
-create temporary table "test_tmp1234" as (
- SELECT
- cast(1000 as integer) as id,
- cast('menu1' as character varying(500)) as name,
- cast( 1 as integer) as quantity
- union all
- select
- cast(1001 as integer) as id,
- cast('menu2' as character varying(500)) as name,
- cast( 1 as integer) as quantity
- union all
- select
- cast(1000 as integer) as id,
- cast('menu1' as character varying(500)) as name,
- cast( 1 as integer) as quantity
-);
-
-with agg as (
- SELECT
- LISTAGG(name || ' x ' || quantity, ',') AS option_name_list,
- id
- FROM test_tmp1234
- GROUP BY id
-)
-select * from agg;
-
-```
-
-When all given rows are created as a temporary table first, then running the test by referring to the temporary tables results in a successful run.
-
-In short, separate the unit tests into two steps:
-1. Prepare test fixtures by creating temporary tables.
-2. Run unit test query by referring to the temporary tables.
+To support this pattern in the future, dbt would need to "materialize" the input fixtures as tables rather than interpolating them as CTEs. If you're interested in this functionality, we encourage you to participate in the GitHub issue: [dbt-labs/dbt-core#8499](https://github.com/dbt-labs/dbt-core/issues/8499).
-
diff --git a/website/static/img/docs/building-a-dbt-project/billing-job-page.jpg b/website/static/img/docs/building-a-dbt-project/billing-job-page.jpg
new file mode 100644
index 00000000000..3744fe8c083
Binary files /dev/null and b/website/static/img/docs/building-a-dbt-project/billing-job-page.jpg differ
diff --git a/website/static/img/docs/building-a-dbt-project/billing-project-page.jpg b/website/static/img/docs/building-a-dbt-project/billing-project-page.jpg
new file mode 100644
index 00000000000..8569ea4d2f2
Binary files /dev/null and b/website/static/img/docs/building-a-dbt-project/billing-project-page.jpg differ
diff --git a/website/static/img/docs/building-a-dbt-project/billing-usage-page.jpg b/website/static/img/docs/building-a-dbt-project/billing-usage-page.jpg
new file mode 100644
index 00000000000..d9a9f35b803
Binary files /dev/null and b/website/static/img/docs/building-a-dbt-project/billing-usage-page.jpg differ
diff --git a/website/static/img/docs/building-a-dbt-project/billing.jpg b/website/static/img/docs/building-a-dbt-project/billing.jpg
deleted file mode 100644
index 84d117b758f..00000000000
Binary files a/website/static/img/docs/building-a-dbt-project/billing.jpg and /dev/null differ