diff --git a/website/blog/2022-04-14-add-ci-cd-to-bitbucket.md b/website/blog/2022-04-14-add-ci-cd-to-bitbucket.md
index eae8d595ca5..e871687d8cd 100644
--- a/website/blog/2022-04-14-add-ci-cd-to-bitbucket.md
+++ b/website/blog/2022-04-14-add-ci-cd-to-bitbucket.md
@@ -10,6 +10,8 @@ hide_table_of_contents: false
date: 2022-05-06
is_featured: true
+keywords:
+ - dbt core pipeline, slim ci pipeline, slim cd pipeline, bitbucket
---
diff --git a/website/dbt-versions.js b/website/dbt-versions.js
index e5a2b9f4290..871c3ce601e 100644
--- a/website/dbt-versions.js
+++ b/website/dbt-versions.js
@@ -10,16 +10,14 @@
* @property {string} EOLDate "End of Life" date which is used to show the EOL banner
* @property {boolean} isPrerelease Boolean used for showing the prerelease banner
* @property {string} customDisplay Allows setting a custom display name for the current version
+ *
+ * customDisplay for dbt Cloud should be a version ahead of the latest dbt Core release (GA or beta).
*/
exports.versions = [
{
version: "1.9.1",
customDisplay: "Cloud (Versionless)",
},
- {
- version: "1.9",
- isPrerelease: true,
- },
{
version: "1.8",
EOLDate: "2025-04-15",
diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-7-semantic-structure.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-7-semantic-structure.md
index 295d86e9c20..5bfbea82dda 100644
--- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-7-semantic-structure.md
+++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-7-semantic-structure.md
@@ -20,6 +20,10 @@ The first thing you need to establish is how you’re going to consistently stru
It’s not terribly difficult to shift between these (it can be done with some relatively straightforward shell scripting), and this is purely a decision based on your developers’ preference (i.e. it has no impact on execution or performance), so don’t feel locked in to either path. Just pick the one that feels right and you can always shift down the road if you change your mind.
+:::tip
+Make sure to save all semantic models and metrics under the directory defined in the [`model-paths`](/reference/project-configs/model-paths) (or a subdirectory of it, like `models/semantic_models/`). If you save them outside of this path, it will result in an empty `semantic_manifest.json` file, and your semantic models or metrics won't be recognized.
+:::
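+
+As a minimal sketch (file names are illustrative), with the default `model-paths` setting, semantic models and metrics belong anywhere under `models/`:
+
+```yaml
+# dbt_project.yml
+model-paths: ["models"]
+
+# Recognized (inside model-paths):
+#   models/semantic_models/sem_orders.yml
+# Not recognized (outside model-paths; yields an empty semantic_manifest.json):
+#   semantic_models/sem_orders.yml
+```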
+
## Naming
Next, establish your system for consistent file naming:
diff --git a/website/docs/docs/build/hooks-operations.md b/website/docs/docs/build/hooks-operations.md
index 9ed20291c34..6cec2a673c0 100644
--- a/website/docs/docs/build/hooks-operations.md
+++ b/website/docs/docs/build/hooks-operations.md
@@ -72,6 +72,41 @@ You can use hooks to provide database-specific functionality not available out-o
You can also use a [macro](/docs/build/jinja-macros#macros) to bundle up hook logic. Check out some of the examples in the reference sections for [on-run-start and on-run-end hooks](/reference/project-configs/on-run-start-on-run-end) and [pre- and post-hooks](/reference/resource-configs/pre-hook-post-hook).
+
+For example, to call a macro as a `pre_hook` directly in a model's `config()` block:
+```sql
+{{ config(
+ pre_hook=[
+ "{{ some_macro() }}"
+ ]
+) }}
+```
+
+
+Or define the hook for a specific model in a YAML properties file (for example, `models/properties.yml`):
+```yaml
+models:
+  - name: <model_name>
+ config:
+ pre_hook:
+ - "{{ some_macro() }}"
+```
+
+
+Or apply the hook to a resource path in `dbt_project.yml`:
+```yaml
+models:
+  <resource-path>:
+    +pre-hook:
+      - "{{ some_macro() }}"
+```
+
+
+
## About operations
Operations are [macros](/docs/build/jinja-macros#macros) that you can run using the [`run-operation`](/reference/commands/run-operation) command. As such, operations aren't actually a separate resource in your dbt project — they are just a convenient way to invoke a macro without needing to run a model.
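+
+For example, here's a minimal sketch of an operation (the macro name and `reporter` role are illustrative):
+
+```sql
+-- macros/grant_select.sql
+{% macro grant_select(role) %}
+  {% set sql %}
+    grant select on all tables in schema {{ target.schema }} to role {{ role }};
+  {% endset %}
+  {% do run_query(sql) %}
+  {% do log("Granted select on " ~ target.schema ~ " to role " ~ role, info=True) %}
+{% endmacro %}
+```
+
+Invoke it with `dbt run-operation grant_select --args '{role: reporter}'`.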
diff --git a/website/docs/docs/build/jinja-macros.md b/website/docs/docs/build/jinja-macros.md
index fc4a0cad3e8..bc91e3674c9 100644
--- a/website/docs/docs/build/jinja-macros.md
+++ b/website/docs/docs/build/jinja-macros.md
@@ -74,7 +74,7 @@ group by 1
You can recognize Jinja based on the delimiters the language uses, which we refer to as "curlies":
- **Expressions `{{ ... }}`**: Expressions are used when you want to output a string. You can use expressions to reference [variables](/reference/dbt-jinja-functions/var) and call [macros](/docs/build/jinja-macros#macros).
- **Statements `{% ... %}`**: Statements don't output a string. They are used for control flow, for example, to set up `for` loops and `if` statements, to [set](https://jinja.palletsprojects.com/en/3.1.x/templates/#assignments) or [modify](https://jinja.palletsprojects.com/en/3.1.x/templates/#expression-statement) variables, or to define macros.
-- **Comments `{# ... #}`**: Jinja comments are used to prevent the text within the comment from executing or outputing a string.
+- **Comments `{# ... #}`**: Jinja comments are used to prevent the text within the comment from executing or outputting a string. Don't use SQL comments (`--`) to disable Jinja; Jinja inside a SQL comment still executes, as the sketch below shows.
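+
+A minimal sketch of the difference (`some_macro` is illustrative):
+
+```sql
+-- This is a SQL comment; any Jinja inside it still executes: {{ some_macro() }}
+{# This is a Jinja comment; nothing inside it executes: {{ some_macro() }} #}
+```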
When used in a dbt model, your Jinja needs to compile to a valid query. To check what SQL your Jinja compiles to:
* **Using dbt Cloud:** Click the compile button to see the compiled SQL in the Compiled SQL pane
diff --git a/website/docs/docs/cloud-integrations/avail-sl-integrations.md b/website/docs/docs/cloud-integrations/avail-sl-integrations.md
index eea93c92b93..04d9d55acb4 100644
--- a/website/docs/docs/cloud-integrations/avail-sl-integrations.md
+++ b/website/docs/docs/cloud-integrations/avail-sl-integrations.md
@@ -20,7 +20,7 @@ import AvailIntegrations from '/snippets/_sl-partner-links.md';
### Custom integration
- [Exports](/docs/use-dbt-semantic-layer/exports) enable custom integration with additional tools that don't natively connect with the dbt Semantic Layer, such as PowerBI.
-- Develop custom integrations using different languages and tools, supported through JDBC, ADBC, and GraphQL APIs. For more info, check out [our examples on GitHub](https://github.com/dbt-labs/example-semantic-layer-clients/).
+- [Consume metrics](/docs/use-dbt-semantic-layer/consume-metrics) and develop custom integrations using different languages and tools, supported through the [JDBC](/docs/dbt-cloud-apis/sl-jdbc), ADBC, and [GraphQL](/docs/dbt-cloud-apis/sl-graphql) APIs, as well as the [Python SDK](/docs/dbt-cloud-apis/sl-python). For more info, check out [our examples on GitHub](https://github.com/dbt-labs/example-semantic-layer-clients/).
- Connect to any tool that supports SQL queries. These tools must meet one of the two criteria:
- Offers a generic JDBC driver option (such as DataGrip) or
 - Is compatible with the Arrow Flight SQL JDBC driver version 12.0.0 or higher.
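+
+For example, a JDBC connection string for the Semantic Layer generally takes this shape (host, environment ID, and token are placeholders; see the [JDBC API docs](/docs/dbt-cloud-apis/sl-jdbc) for the exact format):
+
+```
+jdbc:arrow-flight-sql://semantic-layer.cloud.getdbt.com:443?environmentId=<environment_id>&token=<service_token>
+```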
diff --git a/website/docs/docs/cloud/about-cloud/browsers.md b/website/docs/docs/cloud/about-cloud/browsers.md
index 12665bc7b72..1e26d3a6d59 100644
--- a/website/docs/docs/cloud/about-cloud/browsers.md
+++ b/website/docs/docs/cloud/about-cloud/browsers.md
@@ -27,4 +27,4 @@ To improve your experience using dbt Cloud, we suggest that you turn off ad bloc
A session is a period of time during which you’re signed in to a dbt Cloud account from a browser. If you close your browser, it will end your session and log you out. You'll need to log in again the next time you try to access dbt Cloud.
-If you've logged in using [SSO](/docs/cloud/manage-access/sso-overview) or [OAuth](/docs/cloud/git/connect-github#personally-authenticate-with-github), you can customize your maximum session duration, which might vary depending on your identity provider (IdP).
+If you've logged in using [SSO](/docs/cloud/manage-access/sso-overview), you can customize your maximum session duration, which might vary depending on your identity provider (IdP).
diff --git a/website/docs/docs/cloud/git/connect-github.md b/website/docs/docs/cloud/git/connect-github.md
index 4dc4aaf73e9..f230f70e1f6 100644
--- a/website/docs/docs/cloud/git/connect-github.md
+++ b/website/docs/docs/cloud/git/connect-github.md
@@ -7,7 +7,6 @@ sidebar_label: "Connect to GitHub"
Connecting your GitHub account to dbt Cloud provides convenience and another layer of security to dbt Cloud:
-- Log into dbt Cloud using OAuth through GitHub.
- Import new GitHub repositories with a couple clicks during dbt Cloud project setup.
- Clone repos using HTTPS rather than SSH.
- Trigger [Continuous integration](/docs/deploy/continuous-integration) (CI) builds when pull requests are opened in GitHub.
@@ -48,15 +47,15 @@ To connect your dbt Cloud account to your GitHub account:
- Read and write access to Workflows
6. Once you grant access to the app, you will be redirected back to dbt Cloud and shown a linked account success state. You are now personally authenticated.
-7. Ask your team members to [personally authenticate](/docs/cloud/git/connect-github#personally-authenticate-with-github) by connecting their GitHub profiles.
+7. Ask your team members to individually authenticate by connecting their [personal GitHub profiles](#authenticate-your-personal-github-account).
## Limiting repository access in GitHub
If you are the owner of your GitHub organization, you can also configure the dbt Cloud GitHub application to have access to only select repositories. This configuration must be done in GitHub, but we provide an easy link in dbt Cloud to start this process.
-## Personally authenticate with GitHub
+## Authenticate your personal GitHub account
-Once the dbt Cloud admin has [set up a connection](/docs/cloud/git/connect-github#installing-dbt-cloud-in-your-github-account) to your organization GitHub account, you need to personally authenticate, which improves the security of dbt Cloud by enabling you to log in using OAuth through GitHub.
+After the dbt Cloud administrator [sets up a connection](/docs/cloud/git/connect-github#installing-dbt-cloud-in-your-github-account) to your organization's GitHub account, you need to authenticate using your personal account. You must connect your personal GitHub profile to dbt Cloud to use the [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud) and [CLI](/docs/cloud/cloud-cli-installation) and verify your read and write access to the repository.
:::info GitHub profile connection
@@ -77,7 +76,7 @@ To connect a personal GitHub account:
4. Once you approve authorization, you will be redirected to dbt Cloud, and you should now see your connected account.
-The next time you log into dbt Cloud, you will be able to do so via OAuth through GitHub, and if you're on the Enterprise plan, you're ready to use the dbt Cloud IDE or dbt Cloud CLI.
+You can now use the dbt Cloud IDE or dbt Cloud CLI.
## FAQs
diff --git a/website/docs/docs/cloud/manage-access/set-up-snowflake-oauth.md b/website/docs/docs/cloud/manage-access/set-up-snowflake-oauth.md
index 3b3b9c2d870..e9c4236438e 100644
--- a/website/docs/docs/cloud/manage-access/set-up-snowflake-oauth.md
+++ b/website/docs/docs/cloud/manage-access/set-up-snowflake-oauth.md
@@ -43,7 +43,7 @@ CREATE OR REPLACE SECURITY INTEGRATION DBT_CLOUD
ENABLED = TRUE
OAUTH_CLIENT = CUSTOM
OAUTH_CLIENT_TYPE = 'CONFIDENTIAL'
- OAUTH_REDIRECT_URI = LOCATED_REDIRECT_URI
+ OAUTH_REDIRECT_URI = 'LOCATED_REDIRECT_URI'
OAUTH_ISSUE_REFRESH_TOKENS = TRUE
OAUTH_REFRESH_TOKEN_VALIDITY = 7776000;
```
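+
+To confirm the settings took effect, you can describe the integration (standard Snowflake command; the integration name matches the one created above):
+
+```sql
+DESCRIBE SECURITY INTEGRATION DBT_CLOUD;
+```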
diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md
index 8bdf47eae5a..3aec1956297 100644
--- a/website/docs/docs/cloud/migration.md
+++ b/website/docs/docs/cloud/migration.md
@@ -7,34 +7,45 @@ pagination_next: null
pagination_prev: null
---
-dbt Labs is in the process of migrating dbt Cloud to a new _cell-based architecture_. This architecture will be the foundation of dbt Cloud for years to come, and will bring improved scalability, reliability, and security to all customers and users of dbt Cloud.
+dbt Labs is in the process of rolling out a new cell-based architecture for dbt Cloud. This architecture provides the foundation of dbt Cloud for years to come, and brings improved reliability, performance, and consistency to users of dbt Cloud.
-There is some preparation required to ensure a successful migration.
+We're scheduling migrations by account. When we're ready to migrate your account, you will receive a banner or email communication with your migration date. If you have not received this communication, then you don't need to take action at this time. dbt Labs will share information about your migration with you, with appropriate advance notice, when applicable to your account.
-Migrations are being scheduled on a per-account basis. _If you haven't received any communication (either with a banner or by email) about a migration date, you don't need to take any action at this time._ dbt Labs will share migration date information with you, with appropriate advance notice, before we complete any migration steps in the dbt Cloud backend.
+Your account will be automatically migrated on its scheduled date. However, if you use certain features, you must take action before that date to avoid service disruptions.
-This document outlines the steps that you must take to prevent service disruptions before your environment is migrated over to the cell-based architecture. This will impact areas such as login, IP restrictions, and API access.
+## Recommended actions
-## Pre-migration checklist
+We highly recommend you take these actions:
-Prior to your migration date, your dbt Cloud account admin will need to make some changes to your account. Most of your configurations will be migrated automatically, but a few will require manual intervention.
+- Ensure pending user invitations are accepted or note outstanding invitations. Pending user invitations will be voided during the migration and must be resent after it is complete.
+- Commit unsaved changes in the [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud). Unsaved changes will be lost during migration.
+- Export and download [audit logs](/docs/cloud/manage-access/audit-log) older than 90 days, as they will be lost during migration. If you lose critical logs older than 90 days during the migration, you will have to work with the dbt Labs Customer Support team to recover them.
-If your account is scheduled for migration, you will see a banner indicating your migration date when you log in. If you don't see a banner, you don't need to take any action.
+## Required actions
-1. **IP addresses** — dbt Cloud will be using new IPs to access your warehouse after the migration. Make sure to allow inbound traffic from these IPs in your firewall and include it in any database grants. All six of the IPs below should be added to allowlists.
- * Old IPs: `52.45.144.63`, `54.81.134.249`, `52.22.161.231`
- * New IPs: `52.3.77.232`, `3.214.191.130`, `34.233.79.135`
-2. **User invitations** — Any pending user invitations will be invalidated during the migration. You can resend the invitations after the migration is complete.
-3. **SSO integrations** — If you've completed the Auth0 migration, your account SSO configurations will be automatically transferred. If you haven't completed the Auth0 migration, dbt Labs recommends doing that before starting the mult-cell migration to avoid service disruptions.
-4. **IDE sessions** — Any unsaved changes in the IDE might be lost during migration. dbt Labs _strongly_ recommends committing all changes in the IDE before your scheduled migration time.
+These actions are required to prevent users from losing access to dbt Cloud:
-## Post-migration
+- If you still need to, complete [Auth0 migration for SSO](/docs/cloud/manage-access/auth0-migration) before your scheduled migration date to avoid service disruptions. If you've completed the Auth0 migration, your account SSO configurations will be transferred automatically.
+- Update your IP allowlists. dbt Cloud will be using new IPs to access your warehouse post-migration. Allow inbound traffic from all of the following new IPs in your firewall and include them in any database grants (see the sketch after this list):
-After migration, if you completed all the [Pre-migration checklist](#pre-migration-checklist) items, your dbt Cloud resources and jobs will continue to work as they did before.
+ - `52.3.77.232`
+ - `3.214.191.130`
+ - `34.233.79.135`
-You have the option to log in to dbt Cloud at a different URL:
- * If you were previously logging in at `cloud.getdbt.com`, you should instead plan to login at `us1.dbt.com`. The original URL will still work, but you’ll have to click through to be redirected upon login.
- * You may also log in directly with your account’s unique [access URL](/docs/cloud/about-cloud/access-regions-ip-addresses#accessing-your-account).
+ Keep the old dbt Cloud IPs listed until the migration is complete.
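+
+For example, a minimal sketch for Snowflake (the policy name is illustrative; adapt the statement to your data platform):
+
+```sql
+alter network policy dbt_cloud_access set allowed_ip_list = (
+  '52.45.144.63', '54.81.134.249', '52.22.161.231',  -- old IPs: keep until migration completes
+  '52.3.77.232', '3.214.191.130', '34.233.79.135'    -- new IPs
+);
+```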
-:::info Login with GitHub
-Users who previously used the "Login with GitHub" functionality will no longer be able to use this method to login to dbt Cloud after migration. To continue accessing your account, you can use your existing email and password.
+## Post-migration
+
+Complete all of these items to ensure your dbt Cloud resources and jobs will continue working without interruption.
+
+Use one of these two URL login options:
+
+- `us1.dbt.com`: If you were previously logging in at `cloud.getdbt.com`, plan to log in at `us1.dbt.com` instead. The original URL will still work, but you’ll have to click through to be redirected upon login.
+- `ACCOUNT_PREFIX.us1.dbt.com`: A unique URL specifically for your account. If you belong to multiple accounts, each will have a unique URL available as long as they have been migrated to multi-cell.
+
+Check out [access, regions, and IP addresses](/docs/cloud/about-cloud/access-regions-ip-addresses) for more information.
+
+Remove the following old IP addresses from your firewall and database grants:
+
+- `52.45.144.63`
+- `54.81.134.249`
+- `52.22.161.231`
diff --git a/website/docs/docs/cloud/secure/ip-restrictions.md b/website/docs/docs/cloud/secure/ip-restrictions.md
index 034b3a6c144..d39960dab42 100644
--- a/website/docs/docs/cloud/secure/ip-restrictions.md
+++ b/website/docs/docs/cloud/secure/ip-restrictions.md
@@ -13,7 +13,7 @@ import SetUpPages from '/snippets/_available-tiers-iprestrictions.md';
IP Restrictions help control which IP addresses are allowed to connect to dbt Cloud. IP restrictions allow dbt Cloud customers to meet security and compliance controls by only allowing approved IPs to connect to their dbt Cloud environment. This feature is supported in all regions across NA, Europe, and Asia-Pacific, but contact us if you have questions about availability.
-## Configuring IP Restrictions
+## Configuring IP restrictions
To configure IP restrictions, go to **Account Settings** → **IP Restrictions**. IP restrictions provide two methods for determining which IPs can access dbt Cloud: an allowlist and a blocklist. IPs in the allowlist are allowed to access dbt Cloud, and IPs in the deny list will be blocked from accessing dbt Cloud. IP Restrictions can be used for a range of use cases, including:
@@ -29,7 +29,7 @@ For any version control system integrations (Github, Gitlab, ADO, etc.) inbound
To add an IP to the allowlist, from the **IP Restrictions** page:
-1. Click **edit**
+1. Click **Edit**
2. Click **Add Rule**
3. Add name and description for the rule
- For example, Corporate VPN CIDR Range
@@ -39,7 +39,9 @@ To add an IP to the allowlist, from the **IP Restrictions** page:
- You can add multiple ranges in the same rule.
6. Click **Save**
-Note that simply adding the IP Ranges will not enforce IP restrictions. For more information, see the section “Enabling Restrictions.”
+Add multiple IP ranges by clicking the **Add IP range** button to create a new text field.
+
+Note that simply adding the IP ranges will not enforce IP restrictions. For more information, see the [Enabling restrictions](#enabling-restrictions) section.
If you only want to allow the IP ranges added to this list and deny all other requests, adding a denylist is not necessary. By default, if only an allow list is added, dbt Cloud will only allow IPs in the allowable range and deny all other IPs. However, you can add a denylist if you want to deny specific IP addresses within your allowlist CIDR range.
@@ -65,9 +67,9 @@ It is possible to put an IP range on one list and then a sub-range or IP address
:::
-## Enabling Restrictions
+## Enabling restrictions
-Once you are done adding all your ranges, IP restrictions can be enabled by selecting the **Enable IP restrictions** button and clicking **Save**. If your IP address is in any of the denylist ranges, you won’t be able to save or enable IP restrictions - this is done to prevent accidental account lockouts. If you do get locked out due to IP changes on your end, please reach out to support@dbtlabs.com
+Once you are done adding all your ranges, IP restrictions can be enabled by selecting the **Enable IP restrictions** button and clicking **Save**. If your IP address is in any of the denylist ranges, you won’t be able to save or enable IP restrictions — this is done to prevent accidental account lockouts. If you do get locked out due to IP changes on your end, please reach out to support@getdbt.com.
Once enabled, when someone attempts to access dbt Cloud from a restricted IP, they will encounter one of the following messages depending on whether they use email & password or SSO login.
diff --git a/website/docs/docs/collaborate/data-tile.md b/website/docs/docs/collaborate/data-tile.md
index f40f21ebe18..446922acb92 100644
--- a/website/docs/docs/collaborate/data-tile.md
+++ b/website/docs/docs/collaborate/data-tile.md
@@ -9,9 +9,11 @@ image: /img/docs/collaborate/dbt-explorer/data-tile-pass.jpg
# Embed data health tile in dashboards
With data health tiles, stakeholders will get an at-a-glance confirmation on whether the data they’re looking at is stale or degraded. This trust signal allows teams to immediately go back into Explorer to see more details and investigate issues.
+
:::info Available in beta
Data health tile is currently available in open beta.
:::
+
The data health tile:
- Distills trust signals for data consumers.
@@ -19,6 +21,8 @@ The data health tile:
- Provides richer information and makes it easier to debug.
- Revamps the existing, [job-based tiles](#job-based-data-health).
+Data health tiles rely on [exposures](/docs/build/exposures) to surface trust signals in your dashboards. When you configure exposures in your dbt project, you are explicitly defining how specific outputs, like dashboards or reports, depend on your data models.
+
## Prerequisites
@@ -34,43 +38,45 @@ First, be sure to enable [source freshness](/docs/deploy/source-freshness) in
1. Navigate to dbt Explorer by clicking on the **Explore** link in the navigation.
2. In the main **Overview** page, go to the left navigation.
-3. Under the **Resources** tab, click on **Exposures** to view the exposures list.
+3. Under the **Resources** tab, click on **Exposures** to view the [exposures](/docs/build/exposures) list.
4. Select a dashboard exposure and go to the **General** tab to view the data health information.
-5. In this tab, you’ll see:
- - Data health status: Data freshness passed, Data quality passed, Data may be stale, Data quality degraded
- - Name of the exposure.
+5. In this tab, you’ll see:
+ - Name of the exposure.
+ - Data health status: Data freshness passed, Data quality passed, Data may be stale, Data quality degraded.
- Resource type (model, source, and so on).
- Dashboard status: Failure, Pass, Stale.
- You can also see the last check completed, the last check time, and the last check duration.
-6. You can also click the **Open Dashboard** button on the upper right to immediately view this in your analytics tool.
+6. You can click the **Open Dashboard** button on the upper right to immediately view this in your analytics tool.
## Embed in your dashboard
-Once you’ve navigated to the auto-exposure in dbt Explorer, you’ll need to set up your dashboard status tile and [service token](/docs/dbt-cloud-apis/service-tokens):
+Once you’ve navigated to the auto-exposure in dbt Explorer, you’ll need to set up your data health tile and [service token](/docs/dbt-cloud-apis/service-tokens). You can embed the data health tile in any analytics tool that supports URL or iFrame embedding.
+
+Follow these steps to set up your data health tile:
1. Go to **Account settings** in dbt Cloud.
2. Select **API tokens** in the left sidebar and then **Service tokens**.
3. Click on **Create service token** and give it a name.
-4. Select the [**Metadata Only** permission](/docs/dbt-cloud-apis/service-tokens). This token will be used to embed the exposure tile in your dashboard in the later steps.
+4. Select the [**Metadata Only**](/docs/dbt-cloud-apis/service-tokens) permission. This token will be used to embed the tile in your dashboard in later steps.
-5. Copy the **Metadata Only token** and save it in a secure location. You'll need it token in the next steps.
+5. Copy the **Metadata Only** token and save it in a secure location. You'll need this token in the next steps.
6. Navigate back to dbt Explorer and select an exposure.
7. Below the **Data health** section, expand the toggle for instructions on how to embed the exposure tile (if you're an account admin with develop permissions).
8. In the expanded toggle, you'll see a text field where you can paste your **Metadata Only token**.
-9. Once you’ve pasted your token, you can select either **URL** or **iFrame** depending on which you need to install into your dashboard.
+9. Once you’ve pasted your token, you can select either **URL** or **iFrame** depending on which you need to add to your dashboard.
If your analytics tool supports iFrames, you can embed the dashboard tile within it.
-### Embed data health tile in Tableau
-To embed the data health tile in Tableau, follow these steps:
+#### Tableau example
+Here’s an example with Tableau, where you can embed the iFrame in a web page object:
-1. Ensure you've copied the embed iFrame content in dbt Explorer.
-2. For the revamped environment-based exposure tile you can insert these fields into the following iFrame, and then embed them with your dashboard. This is the iFrame that is available from the **Exposure details** page in dbt Explorer.
+- Ensure you've copied the embed iFrame snippet from the dbt Explorer **Data health** section.
+- **For the revamped environment-based exposure tile** — Insert these fields into the following iFrame, and then embed it in your dashboard. This is the iFrame available from the **Exposure details** page in dbt Explorer.
``
@@ -82,7 +88,7 @@ To embed the data health tile in Tableau, follow these steps:
-3. For the job-based exposure tile you can insert these three fields into the following iFrame, and then embed them with your dashboard. The next section will have more details on the job-based exposure tile.
+- **For the job-based exposure tile** — Insert these three fields into the following iFrame, and then embed it in your dashboard. The next [section](#job-based-data-health) has more details on the job-based exposure tile.
``
diff --git a/website/docs/docs/collaborate/explore-projects.md b/website/docs/docs/collaborate/explore-projects.md
index e60d019bf2e..1c469409e4f 100644
--- a/website/docs/docs/collaborate/explore-projects.md
+++ b/website/docs/docs/collaborate/explore-projects.md
@@ -29,7 +29,7 @@ Navigate the dbt Explorer overview page to access your project's resources and m
- **Lineage graph** — Explore your project's or account's [lineage graph](#project-lineage) to visualize the relationships between resources.
- **Latest updates** — View the latest changes or issues related to your project's resources, including the most recent job runs, changed properties, lineage, and issues.
- **Marts and public models** — View the [marts](/best-practices/how-we-structure/1-guide-overview#guide-structure-overview) and [public models](/docs/collaborate/govern/model-access#access-modifiers) in your project.
-- **Model query history** — Use [model query history](/docs/collaborate/model-query-history) to track the history of queries on your models for deeper insights.
+- **Model query history** — Use [model query history](/docs/collaborate/model-query-history) to track consumption queries on your models for deeper insights.
- **Auto-exposures** — [Set up and view auto-exposures](/docs/collaborate/auto-exposures) to automatically expose relevant data models from Tableau to enhance visibility.
diff --git a/website/docs/docs/collaborate/model-query-history.md b/website/docs/docs/collaborate/model-query-history.md
index ee7695e3ab9..d8e08bf63da 100644
--- a/website/docs/docs/collaborate/model-query-history.md
+++ b/website/docs/docs/collaborate/model-query-history.md
@@ -7,14 +7,18 @@ image: /img/docs/collaborate/dbt-explorer/model-query-queried-models.jpg
# About model query history
-The model query history tile allows you to:
+Model query history allows you to:
-- View the query count for a model based on the data warehouse's query logs.
+- View the count of consumption queries for a model based on the data warehouse's query logs.
- Provide data teams with insight so they can focus their time and infrastructure spend on the most valuable, widely used data products.
- Enable analysts to find the most popular models used by other people.
-:::info Available in beta
-Model query history is powered by a single query of the query log table in your data warehouse aggregated on a daily basis. It filters down to `select` statements only to gauge model consumption and excludes dbt model build and test executions.
+Model query history is powered by a single query of the query log table in your data warehouse, aggregated on a daily basis.
+
+:::info What is a consumption query?
+A consumption query is a count of the queries that used a given model in a set time period. It filters down to `select` statements only, to gauge model consumption, and excludes dbt model build and test executions.
+
+For example, if `model_super_santi` was queried 10 times in the past week, it has 10 consumption queries for that time period.
:::
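+
+A rough illustration of the kind of warehouse query this relies on (Snowflake shown; log tables and columns vary by platform, and the exact query dbt Cloud runs may differ):
+
+```sql
+select
+  date_trunc('day', start_time) as query_day,
+  count(*) as consumption_queries
+from snowflake.account_usage.query_history
+where query_type = 'SELECT'                   -- consumption only: excludes builds and tests
+  and query_text ilike '%model_super_santi%'  -- queries that reference the model
+group by 1;
+```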
## Prerequisites
@@ -72,31 +76,35 @@ During beta, the dbt Labs team will manually enable query history for your dbt C
## View query history in Explorer
-To enhance your discovery, you can view your model query history in various locations within dbt Explorer. For details on how to access model query history in each location, expand the following toggles:
+To enhance your discovery, you can view your model query history in various locations within dbt Explorer:
+- [View from Performance charts](#view-from-performance-charts)
+- [View from Project lineage](#view-from-project-lineage)
+- [View from Model list](#view-from-model-list)
### View from Performance charts
1. Navigate to dbt Explorer by clicking on the **Explore** link in the navigation.
-2. In the main **Overview** page, under **Project** click **Performance** and scroll down to view the most queried models
+2. In the main **Overview** page, click on **Performance** under the **Project details** section. Scroll down to view the **Most consumed models**.
3. Use the dropdown menu on the right to select the desired time period, with options available for up to the past 3 months.
-
+
-4. In the model performance tab, open the **Usage** chart to see queries over time for that model.
-
+4. Click on a model for more details and go to the **Performance** tab.
+5. On the **Performance** tab, scroll down to the **Model performance** section.
+6. Select the **Consumption queries** tab to view the consumption queries over a given time for that model.
+
### View from Project lineage
1. To view your model in your project lineage, go to the main **Overview page** and click on **Project lineage**.
-2. In the lower left of your lineage, click on **Lenses** and select **Usage queries**.
-
+2. In the lower left of your lineage, click on **Lenses** and select **Consumption queries**.
+
-3. Your lineage should display a small red box above each model, indicating the usage query number for each model. The query number for each model represents the query history over the last 30 days.
+3. Your lineage should display a small red box above each model, indicating the consumption query number. The number for each model represents the model consumption over the last 30 days.
### View from Model list
-1. To view your model in your project lineage, go to the main **Overview page**.
+1. To view a list of models, go to the main **Overview page**.
2. In the left navigation, go to the **Resources** tab and click on **Models** to view the models list.
-3. You can view the usage query count for the models and sort by most or least queried. The query number for each model represents the query history over the last 30 days.
-
-
+3. You can view the consumption query count for the models and sort by most or least consumed. The consumption query number for each model represents the consumption over the last 30 days.
+
diff --git a/website/docs/docs/core/connect-data-platform/postgres-setup.md b/website/docs/docs/core/connect-data-platform/postgres-setup.md
index 7720e82844d..b6f34a00e0b 100644
--- a/website/docs/docs/core/connect-data-platform/postgres-setup.md
+++ b/website/docs/docs/core/connect-data-platform/postgres-setup.md
@@ -5,7 +5,7 @@ id: "postgres-setup"
meta:
maintained_by: dbt Labs
authors: 'core dbt maintainers'
- github_repo: 'dbt-labs/dbt-core'
+ github_repo: 'dbt-labs/dbt-postgres'
pypi_package: 'dbt-postgres'
min_core_version: 'v0.4.0'
cloud_support: Supported
diff --git a/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md b/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md
deleted file mode 100644
index 544590b18df..00000000000
--- a/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md
+++ /dev/null
@@ -1,27 +0,0 @@
----
-title: "Upgrading to v1.9 (beta)"
-id: upgrading-to-v1.9
-description: New features and changes in dbt Core v1.9
-displayed_sidebar: "docs"
----
-
-## Resources
-
-- Changelog (coming soon)
-- [dbt Core CLI Installation guide](/docs/core/installation-overview)
-- [Cloud upgrade guide](/docs/dbt-versions/upgrade-dbt-version-in-cloud) — dbt Cloud is now versionless. dbt v1.9 will not appear in the version dropdown. Select **Versionless** to get all the latest features and functionality in your dbt Cloud account.
-
-## What to know before upgrading
-
-dbt Labs is committed to providing backward compatibility for all versions 1.x, except for any changes explicitly mentioned on this page. If you encounter an error upon upgrading, please let us know by [opening an issue](https://github.com/dbt-labs/dbt-core/issues/new).
-
-
-## New and changed features and functionality
-
-Features and functionality new in dbt v1.9.
-
-**Coming soon**
-
-## Quick hits
-
-**Coming soon**
\ No newline at end of file
diff --git a/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.8.md b/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.8.md
index dd22329668c..9163047e7e0 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.8.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.8.md
@@ -98,13 +98,13 @@ The ability for installed packages to override built-in materializations without
### Managing changes to legacy behaviors
-dbt Core v1.8 has introduced flags for [managing changes to legacy behaviors](/reference/global-configs/legacy-behaviors). You may opt into recently introduced changes (disabled by default), or opt out of mature changes (enabled by default), by setting `True` / `False` values, respectively, for `flags` in `dbt_project.yml`.
+dbt Core v1.8 has introduced flags for [managing changes to legacy behaviors](/reference/global-configs/behavior-changes). You may opt into recently introduced changes (disabled by default), or opt out of mature changes (enabled by default), by setting `True` / `False` values, respectively, for `flags` in `dbt_project.yml`.
You can read more about each of these behavior changes in the following links:
-- (Mature, enabled by default) [Require explicit package overrides for builtin materializations](/reference/global-configs/legacy-behaviors#require_explicit_package_overrides_for_builtin_materializations)
-- (Introduced, disabled by default) [Require resource names without spaces](https://docs.getdbt.com/reference/global-configs/legacy-behaviors#require_resource_names_without_spaces)
-- (Introduced, disabled by default) [Run project hooks (`on-run-*`) in the `dbt source freshness` command](/reference/global-configs/legacy-behaviors#source_freshness_run_project_hooks)
+- (Mature, enabled by default) [Require explicit package overrides for builtin materializations](/reference/global-configs/behavior-changes#require_explicit_package_overrides_for_builtin_materializations)
+- (Introduced, disabled by default) [Require resource names without spaces](/reference/global-configs/behavior-changes#require_resource_names_without_spaces)
+- (Introduced, disabled by default) [Run project hooks (`on-run-*`) in the `dbt source freshness` command](/reference/global-configs/behavior-changes#source_freshness_run_project_hooks)
## Quick hits
diff --git a/website/docs/docs/dbt-versions/release-notes.md b/website/docs/docs/dbt-versions/release-notes.md
index a9db34334ad..e969e10adc6 100644
--- a/website/docs/docs/dbt-versions/release-notes.md
+++ b/website/docs/docs/dbt-versions/release-notes.md
@@ -20,8 +20,11 @@ Release notes are grouped by month for both multi-tenant and virtual private clo
## August 2024
- **New**: You can now configure metrics at finer time granularities, such as hour, minute, or even second. This is particularly useful for more detailed analysis and for datasets where high-resolution time data is required, such as minute-by-minute event tracking. Refer to [dimensions](/docs/build/dimensions) for more information about time granularity.
+- **Enhancement**: Microsoft Excel now supports [saved selections](/docs/cloud-integrations/semantic-layer/excel#using-saved-selections) and [saved queries](/docs/cloud-integrations/semantic-layer/excel#using-saved-queries). Use saved selections to save your query selections within the Excel application. The application also clears stale data in [trailing rows](/docs/cloud-integrations/semantic-layer/excel#other-settings) by default. To return your results and keep any previously selected data intact, deselect the **Clear trailing rows** option.
+- **Behavior change:** GitHub is no longer supported for OAuth login to dbt Cloud. Use a supported [SSO or OAuth provider](/docs/cloud/manage-access/sso-overview) to securely manage access to your dbt Cloud account.
## July 2024
+- **Behavior change:** `target_schema` is no longer a required configuration for [snapshots](/docs/build/snapshots). You can now target different schemas for snapshots across development and deployment environments using the [schema config](/reference/resource-configs/schema).
- **New:** [Connections](/docs/cloud/connect-data-platform/about-connections#connection-management) are now available under **Account settings** as a global setting. Previously, they were found under **Project settings**. This is being rolled out in phases over the coming weeks.
- **New:** Admins can now assign [environment-level permissions](/docs/cloud/manage-access/environment-permissions) to groups for specific roles.
- **New:** [Merge jobs](/docs/deploy/merge-jobs) for implementing [continuous deployment (CD)](/docs/deploy/continuous-deployment) workflows are now GA in dbt Cloud. Previously, you had to either set up a custom GitHub action or manually build the changes every time a pull request is merged.
@@ -147,7 +150,7 @@ The following features are new or enhanced as part of our [dbt Cloud Launch Show
-- **Behavior change:** Introduced the `require_resource_names_without_spaces` flag, opt-in and disabled by default. If set to `True`, dbt will raise an exception if it finds a resource name containing a space in your project or an installed package. This will become the default in a future version of dbt. Read [No spaces in resource names](/reference/global-configs/legacy-behaviors#no-spaces-in-resource-names) for more information.
+- **Behavior change:** Introduced the `require_resource_names_without_spaces` flag, opt-in and disabled by default. If set to `True`, dbt will raise an exception if it finds a resource name containing a space in your project or an installed package. This will become the default in a future version of dbt. Read [No spaces in resource names](/reference/global-configs/behavior-changes#no-spaces-in-resource-names) for more information.
## April 2024
@@ -159,7 +162,7 @@ The following features are new or enhanced as part of our [dbt Cloud Launch Show
-- **Behavior change:** Introduced the `require_explicit_package_overrides_for_builtin_materializations` flag, opt-in and disabled by default. If set to `True`, dbt will only use built-in materializations defined in the root project or within dbt, rather than implementations in packages. This will become the default in May 2024 (dbt Core v1.8 and "Versionless" dbt Cloud). Read [Package override for built-in materialization](/reference/global-configs/legacy-behaviors#package-override-for-built-in-materialization) for more information.
+- **Behavior change:** Introduced the `require_explicit_package_overrides_for_builtin_materializations` flag, opt-in and disabled by default. If set to `True`, dbt will only use built-in materializations defined in the root project or within dbt, rather than implementations in packages. This will become the default in May 2024 (dbt Core v1.8 and "Versionless" dbt Cloud). Read [Package override for built-in materialization](/reference/global-configs/behavior-changes#package-override-for-built-in-materialization) for more information.
**dbt Semantic Layer**
- **New**: Use Saved selections to [save your query selections](/docs/cloud-integrations/semantic-layer/gsheets#using-saved-selections) within the [Google Sheets application](/docs/cloud-integrations/semantic-layer/gsheets). They can be made private or public and refresh upon loading.
@@ -181,7 +184,7 @@ The following features are new or enhanced as part of our [dbt Cloud Launch Show
- **Fix:** `dbt parse` no longer shows an error when you use a list of filters (instead of just a string filter) on a metric.
- **Fix:** `join_to_timespine` now properly gets applied to conversion metric input measures.
- **Fix:** Fixed an issue where exports in Redshift were not always committing to the DWH, which also had the side-effect of leaving table locks open.
-- **Behavior change:** Introduced the `source_freshness_run_project_hooks` flag, opt-in and disabled by default. If set to `True`, dbt will include `on-run-*` project hooks in the `source freshness` command. This will become the default in a future version of dbt. Read [Project hooks with source freshness](/reference/global-configs/legacy-behaviors#project-hooks-with-source-freshness) for more information.
+- **Behavior change:** Introduced the `source_freshness_run_project_hooks` flag, opt-in and disabled by default. If set to `True`, dbt will include `on-run-*` project hooks in the `source freshness` command. This will become the default in a future version of dbt. Read [Project hooks with source freshness](/reference/global-configs/behavior-changes#project-hooks-with-source-freshness) for more information.
## February 2024
diff --git a/website/docs/docs/use-dbt-semantic-layer/consume-metrics.md b/website/docs/docs/use-dbt-semantic-layer/consume-metrics.md
new file mode 100644
index 00000000000..c55b4bcb632
--- /dev/null
+++ b/website/docs/docs/use-dbt-semantic-layer/consume-metrics.md
@@ -0,0 +1,38 @@
+---
+title: "Consume metrics from your Semantic Layer"
+description: "Learn how to query and consume metrics from your deployed dbt Semantic Layer using various tools and APIs."
+sidebar_label: "Consume your metrics"
+tags: [Semantic Layer]
+pagination_next: "docs/use-dbt-semantic-layer/sl-faqs"
+---
+
+After [deploying](/docs/use-dbt-semantic-layer/deploy-sl) your dbt Semantic Layer, the next important (and fun!) step is querying and consuming the metrics you’ve defined. This page links to key resources that guide you through the process of consuming metrics across different integrations, APIs, and tools, using a range of [query syntaxes](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata).
+
+Once your Semantic Layer is deployed, you can start querying your metrics using a variety of tools and APIs. Here are the main resources to get you started:
+
+### Available integrations
+
+Integrate the dbt Semantic Layer with a variety of business intelligence (BI) tools and data platforms, enabling seamless metric queries within your existing workflows. Explore the following integrations:
+
+- [Available integrations](/docs/cloud-integrations/avail-sl-integrations) — Review a wide range of partners such as Tableau, Google Sheets, Microsoft Excel, and more, where you can query your metrics directly from the dbt Semantic Layer.
+
+### Query with APIs
+
+To leverage the full power of the dbt Semantic Layer, you can use the dbt Semantic Layer APIs for querying metrics programmatically:
+- [dbt Semantic Layer APIs](/docs/dbt-cloud-apis/sl-api-overview) — Learn how to use the dbt Semantic Layer APIs to query metrics in downstream tools, ensuring consistent and reliable data metrics.
+ - [JDBC API query syntax](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata) — Dive into the syntax for querying metrics with the JDBC API, with examples and detailed instructions.
+ - [GraphQL API query syntax](/docs/dbt-cloud-apis/sl-graphql#querying) — Learn the syntax for querying metrics via the GraphQL API, including examples and detailed instructions.
+  - [Python SDK](/docs/dbt-cloud-apis/sl-python#usage-examples) — Use the Python SDK library to query metrics programmatically with Python, as sketched after this list.
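+
+As a minimal sketch with the Python SDK (the metric name, environment ID, and host are placeholders):
+
+```python
+from dbtsl import SemanticLayerClient
+
+client = SemanticLayerClient(
+    environment_id=123,
+    auth_token="<service-token>",
+    host="semantic-layer.cloud.getdbt.com",
+)
+
+# Queries must run inside a session; the result is an Arrow table
+with client.session():
+    table = client.query(metrics=["order_total"], group_by=["metric_time"])
+    print(table)
+```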
+
+### Query during development
+
+For developers working within the dbt ecosystem, it’s essential to understand how to query metrics during the development phase using MetricFlow commands:
+- [MetricFlow commands](/docs/build/metricflow-commands) — Learn how to use MetricFlow commands to query metrics directly during the development process, ensuring your metrics are correctly defined and working as expected.
+
+## Next steps
+
+After understanding the basics of querying metrics, consider optimizing your setup and ensuring the integrity of your metric definitions:
+
+- [Optimize querying performance](/docs/use-dbt-semantic-layer/sl-cache) — Improve query speed and efficiency by using declarative caching techniques.
+- [Validate semantic nodes in CI](/docs/deploy/ci-jobs#semantic-validations-in-ci) — Ensure that any changes to dbt models don’t break your metrics by validating semantic nodes in Continuous Integration (CI) jobs.
+- [Build your metrics and semantic models](/docs/build/build-metrics-intro) — If you haven’t already, learn how to define and build your metrics and semantic models using your preferred development tool.
diff --git a/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md b/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
index 73e39589587..e09a68b97c4 100644
--- a/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
@@ -4,7 +4,7 @@ id: dbt-sl
description: "Learn how the dbt Semantic Layer enables data teams to centrally define and query metrics."
sidebar_label: "About the dbt Semantic Layer"
tags: [Semantic Layer]
-hide_table_of_contents: true
+hide_table_of_contents: false
pagination_next: "guides/sl-snowflake-qs"
pagination_prev: null
---
@@ -15,7 +15,8 @@ Moving metric definitions out of the BI layer and into the modeling layer allows
Refer to the [dbt Semantic Layer FAQs](/docs/use-dbt-semantic-layer/sl-faqs) or [Why we need a universal semantic layer](https://www.getdbt.com/blog/universal-semantic-layer/) blog post to learn more.
-## Explore the dbt Semantic Layer
+## Get started with the dbt Semantic Layer
+
import Features from '/snippets/_sl-plan-info.md'
@@ -25,54 +26,28 @@ product="dbt Semantic Layer"
plan="dbt Cloud Team or Enterprise"
/>
-
-
-
-
-
+This page points to various resources available to help you understand, configure, deploy, and integrate the dbt Semantic Layer. The following sections contain links to specific pages that explain each aspect in detail. Use these links to navigate directly to the information you need, whether you're setting up the Semantic Layer for the first time, deploying metrics, or integrating with downstream tools.
-
-
+Refer to the following resources to get started with the dbt Semantic Layer:
+- [Quickstart with the dbt Cloud Semantic Layer](/guides/sl-snowflake-qs) — Build and define metrics, set up the dbt Semantic Layer, and query them using our first-class integrations.
+- [dbt Semantic Layer FAQs](/docs/use-dbt-semantic-layer/sl-faqs) — Discover answers to frequently asked questions about the dbt Semantic Layer, such as availability, integrations, and more.
-
+## Configure the dbt Semantic Layer
-
+The following resources provide information on how to configure the dbt Semantic Layer:
+- [Set up the dbt Semantic Layer](/docs/use-dbt-semantic-layer/setup-sl) — Learn how to set up the dbt Semantic Layer in dbt Cloud using intuitive navigation.
+- [Architecture](/docs/use-dbt-semantic-layer/sl-architecture) — Explore the powerful components that make up the dbt Semantic Layer.
-
+## Deploy metrics
+This section provides information on how to deploy the dbt Semantic Layer and materialize your metrics:
+- [Deploy your Semantic Layer](/docs/use-dbt-semantic-layer/deploy-sl) — Run a dbt Cloud job to deploy the dbt Semantic Layer and materialize your metrics.
+- [Write queries with exports](/docs/use-dbt-semantic-layer/exports) — Use exports to write commonly used queries directly within your data platform, on a schedule.
+- [Cache common queries](/docs/use-dbt-semantic-layer/sl-cache) — Leverage result caching and declarative caching for common queries to speed up performance and reduce query computation.
-
+## Consume metrics and integrate
+Consume metrics and integrate the dbt Semantic Layer with downstream tools and applications:
+- [Consume metrics](/docs/use-dbt-semantic-layer/consume-metrics) — Query and consume metrics in downstream tools and applications using the dbt Semantic Layer.
+- [Available integrations](/docs/cloud-integrations/avail-sl-integrations) — Review a wide range of partners you can integrate and query with the dbt Semantic Layer.
+- [dbt Semantic Layer APIs](/docs/dbt-cloud-apis/sl-api-overview) — Use the dbt Semantic Layer APIs to query metrics in downstream tools for consistent, reliable data metrics.
-
diff --git a/website/docs/docs/use-dbt-semantic-layer/deploy-sl.md b/website/docs/docs/use-dbt-semantic-layer/deploy-sl.md
new file mode 100644
index 00000000000..637fa41a3c3
--- /dev/null
+++ b/website/docs/docs/use-dbt-semantic-layer/deploy-sl.md
@@ -0,0 +1,29 @@
+---
+title: "Deploy your metrics"
+id: deploy-sl
+description: "Deploy the dbt Semantic Layer in dbt Cloud by running a job to materialize your metrics."
+sidebar_label: "Deploy your metrics"
+tags: [Semantic Layer]
+pagination_next: "docs/use-dbt-semantic-layer/exports"
+---
+
+
+
+import RunProdJob from '/snippets/_sl-run-prod-job.md';
+
+
+
+## Next steps
+After you've executed a job and deployed your Semantic Layer:
+- [Set up your Semantic Layer](/docs/use-dbt-semantic-layer/setup-sl) in dbt Cloud.
+- Discover the [available integrations](/docs/cloud-integrations/avail-sl-integrations), such as Tableau, Google Sheets, Microsoft Excel, and more.
+- Start querying your metrics with the [API query syntax](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata).
+
+
+## Related docs
+- [Optimize querying performance](/docs/use-dbt-semantic-layer/sl-cache) using declarative caching.
+- [Validate semantic nodes in CI](/docs/deploy/ci-jobs#semantic-validations-in-ci) to ensure code changes made to dbt models don't break these metrics.
+- If you haven't already, learn how to [build your metrics and semantic models](/docs/build/build-metrics-intro) in your development tool of choice.
diff --git a/website/docs/docs/use-dbt-semantic-layer/setup-sl.md b/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
index adad5bd9fd1..3dfa7f3aa7d 100644
--- a/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
@@ -2,8 +2,10 @@
title: "Set up the dbt Semantic Layer"
id: setup-sl
description: "Seamlessly set up the dbt Semantic Layer in dbt Cloud using intuitive navigation."
-sidebar_label: "Set up your Semantic Layer"
+sidebar_label: "Set up the Semantic Layer"
tags: [Semantic Layer]
+pagination_next: "docs/use-dbt-semantic-layer/sl-architecture"
+pagination_prev: "guides/sl-snowflake-qs"
---
With the dbt Semantic Layer, you can centrally define business metrics, reduce code duplication and inconsistency, create self-service in downstream tools, and more.
diff --git a/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md b/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md
index 2062f9e405e..9239275ebdf 100644
--- a/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md
+++ b/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md
@@ -4,7 +4,6 @@ id: sl-architecture
description: "dbt Semantic Layer product architecture and related questions."
sidebar_label: "Semantic Layer architecture"
tags: [Semantic Layer]
-pagination_next: null
---
The dbt Semantic Layer allows you to define metrics and use various interfaces to query them. The Semantic Layer does the heavy lifting to find where the queried data exists in your data platform and generates the SQL to make the request (including performing joins).
diff --git a/website/docs/guides/core-cloud-2.md b/website/docs/guides/core-cloud-2.md
index 93e9e92bfa4..fcc88850b55 100644
--- a/website/docs/guides/core-cloud-2.md
+++ b/website/docs/guides/core-cloud-2.md
@@ -182,6 +182,7 @@ This guide should now have given you some insight and equipped you with a framew
+
Congratulations on finishing this guide! We hope it's given you insight into the considerations you need to take to best plan your move to dbt Cloud.
For the next steps, you can continue exploring our 3-part-guide series on moving from dbt Core to dbt Cloud:
diff --git a/website/docs/guides/custom-cicd-pipelines.md b/website/docs/guides/custom-cicd-pipelines.md
index 59a7767c69b..be23524d096 100644
--- a/website/docs/guides/custom-cicd-pipelines.md
+++ b/website/docs/guides/custom-cicd-pipelines.md
@@ -10,6 +10,9 @@ hide_table_of_contents: true
tags: ['dbt Cloud', 'Orchestration', 'CI']
level: 'Intermediate'
recently_updated: true
+search_weight: "heavy"
+keywords:
+ - bitbucket pipeline, custom pipelines, github, gitlab, azure devops, ci/cd custom pipeline
---
@@ -19,7 +22,6 @@ One of the core tenets of dbt is that analytic code should be version controlled
A note on parlance in this article since each code hosting platform uses different terms for similar concepts. The terms `pull request` (PR) and `merge request` (MR) are used interchangeably to mean the process of merging one branch into another branch.
-
### What are pipelines?
Pipelines (which are known by many names, such as workflows, actions, or build steps) are a series of pre-defined jobs that are triggered by specific events in your repository (PR created, commit pushed, branch merged, etc). Those jobs can do pretty much anything your heart desires assuming you have the proper security access and coding chops.
diff --git a/website/docs/guides/sl-snowflake-qs.md b/website/docs/guides/sl-snowflake-qs.md
index 6d9f88ab159..fb72ee0057e 100644
--- a/website/docs/guides/sl-snowflake-qs.md
+++ b/website/docs/guides/sl-snowflake-qs.md
@@ -619,6 +619,11 @@ select * from final
In the following steps, semantic models enable you to define how to interpret the data related to orders. They include entities (like ID columns serving as keys for joining data), dimensions (for grouping or filtering data), and measures (for data aggregations).
1. In the `metrics` sub-directory, create a new file `fct_orders.yml`.
+
+:::tip
+Make sure to save all semantic models and metrics under the directory defined in the [`model-paths`](/reference/project-configs/model-paths) (or a subdirectory of it, like `models/semantic_models/`). If you save them outside of this path, it will result in an empty `semantic_manifest.json` file, and your semantic models or metrics won't be recognized.
+:::
+
2. Add the following code to that newly created file:
@@ -765,7 +770,11 @@ There are different types of metrics you can configure:
Once you've created your semantic models, it's time to start referencing those measures you made to create some metrics:
-Add metrics to your `fct_orders.yml` semantic model file:
+1. Add metrics to your `fct_orders.yml` semantic model file:
+
+:::tip
+Make sure to save all semantic models and metrics under the directory defined in the [`model-paths`](/reference/project-configs/model-paths) (or a subdirectory of it, like `models/semantic_models/`). If you save them outside of this path, it will result in an empty `semantic_manifest.json` file, and your semantic models or metrics won't be recognized.
+:::
@@ -946,15 +955,6 @@ https://github.com/dbt-labs/docs.getdbt.com/blob/current/website/snippets/_sl-ru
-
-
-What’s happening internally?
-
-- Merging the code into your main branch allows dbt Cloud to pull those changes and build the definition in the manifest produced by the run.
-- Re-running the job in the deployment environment helps materialize the models, which the metrics depend on, in the data platform. It also makes sure that the manifest is up to date.
-- The Semantic Layer APIs pull in the most recent manifest and enables your integration to extract metadata from it.
-
-
## Set up dbt Semantic Layer
diff --git a/website/docs/reference/dbt-jinja-functions/execute.md b/website/docs/reference/dbt-jinja-functions/execute.md
index f99bfa64734..65cd4708dc8 100644
--- a/website/docs/reference/dbt-jinja-functions/execute.md
+++ b/website/docs/reference/dbt-jinja-functions/execute.md
@@ -9,7 +9,7 @@ description: "Use `execute` to return True when dbt is in 'execute' mode."
When you execute a `dbt compile` or `dbt run` command, dbt:
-1. Reads all of the files in your project and generates a "manifest" comprised of models, tests, and other graph nodes present in your project. During this phase, dbt uses the `ref` statements it finds to generate the DAG for your project. **No SQL is run during this phase**, and `execute == False`.
+1. Reads all of the files in your project and generates a [manifest](/reference/artifacts/manifest-json) made up of models, tests, and other graph nodes present in your project. During this phase, dbt uses the [`ref`](/reference/dbt-jinja-functions/ref) and [`source`](/reference/dbt-jinja-functions/source) statements it finds to generate the DAG for your project. **No SQL is run during this phase**, and `execute == False`.
2. Compiles (and runs) each node (e.g. building models or running tests). **SQL is run during this phase**, and `execute == True`.
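+
+For instance, a minimal sketch of gating an introspective query behind `execute` (the `orders` model and `order_date` column are assumptions for illustration):
+
+```sql
+{% if execute %}
+    {# Execute phase: it's safe to query the warehouse here #}
+    {% set results = run_query("select max(order_date) from " ~ ref('orders')) %}
+    {% set max_order_date = results.columns[0].values()[0] %}
+{% else %}
+    {# Parse phase: no SQL may run, so fall back to a placeholder #}
+    {% set max_order_date = none %}
+{% endif %}
+```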
Any Jinja that relies on a result being returned from the database will error during the parse phase. For example, this SQL will return an error:
diff --git a/website/docs/reference/global-configs/about-global-configs.md b/website/docs/reference/global-configs/about-global-configs.md
index bbbe63ac439..3708b8c96be 100644
--- a/website/docs/reference/global-configs/about-global-configs.md
+++ b/website/docs/reference/global-configs/about-global-configs.md
@@ -16,7 +16,7 @@ There is a significant overlap between dbt's flags and dbt's command line option
### Setting flags
There are multiple ways of setting flags, which depend on the use case:
-- **[Project-level `flags` in `dbt_project.yml`](/reference/global-configs/project-flags):** Define version-controlled defaults for everyone running this project. Preserve [legacy behaviors](/reference/global-configs/legacy-behaviors) until their slated deprecation.
+- **[Project-level `flags` in `dbt_project.yml`](/reference/global-configs/project-flags):** Define version-controlled defaults for everyone running this project. Also, opt in or opt out of [behavior changes](/reference/global-configs/behavior-changes) to manage your migration off legacy functionality.
- **[Environment variables](/reference/global-configs/environment-variable-configs):** Define different behavior in different runtime environments (development vs. production vs. [continuous integration](/docs/deploy/continuous-integration)), or different behavior for different users in development (based on personal preferences).
- **[CLI options](/reference/global-configs/command-line-options):** Define behavior specific to _this invocation_. Supported for all dbt commands.
@@ -41,7 +41,7 @@ dbt run --no-fail-fast # set to False
There are two categories of exceptions:
1. **Flags setting file paths:** Flags for file paths that are relevant to runtime execution (for example, `--log-path` or `--state`) cannot be set in `dbt_project.yml`. To override defaults, pass CLI options or set environment variables (`DBT_LOG_PATH`, `DBT_STATE`). Flags that tell dbt where to find project resources (for example, `model-paths`) are set in `dbt_project.yml`, but as a top-level key, outside the `flags` dictionary; these configs are expected to be fully static and never vary based on the command or execution environment.
-2. **Opt-in flags:** Flags opting into [legacy dbt behaviors](/reference/global-configs/legacy-behaviors) can _only_ be defined in `dbt_project.yml`. These are intended to be set in version control and migrated via pull/merge request. Their values should not diverge indefinitely across invocations, environments, or users.
+2. **Behavior change flags:** Flags opting in or out of [behavior changes](/reference/global-configs/behavior-changes) can _only_ be defined in `dbt_project.yml`. These are intended to be set in version control and migrated via pull/merge request. Their values should not diverge indefinitely across invocations, environments, or users.
### Accessing flags
@@ -84,7 +84,7 @@ Because the values of `flags` can differ across invocations, we strongly advise
| [quiet](/reference/global-configs/logs#suppress-non-error-logs-in-output) | boolean | False | ❌ | `DBT_QUIET` | `--quiet` | ✅ |
| [resource-type](/reference/global-configs/resource-type) (v1.8+) | string | None | ❌ | `DBT_RESOURCE_TYPES`, `DBT_EXCLUDE_RESOURCE_TYPES` | `--resource-type`, `--exclude-resource-type` | ✅ |
| [send_anonymous_usage_stats](/reference/global-configs/usage-stats) | boolean | True | ✅ | `DBT_SEND_ANONYMOUS_USAGE_STATS` | `--send-anonymous-usage-stats`, `--no-send-anonymous-usage-stats` | ❌ |
-| [source_freshness_run_project_hooks](/reference/global-configs/legacy-behaviors#source_freshness_run_project_hooks) | boolean | False | ✅ | ❌ | ❌ | ❌ |
+| [source_freshness_run_project_hooks](/reference/global-configs/behavior-changes#source_freshness_run_project_hooks) | boolean | False | ✅ | ❌ | ❌ | ❌ |
| [state](/reference/node-selection/defer) | path | none | ❌ | `DBT_STATE`, `DBT_DEFER_STATE` | `--state`, `--defer-state` | ❌ |
| [static_parser](/reference/global-configs/parsing#static-parser) | boolean | True | ✅ | `DBT_STATIC_PARSER` | `--static-parser`, `--no-static-parser` | ❌ |
| [store_failures](/reference/resource-configs/store_failures) | boolean | False | ✅ (as resource config) | `DBT_STORE_FAILURES` | `--store-failures`, `--no-store-failures` | ✅ |
diff --git a/website/docs/reference/global-configs/legacy-behaviors.md b/website/docs/reference/global-configs/behavior-changes.md
similarity index 75%
rename from website/docs/reference/global-configs/legacy-behaviors.md
rename to website/docs/reference/global-configs/behavior-changes.md
index 1450fda1459..20f5722b944 100644
--- a/website/docs/reference/global-configs/legacy-behaviors.md
+++ b/website/docs/reference/global-configs/behavior-changes.md
@@ -1,7 +1,7 @@
---
-title: "Legacy behaviors"
-id: "legacy-behaviors"
-sidebar: "Legacy behaviors"
+title: "Behavior changes"
+id: "behavior-changes"
+sidebar: "Behavior changes"
---
Most flags exist to configure runtime behaviors with multiple valid choices. The right choice may vary based on the environment, user preference, or the specific invocation.
@@ -12,10 +12,31 @@ Another category of flags provides existing projects with a migration window for
- Preserving the maintainability of dbt software. Every fork in behavior requires additional testing and cognitive overhead that slows future development. These flags exist to facilitate migration from "current" to "better," not to stick around forever.
These flags go through three phases of development:
-1. **Introduction (disabled by default):** dbt adds logic to support both 'old' + 'new' behaviors. The 'new' behavior is gated behind a flag, disabled by default, preserving the old behavior.
+1. **Introduction (disabled by default):** dbt adds logic to support both 'old' and 'new' behaviors. The 'new' behavior is gated behind a flag, disabled by default, preserving the old behavior.
2. **Maturity (enabled by default):** The default value of the flag is switched, from `false` to `true`, enabling the new behavior by default. Users can preserve the 'old' behavior and opt out of the 'new' behavior by setting the flag to `false` in their projects. They may see deprecation warnings when they do so.
3. **Removal (generally enabled):** After marking the flag for deprecation, we remove it along with the 'old' behavior it supported from the dbt codebases. We aim to support most flags indefinitely, but we're not committed to supporting them forever. If we choose to remove a flag, we'll offer significant advance notice.
+## What is a behavior change?
+
+A behavior change means that the same dbt project code and the same dbt commands return one result before the change, and a different result after it.
+
+Examples of behavior changes:
+- dbt begins raising a validation _error_ that it didn't previously.
+- dbt changes the signature of a built-in macro. Your project has a custom reimplementation of that macro. This could lead to errors, because your custom reimplementation will be passed arguments it cannot accept.
+- A dbt adapter renames or removes a method that was previously available on the `{{ adapter }}` object in the dbt-Jinja context.
+- dbt makes a breaking change to contracted metadata artifacts by deleting a required field, changing the name or type of an existing field, or removing the default value of an existing field ([README](https://github.com/dbt-labs/dbt-core/blob/37d382c8e768d1e72acd767e0afdcb1f0dc5e9c5/core/dbt/artifacts/README.md#breaking-changes)).
+- dbt removes one of the fields from [structured logs](/reference/events-logging#structured-logging).
+
+The following are **not** behavior changes:
+- Fixing a bug where the previous behavior was defective, undesirable, or undocumented.
+- dbt begins raising a _warning_ that it didn't previously.
+- dbt updates the language of human-friendly messages in log events.
+- dbt makes a non-breaking change to contracted metadata artifacts by adding a new field with a default, or deleting a field with a default ([README](https://github.com/dbt-labs/dbt-core/blob/37d382c8e768d1e72acd767e0afdcb1f0dc5e9c5/core/dbt/artifacts/README.md#non-breaking-changes)).
+
+The vast majority of changes are not behavior changes. Because introducing these changes does not require any action on the part of users, they are included in continuous releases of dbt Cloud and patch releases of dbt Core.
+
+By contrast, behavior change migrations happen slowly, over the course of months, facilitated by behavior change flags. The flags are loosely coupled to the specific dbt runtime version. By setting flags, users have control over opting in (and later opting out) of these changes.
+
## Behavior change flags
These flags _must_ be set in the `flags` dictionary in `dbt_project.yml`. They configure behaviors closely tied to project code, which means they should be defined in version control and modified through pull or merge requests, with the same testing and peer review.
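+
+As a minimal sketch, using the `source_freshness_run_project_hooks` flag from the table of available flags (the value shown is illustrative):
+
+```yaml
+# dbt_project.yml
+flags:
+  # Opt in to the new behavior before it becomes the default
+  source_freshness_run_project_hooks: true
+```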
diff --git a/website/docs/reference/global-configs/project-flags.md b/website/docs/reference/global-configs/project-flags.md
index 896276d9735..cdbe3463b14 100644
--- a/website/docs/reference/global-configs/project-flags.md
+++ b/website/docs/reference/global-configs/project-flags.md
@@ -17,7 +17,7 @@ flags:
Reference the [table of all flags](/reference/global-configs/about-global-configs#available-flags) to see which global configs are available for setting in [`dbt_project.yml`](/reference/dbt_project.yml).
-The `flags` dictionary is the _only_ place you can opt out of [behavior changes](/reference/global-configs/legacy-behaviors), while the legacy behavior is still supported.
+The `flags` dictionary is the _only_ place you can opt out of [behavior changes](/reference/global-configs/behavior-changes), while the legacy behavior is still supported.
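+
+For example, a minimal sketch of opting out in `dbt_project.yml` while you migrate (the flag shown is one documented behavior change flag; substitute the one you need):
+
+```yaml
+flags:
+  # Preserve the legacy behavior for now; remove once migrated
+  source_freshness_run_project_hooks: false
+```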
diff --git a/website/docs/reference/resource-configs/pre-hook-post-hook.md b/website/docs/reference/resource-configs/pre-hook-post-hook.md
index bf4375c9490..e1e7d67f02e 100644
--- a/website/docs/reference/resource-configs/pre-hook-post-hook.md
+++ b/website/docs/reference/resource-configs/pre-hook-post-hook.md
@@ -45,6 +45,18 @@ select ...
```
+
+
+
+
+```yml
+models:
+  - name: [<model-name>]
+    config:
+      [pre_hook](/reference/resource-configs/pre-hook-post-hook): <sql-statement> | [<sql-statement>]
+      [post_hook](/reference/resource-configs/pre-hook-post-hook): <sql-statement> | [<sql-statement>]
+```
+
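+
+For instance, a filled-in sketch of the spec above (the model name, macro, and grant statement are hypothetical):
+
+```yml
+models:
+  - name: my_model
+    config:
+      pre_hook: "{{ some_macro() }}"
+      post_hook: "grant select on {{ this }} to role reporter"
+```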
@@ -66,6 +78,18 @@ seeds:
+
+
+```yml
+seeds:
+  - name: [<seed-name>]
+    config:
+      [pre_hook](/reference/resource-configs/pre-hook-post-hook): <sql-statement> | [<sql-statement>]
+      [post_hook](/reference/resource-configs/pre-hook-post-hook): <sql-statement> | [<sql-statement>]
+```
+
+
+
@@ -102,6 +126,18 @@ select ...
+
+
+```yml
+snapshots:
+  - name: [<snapshot-name>]
+    config:
+      [pre_hook](/reference/resource-configs/pre-hook-post-hook): <sql-statement> | [<sql-statement>]
+      [post_hook](/reference/resource-configs/pre-hook-post-hook): <sql-statement> | [<sql-statement>]
+```
+
+
+
diff --git a/website/docs/reference/seed-configs.md b/website/docs/reference/seed-configs.md
index dd733795eef..5d5c39071d6 100644
--- a/website/docs/reference/seed-configs.md
+++ b/website/docs/reference/seed-configs.md
@@ -113,8 +113,8 @@ seeds:
config:
[enabled](/reference/resource-configs/enabled): true | false
[tags](/reference/resource-configs/tags): <string> | [<string>]
- [pre-hook](/reference/resource-configs/pre-hook-post-hook): <sql-statement> | [<sql-statement>]
- [post-hook](/reference/resource-configs/pre-hook-post-hook): <sql-statement> | [<sql-statement>]
+ [pre_hook](/reference/resource-configs/pre-hook-post-hook): <sql-statement> | [<sql-statement>]
+ [post_hook](/reference/resource-configs/pre-hook-post-hook): <sql-statement> | [<sql-statement>]
[database](/reference/resource-configs/database): <string>
[schema](/reference/resource-properties/schema): <string>
[alias](/reference/resource-configs/alias): <string>
diff --git a/website/sidebars.js b/website/sidebars.js
index d839e05e184..fe1118b3be2 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -587,10 +587,33 @@ const sidebarSettings = {
label: "Quickstart with the dbt Cloud Semantic Layer",
href: `/guides/sl-snowflake-qs`,
},
- "docs/use-dbt-semantic-layer/setup-sl",
- "docs/use-dbt-semantic-layer/sl-architecture",
- "docs/use-dbt-semantic-layer/exports",
- "docs/use-dbt-semantic-layer/sl-cache",
+ {
+ type: "category",
+ label: "Configure",
+ link: { type: "doc", id: "docs/use-dbt-semantic-layer/setup-sl" },
+ items: [
+ "docs/use-dbt-semantic-layer/setup-sl",
+ "docs/use-dbt-semantic-layer/sl-architecture",
+ ]
+ },
+ {
+ type: "category",
+ label: "Deploy metrics",
+ link: { type: "doc", id: "docs/use-dbt-semantic-layer/deploy-sl" },
+ items: [
+ "docs/use-dbt-semantic-layer/deploy-sl",
+ "docs/use-dbt-semantic-layer/exports",
+ "docs/use-dbt-semantic-layer/sl-cache"
+ ]
+ },
+ {
+ type: "category",
+ label: "Consume",
+ link: { type: "doc", id: "docs/use-dbt-semantic-layer/consume-metrics" },
+ items: [
+ "docs/use-dbt-semantic-layer/consume-metrics",
+ ]
+ },
"docs/use-dbt-semantic-layer/sl-faqs",
],
},
@@ -1074,6 +1097,7 @@ const sidebarSettings = {
},
items: [
"reference/global-configs/about-global-configs",
+ "reference/global-configs/behavior-changes",
{
type: "category",
label: "Setting flags",
@@ -1092,7 +1116,6 @@ const sidebarSettings = {
"reference/global-configs/failing-fast",
"reference/global-configs/indirect-selection",
"reference/global-configs/json-artifacts",
- "reference/global-configs/legacy-behaviors",
"reference/global-configs/parsing",
"reference/global-configs/print-output",
"reference/global-configs/record-timing-info",
diff --git a/website/snippets/_new-sl-setup.md b/website/snippets/_new-sl-setup.md
index ed0fa86f8b2..b9c64bc36f6 100644
--- a/website/snippets/_new-sl-setup.md
+++ b/website/snippets/_new-sl-setup.md
@@ -13,32 +13,41 @@ Select the environment where you want to enable the Semantic Layer:
2. On the **Settings** left sidebar, select the specific project you want to enable the Semantic Layer for.
3. In the **Project details** page, navigate to the **Semantic Layer** section. Select **Configure Semantic Layer**.
-
+
4. In the **Set Up Semantic Layer Configuration** page, select the deployment environment you want for the Semantic Layer and click **Save**. This provides administrators with the flexibility to choose the environment where the Semantic Layer will be enabled.
-:::tip dbt Cloud Enterprise can skip to [Add more credentials](#4-add-more-credentials)
-dbt Cloud Enterprise plans can add multiple credentials and have a different set up. Skip to [Add more credentials](#4-add-more-credentials) for more configuration details.
-:::
+
### 2. Add a credential and create service tokens
-The dbt Semantic Layer uses [service tokens](/docs/dbt-cloud-apis/service-tokens) for authentication which are tied to an underlying data platform credential that you configure. The credential configured is used to execute queries that the Semantic Layer issues against your data platform. This credential controls the physical access to underlying data accessed by the Semantic Layer, and all access policies set in the data platform for this credential will be respected.
+The dbt Semantic Layer uses [service tokens](/docs/dbt-cloud-apis/service-tokens) for authentication, which are tied to an underlying data platform credential that you configure. The configured credential is used to execute the queries that the Semantic Layer issues against your data platform.
+
+This credential controls the physical access to underlying data accessed by the Semantic Layer, and all access policies set in the data platform for this credential will be respected.
+
+| Feature | Team plan | Enterprise plan |
+| --- | :---: | :---: |
+| Service tokens | Can create multiple service tokens linked to one credential. | Can use multiple credentials and link multiple service tokens to each credential. Note that you cannot link a single service token to more than one credential. |
+| Credentials per project | One credential per project. | Can [add multiple](#4-add-more-credentials) credentials per project. |
+| Link multiple service tokens to a single credential | ✅ | ✅ |
-dbt Cloud Enterprise plans can add multiple credentials and map those to service tokens. Refer to [Add more credentials](#4-add-more-credentials) for more information.
+*If you're on a Team plan and need to add more credentials, consider upgrading to our [Enterprise plan](https://www.getdbt.com/contact). Enterprise users can refer to [Add more credentials](#4-add-more-credentials) for detailed steps on adding multiple credentials.*
-1. In the **Set Up Semantic Layer Configuration** page, enter the credentials specific to your data platform that you want the Semantic Layer to use.
+1. After selecting the deployment environment, you should see the **Credentials & service tokens** page.
+2. Click the **Add Semantic Layer credential** button.
+3. In the **1. Add credentials** section, enter the credentials specific to your data platform that you want the Semantic Layer to use.
- Use credentials with minimal privileges. The Semantic Layer requires read access to the schema(s) containing the dbt models used in your semantic models for downstream applications.
- Note that environment variables, such as `{{env_var('DBT_WAREHOUSE') }}`, aren't supported in the dbt Semantic Layer yet. You must use the actual credentials.
-
-1. Create a **Service Token** after you add the credential.
- * Enterprise plans: Name and generate a service token on the credential page directly.
- * Team plans: You can return to the **Project Details** page and click the **Generate a Service Token** button.
-2. Name the token and save it. Once the token is generated, you won't be able to view this token again so make sure to record it somewhere safe.
+
+
+4. After adding credentials, scroll to **2. Map new service token**.
+5. Name the token and ensure the permission set includes 'Semantic Layer Only' and 'Metadata Only'.
+6. Click **Save**. Once the token is generated, you won't be able to view this token again so make sure to record it somewhere safe.
:::info
-Teams plans can create multiple service tokens that map to one underlying credential. Adding [multiple credentials](#4-add-more-credentials) for tailored access is available for Enterprise plans.
+- Team plans can create multiple service tokens that link to a single underlying credential, but each project can only have one credential.
+- Enterprise plans can [add multiple credentials](#4-add-more-credentials) and map those to service tokens for tailored access.
Book a free live demo to discover the full potential of dbt Cloud Enterprise.
:::
@@ -63,20 +72,35 @@ Note that:
To add multiple credentials and map them to service tokens:
-1. After configuring your environment, on the **Credentials & service tokens** page click the **Add Semantic Layer credential** button to configure the credential for your data platform.
-2. On the **Create New Semantic Layer Credential** page, you can create multiple credentials and map them to a service token.
-3. In the **Add credentials** section, fill in the data platform's credential fields. We recommend using “read-only” credentials.
+1. After configuring your environment, on the **Credentials & service tokens** page, click the **Add Semantic Layer credential** button to create multiple credentials and map them to a service token.
+2. In the **1. Add credentials** section, fill in the data platform's credential fields. We recommend using “read-only” credentials.
-4. In the **Map new service token** section, map a service token to the credential you configured in the previous step. dbt Cloud automatically selects the service token permission set you need (Semantic Layer Only and Metadata Only).
- - To add another service token, click **Add service token** under the **Linked service tokens** section.
-5. Click **Save** to link the service token to the credential. Remember to copy and save the service token securely, as it won't be viewable again after generation.
-
+3. In the **2. Map new service token** section, map a service token to the credential you configured in the previous step. dbt Cloud automatically selects the service token permission set you need (Semantic Layer Only and Metadata Only).
+
+4. To add another service token during configuration, click **Add Service Token**.
+5. You can link more service tokens to the same credential later on the **Semantic Layer Configuration Details** page. To add another service token to an existing Semantic Layer configuration, click **Add service token** under the **Linked service tokens** section.
+6. Click **Save** to link the service token to the credential. Remember to copy and save the service token securely, as it won't be viewable again after generation.
+
+
+7. To delete a credential, go back to the **Credentials & service tokens** page.
+8. Under **Linked Service Tokens**, click **Edit**, then select **Delete Credential** to remove a credential.
-6. To delete a credential, go back to the **Semantic Layer & Credential**s page. Select **Delete credential** to remove a credential and click **Save**.
-
When you delete a credential, any service tokens mapped to that credential in the project will stop working, breaking access for any end users.
+## Delete configuration
+You can delete the entire Semantic Layer configuration for a project. Note that deleting the Semantic Layer configuration will remove all credentials and unlink all service tokens from the project. It will also cause all queries to the Semantic Layer to fail.
+
+Follow these steps to delete the Semantic Layer configuration for a project:
+
+1. Navigate to the **Project details** page.
+2. In the **Semantic Layer** section, select **Delete Semantic Layer**.
+3. Confirm the deletion by clicking **Yes, delete semantic layer** in the confirmation pop-up.
+
+To re-enable the dbt Semantic Layer setup in the future, you will need to recreate your setup configurations by following the [previous steps](#set-up-dbt-semantic-layer). If your semantic models and metrics are still in your project, no changes are needed. If you've removed them, you'll need to set up the YAML configs again.
+
+
+
## Additional configuration
The following sections describe additional configuration options for Semantic Layer credentials.
diff --git a/website/snippets/_sl-run-prod-job.md b/website/snippets/_sl-run-prod-job.md
index 8eb4049efc8..f820b7f3f79 100644
--- a/website/snippets/_sl-run-prod-job.md
+++ b/website/snippets/_sl-run-prod-job.md
@@ -1,9 +1,22 @@
-Once you’ve committed and merged your metric changes in your dbt project, you can perform a job run in your deployment environment in dbt Cloud to materialize your metrics. The deployment environment is only supported for the dbt Semantic Layer currently.
+This section explains how to run a job in your deployment environment in dbt Cloud to materialize and deploy your metrics. Currently, only the deployment environment is supported.
-1. In dbt Cloud, create a new [deployment environment](/docs/deploy/deploy-environments#create-a-deployment-environment) or use an existing environment on dbt 1.6 or higher.
+1. Once you’ve [defined your semantic models and metrics](/guides/sl-snowflake-qs?step=10), commit and merge your metric changes in your dbt project.
+2. In dbt Cloud, create a new [deployment environment](/docs/deploy/deploy-environments#create-a-deployment-environment) or use an existing environment on dbt 1.6 or higher.
* Note — Only the deployment environment is currently supported (_development experience coming soon_)
-2. To create a new environment, navigate to **Deploy** in the navigation menu, select **Environments**, and then select **Create new environment**.
-3. Fill in your deployment credentials with your Snowflake username and password. You can name the schema anything you want. Click **Save** to create your new production environment.
-4. [Create a new deploy job](/docs/deploy/deploy-jobs#create-and-schedule-jobs) that runs in the environment you just created. Go back to the **Deploy** menu, select **Jobs**, select **Create job**, and click **Deploy job**.
-5. Set the job to run a `dbt build` and select the **Generate docs on run** checkbox.
-6. Run the job and make sure it runs successfully.
+3. To create a new environment, navigate to **Deploy** in the navigation menu, select **Environments**, and then select **Create new environment**.
+4. Fill in your deployment credentials with your Snowflake username and password. You can name the schema anything you want. Click **Save** to create your new production environment.
+5. [Create a new deploy job](/docs/deploy/deploy-jobs#create-and-schedule-jobs) that runs in the environment you just created. Go back to the **Deploy** menu, select **Jobs**, select **Create job**, and click **Deploy job**.
+6. Set the job to run `dbt parse`, which parses your project and generates a [`semantic_manifest.json` artifact](/docs/dbt-cloud-apis/sl-manifest) file. Although running `dbt build` isn't required, you can choose to do so if needed.
+7. Run the job by clicking the **Run now** button. Monitor the job's progress in real time through the **Run summary** tab.
+
+ Once the job completes successfully, your dbt project, including the generated documentation, will be fully deployed and available for use in your production environment. If any issues arise, review the logs to diagnose and address any errors.
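+
+If you'd like to sanity-check the artifact locally first, a minimal sketch from your project directory (assumes dbt Core 1.6+, which writes the artifact to `target/`):
+
+```bash
+dbt parse
+ls target/semantic_manifest.json  # confirm the artifact was generated
+```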
+
+
+
+What’s happening internally?
+
+- Merging the code into your main branch allows dbt Cloud to pull those changes and build the definition in the manifest produced by the run.
+- Re-running the job in the deployment environment helps materialize the models, which the metrics depend on, in the data platform. It also makes sure that the manifest is up to date.
+- The Semantic Layer APIs pull in the most recent manifest and enable your integration to extract metadata from it.
+
+
diff --git a/website/src/theme/DocRoot/Layout/Main/index.js b/website/src/theme/DocRoot/Layout/Main/index.js
index 7303e484863..458cb9d8716 100644
--- a/website/src/theme/DocRoot/Layout/Main/index.js
+++ b/website/src/theme/DocRoot/Layout/Main/index.js
@@ -71,7 +71,7 @@ export default function DocRootLayoutMain({
} else {
setPreData({
showisPrereleaseBanner: true,
- isPrereleaseBannerText: `You are currently viewing v${dbtVersion}, which is a prerelease of dbt Core. The latest stable version is v${latestStableRelease}`,
+ isPrereleaseBannerText: `You are viewing the docs for a prerelease version of dbt Core. Features described here may still be in development, incomplete, or unstable. For the latest generally available features, install the latest stable version.`,
});
}
// If EOLDate not set for version, do not show banner
@@ -86,12 +86,12 @@ export default function DocRootLayoutMain({
if (new Date() > new Date(EOLDate)) {
setEOLData({
showEOLBanner: true,
- EOLBannerText: `This version of dbt Core is no longer supported. No patch releases will be made, even for critical security issues. For better performance, improved security, and new features, you should upgrade to ${latestStableRelease}, the latest stable version.`,
+ EOLBannerText: `This version of dbt Core is no longer supported. There will be no more patches or security fixes. For improved performance, security, and features, upgrade to the latest stable version.`,
});
} else if (new Date() > threeMonths) {
setEOLData({
showEOLBanner: true,
- EOLBannerText: `This version of dbt Core is nearing the end of its critical support period. For better performance, improved security, and new features, you should upgrade to ${latestStableRelease}, the latest stable version.`,
+ EOLBannerText: `This version of dbt Core is nearing the end of its critical support period. For improved performance, security, and features, upgrade to the latest stable version.`,
});
} else {
setEOLData({
diff --git a/website/static/img/docs/collaborate/dbt-explorer/model-consumption-lenses.jpg b/website/static/img/docs/collaborate/dbt-explorer/model-consumption-lenses.jpg
new file mode 100644
index 00000000000..9bf6c7ca0e3
Binary files /dev/null and b/website/static/img/docs/collaborate/dbt-explorer/model-consumption-lenses.jpg differ
diff --git a/website/static/img/docs/collaborate/dbt-explorer/model-consumption-list.jpg b/website/static/img/docs/collaborate/dbt-explorer/model-consumption-list.jpg
new file mode 100644
index 00000000000..653fe7a2f43
Binary files /dev/null and b/website/static/img/docs/collaborate/dbt-explorer/model-consumption-list.jpg differ
diff --git a/website/static/img/docs/collaborate/dbt-explorer/model-query-lenses.jpg b/website/static/img/docs/collaborate/dbt-explorer/model-query-lenses.jpg
deleted file mode 100644
index caa0cc72d67..00000000000
Binary files a/website/static/img/docs/collaborate/dbt-explorer/model-query-lenses.jpg and /dev/null differ
diff --git a/website/static/img/docs/collaborate/dbt-explorer/model-query-list.jpg b/website/static/img/docs/collaborate/dbt-explorer/model-query-list.jpg
deleted file mode 100644
index 14c5c1ceb9c..00000000000
Binary files a/website/static/img/docs/collaborate/dbt-explorer/model-query-list.jpg and /dev/null differ
diff --git a/website/static/img/docs/collaborate/dbt-explorer/model-query-queried-models.jpg b/website/static/img/docs/collaborate/dbt-explorer/model-query-queried-models.jpg
deleted file mode 100644
index 6b20b501880..00000000000
Binary files a/website/static/img/docs/collaborate/dbt-explorer/model-query-queried-models.jpg and /dev/null differ
diff --git a/website/static/img/docs/collaborate/dbt-explorer/model-query-usage-queries.jpg b/website/static/img/docs/collaborate/dbt-explorer/model-query-usage-queries.jpg
deleted file mode 100644
index 41857b3a482..00000000000
Binary files a/website/static/img/docs/collaborate/dbt-explorer/model-query-usage-queries.jpg and /dev/null differ
diff --git a/website/static/img/docs/collaborate/dbt-explorer/most-consumed-models.jpg b/website/static/img/docs/collaborate/dbt-explorer/most-consumed-models.jpg
new file mode 100644
index 00000000000..9e14db15f90
Binary files /dev/null and b/website/static/img/docs/collaborate/dbt-explorer/most-consumed-models.jpg differ
diff --git a/website/static/img/docs/collaborate/model-consumption-queries.jpg b/website/static/img/docs/collaborate/model-consumption-queries.jpg
new file mode 100644
index 00000000000..7fe9b23866c
Binary files /dev/null and b/website/static/img/docs/collaborate/model-consumption-queries.jpg differ
diff --git a/website/static/img/docs/dbt-cloud/semantic-layer/sl-add-credential.jpg b/website/static/img/docs/dbt-cloud/semantic-layer/sl-add-credential.jpg
index b2139da47b0..30baa7acf31 100644
Binary files a/website/static/img/docs/dbt-cloud/semantic-layer/sl-add-credential.jpg and b/website/static/img/docs/dbt-cloud/semantic-layer/sl-add-credential.jpg differ
diff --git a/website/static/img/docs/dbt-cloud/semantic-layer/sl-configure-sl.jpg b/website/static/img/docs/dbt-cloud/semantic-layer/sl-configure-sl.jpg
deleted file mode 100644
index fc44f409efe..00000000000
Binary files a/website/static/img/docs/dbt-cloud/semantic-layer/sl-configure-sl.jpg and /dev/null differ
diff --git a/website/static/img/docs/dbt-cloud/semantic-layer/sl-create-service-token-page.jpg b/website/static/img/docs/dbt-cloud/semantic-layer/sl-create-service-token-page.jpg
index 8e288183be2..da7a57a3d99 100644
Binary files a/website/static/img/docs/dbt-cloud/semantic-layer/sl-create-service-token-page.jpg and b/website/static/img/docs/dbt-cloud/semantic-layer/sl-create-service-token-page.jpg differ
diff --git a/website/static/img/docs/dbt-cloud/semantic-layer/sl-credential-created.jpg b/website/static/img/docs/dbt-cloud/semantic-layer/sl-credential-created.jpg
deleted file mode 100644
index 8c0081129fa..00000000000
Binary files a/website/static/img/docs/dbt-cloud/semantic-layer/sl-credential-created.jpg and /dev/null differ
diff --git a/website/static/img/docs/dbt-cloud/semantic-layer/sl-credentials-service-token.jpg b/website/static/img/docs/dbt-cloud/semantic-layer/sl-credentials-service-token.jpg
new file mode 100644
index 00000000000..7d302201e1f
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/semantic-layer/sl-credentials-service-token.jpg differ
diff --git a/website/static/img/docs/dbt-cloud/semantic-layer/sl-delete-config.jpg b/website/static/img/docs/dbt-cloud/semantic-layer/sl-delete-config.jpg
new file mode 100644
index 00000000000..c53c3e9d302
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/semantic-layer/sl-delete-config.jpg differ
diff --git a/website/static/img/docs/dbt-cloud/semantic-layer/sl-select-env.jpg b/website/static/img/docs/dbt-cloud/semantic-layer/sl-select-env.jpg
new file mode 100644
index 00000000000..f19cb22f2cf
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/semantic-layer/sl-select-env.jpg differ
diff --git a/website/vercel.json b/website/vercel.json
index 8fdf311e72f..f79fc959187 100644
--- a/website/vercel.json
+++ b/website/vercel.json
@@ -599,6 +599,11 @@
"destination": "/reference/global-configs/command-line-options",
"permanent": true
},
+ {
+ "source": "/reference/global-configs/legacy-behaviors",
+ "destination": "/reference/global-configs/behavior-changes",
+ "permanent": true
+ },
{
"source": "/reference/global-configs/yaml-configurations",
"destination": "/reference/global-configs/project-flags",