{filteredData && filteredData.length > 0 ? (
From a6c73591918086fe1fa3607c321b53a098ace688 Mon Sep 17 00:00:00 2001
From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com>
Date: Wed, 8 Nov 2023 17:30:13 -0800
Subject: [PATCH 17/59] fixing links
---
...5-how-to-build-a-mature-dbt-project-from-scratch.md | 2 +-
.../blog/2023-04-24-framework-refactor-alteryx-dbt.md | 4 ++--
.../materializations-guide-1-guide-overview.md | 2 +-
.../release-notes/09-April-2023/product-docs.md | 6 +++---
website/docs/docs/deploy/webhooks.md | 4 ++--
.../productionize-your-dbt-databricks-project.md | 2 +-
website/src/pages/index.js | 6 +++---
website/vercel.json | 10 +++++-----
8 files changed, 18 insertions(+), 18 deletions(-)
diff --git a/website/blog/2021-12-05-how-to-build-a-mature-dbt-project-from-scratch.md b/website/blog/2021-12-05-how-to-build-a-mature-dbt-project-from-scratch.md
index c4de04a48c3..8ea387cf00c 100644
--- a/website/blog/2021-12-05-how-to-build-a-mature-dbt-project-from-scratch.md
+++ b/website/blog/2021-12-05-how-to-build-a-mature-dbt-project-from-scratch.md
@@ -69,7 +69,7 @@ In addition to learning the basic pieces of dbt, we're familiarizing ourselves w
If we decide not to do this, we end up missing out on what the dbt workflow has to offer. If you want to learn more about why we think analytics engineering with dbt is the way to go, I encourage you to read the [dbt Viewpoint](/community/resources/viewpoint#analytics-is-collaborative)!
-In order to learn the basics, we’re going to [port over the SQL file](/guides/migration/tools/refactoring-legacy-sql) that powers our existing "patient_claim_summary" report that we use in our KPI dashboard in parallel to our old transformation process. We’re not ripping out the old plumbing just yet. In doing so, we're going to try dbt on for size and get used to interfacing with a dbt project.
+In order to learn the basics, we’re going to [port over the SQL file](/guides/refactoring-legacy-sql) that powers our existing "patient_claim_summary" report that we use in our KPI dashboard in parallel to our old transformation process. We’re not ripping out the old plumbing just yet. In doing so, we're going to try dbt on for size and get used to interfacing with a dbt project.
**Project Appearance**
diff --git a/website/blog/2023-04-24-framework-refactor-alteryx-dbt.md b/website/blog/2023-04-24-framework-refactor-alteryx-dbt.md
index c5b677f7f3e..9b6135b0984 100644
--- a/website/blog/2023-04-24-framework-refactor-alteryx-dbt.md
+++ b/website/blog/2023-04-24-framework-refactor-alteryx-dbt.md
@@ -94,7 +94,7 @@ It is essential to click on each data source (the green book icons on the leftmo
For this step, we identified which operators were used in the data source (for example, joining data, order columns, group by, etc). Usually the Alteryx operators are pretty self-explanatory and all the information needed for understanding appears on the left side of the menu. We also checked the documentation to understand how each Alteryx operator works behind the scenes.
-We followed dbt Labs' guide on how to refactor legacy SQL queries in dbt and some [best practices](https://docs.getdbt.com/guides/migration/tools/refactoring-legacy-sql). After we finished refactoring all the Alteryx workflows, we checked if the Alteryx output matched the output of the refactored model built on dbt.
+We followed dbt Labs' guide on how to refactor legacy SQL queries in dbt and some [best practices](https://docs.getdbt.com/guides/refactoring-legacy-sql). After we finished refactoring all the Alteryx workflows, we checked if the Alteryx output matched the output of the refactored model built on dbt.
#### Step 3: Use the `audit_helper` package to audit refactored data models
@@ -131,4 +131,4 @@ As we can see, refactoring Alteryx to dbt was an important step in the direction
>
> [Audit_helper in dbt: Bringing data auditing to a higher level](https://docs.getdbt.com/blog/audit-helper-for-migration)
>
-> [Refactoring legacy SQL to dbt](https://docs.getdbt.com/guides/migration/tools/refactoring-legacy-sql)
+> [Refactoring legacy SQL to dbt](https://docs.getdbt.com/guides/refactoring-legacy-sql)
diff --git a/website/docs/best-practices/materializations/materializations-guide-1-guide-overview.md b/website/docs/best-practices/materializations/materializations-guide-1-guide-overview.md
index 467d58ce4a9..248b4c4749b 100644
--- a/website/docs/best-practices/materializations/materializations-guide-1-guide-overview.md
+++ b/website/docs/best-practices/materializations/materializations-guide-1-guide-overview.md
@@ -28,7 +28,7 @@ By the end of this guide you should have a solid understanding of:
- 📒 You’ll want to have worked through the [quickstart guide](/guides) and have a project setup to work through these concepts.
- 🏃🏻♀️ Concepts like dbt runs, `ref()` statements, and models should be familiar to you.
-- 🔧 [**Optional**] Reading through the [How we structure our dbt projects](guides/best-practices/how-we-structure/1-guide-overview) Guide will be beneficial for the last section of this guide, when we review best practices for materializations using the dbt project approach of staging models and marts.
+- 🔧 [**Optional**] Reading through the [How we structure our dbt projects](/best-practices/how-we-structure/1-guide-overview) Guide will be beneficial for the last section of this guide, when we review best practices for materializations using the dbt project approach of staging models and marts.
### Guiding principle
diff --git a/website/docs/docs/dbt-versions/release-notes/09-April-2023/product-docs.md b/website/docs/docs/dbt-versions/release-notes/09-April-2023/product-docs.md
index 84b962c56d2..5082699619b 100644
--- a/website/docs/docs/dbt-versions/release-notes/09-April-2023/product-docs.md
+++ b/website/docs/docs/dbt-versions/release-notes/09-April-2023/product-docs.md
@@ -32,9 +32,9 @@ Hello from the dbt Docs team: @mirnawong1, @matthewshaver, @nghi-ly, and @runleo
## New 📚 Guides and ✏️ blog posts
- [Use Databricks workflows to run dbt Cloud jobs](/guides/orchestration/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs)
-- [Refresh Tableau workbook with extracts after a job finishes](/guides/orchestration/webhooks/zapier-refresh-tableau-workbook)
-- [dbt Python Snowpark workshop/tutorial](/guides/dbt-ecosystem/dbt-python-snowpark/1-overview-dbt-python-snowpark)
-- [How to optimize and troubleshoot dbt Models on Databricks](/guides/dbt-ecosystem/databricks-guides/how_to_optimize_dbt_models_on_databricks)
+- [Refresh Tableau workbook with extracts after a job finishes](/guides/zapier-refresh-tableau-workbook)
+- [dbt Python Snowpark workshop/tutorial](/guides/dbt-python-snowpark)
+- [How to optimize and troubleshoot dbt Models on Databricks](/guides/optimize-dbt-models-on-databricks)
- [The missing guide to debug() in dbt](https://docs.getdbt.com/blog/guide-to-jinja-debug)
- [dbt Squared: Leveraging dbt Core and dbt Cloud together at scale](https://docs.getdbt.com/blog/dbt-squared)
- [Audit_helper in dbt: Bringing data auditing to a higher level](https://docs.getdbt.com/blog/audit-helper-for-migration)
diff --git a/website/docs/docs/deploy/webhooks.md b/website/docs/docs/deploy/webhooks.md
index 069e7a3e283..25e16e201c1 100644
--- a/website/docs/docs/deploy/webhooks.md
+++ b/website/docs/docs/deploy/webhooks.md
@@ -8,7 +8,7 @@ With dbt Cloud, you can create outbound webhooks to send events (notifications)
A webhook is an HTTP-based callback function that allows event-driven communication between two different web applications. This allows you to get the latest information on your dbt jobs in real time. Without it, you would need to make API calls repeatedly to check if there are any updates that you need to account for (polling). Because of this, webhooks are also called _push APIs_ or _reverse APIs_ and are often used for infrastructure development.
-dbt Cloud sends a JSON payload to your application's endpoint URL when your webhook is triggered. You can send a [Slack](/guides/orchestration/webhooks/zapier-slack) notification, a [Microsoft Teams](/guides/orchestration/webhooks/zapier-ms-teams) notification, [open a PagerDuty incident](/guides/orchestration/webhooks/serverless-pagerduty) when a dbt job fails, [and more](/guides/orchestration/webhooks).
+dbt Cloud sends a JSON payload to your application's endpoint URL when your webhook is triggered. You can send a [Slack](/guides/zapier-slack) notification or a [Microsoft Teams](/guides/zapier-ms-teams) notification, or [open a PagerDuty incident](/guides/serverless-pagerduty) when a dbt job fails.
You can create webhooks for these events from the [dbt Cloud web-based UI](#create-a-webhook-subscription) and by using the [dbt Cloud API](#api-for-webhooks):
@@ -549,5 +549,5 @@ DELETE https://{your access URL}/api/v3/accounts/{account_id}/webhooks/subscript
## Related docs
- [dbt Cloud CI](/docs/deploy/continuous-integration)
-- [Use dbt Cloud's webhooks with other SaaS apps](/guides/orchestration/webhooks)
+- [Use dbt Cloud's webhooks with other SaaS apps](/guides)
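The hunk above describes dbt Cloud POSTing a JSON payload to your endpoint URL when a webhook fires. As a rough sketch of the receiving side, the standard-library handler below checks an HMAC-SHA256 signature of the raw body against the `Authorization` header and reads an `eventType` field; both details are assumptions drawn from the validation pattern this page describes, so confirm them against your own webhook configuration.

```python
# Minimal sketch of a receiver for dbt Cloud job-event webhooks.
# Assumptions (verify against the webhooks doc for your account):
#   - dbt Cloud sends an HMAC-SHA256 hex digest of the raw body, keyed with
#     your webhook secret, in the Authorization header.
#   - The payload includes an "eventType" field such as "job.run.completed".
import hashlib
import hmac
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

WEBHOOK_SECRET = os.environ.get("DBT_WEBHOOK_SECRET", "")

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        raw = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        expected = hmac.new(WEBHOOK_SECRET.encode(), raw, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, self.headers.get("Authorization", "")):
            self.send_response(403)
            self.end_headers()
            return
        event = json.loads(raw)
        print(f"dbt Cloud event: {event.get('eventType')}")  # e.g. job.run.completed
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), WebhookHandler).serve_forever()
```

Run it locally and point a test webhook at the exposed port (for example, through a tunnel) to watch events arrive.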
diff --git a/website/docs/guides/productionize-your-dbt-databricks-project.md b/website/docs/guides/productionize-your-dbt-databricks-project.md
index 12060da999d..fdadcef6d34 100644
--- a/website/docs/guides/productionize-your-dbt-databricks-project.md
+++ b/website/docs/guides/productionize-your-dbt-databricks-project.md
@@ -117,7 +117,7 @@ Setting up [notifications](/docs/deploy/job-notifications) in dbt Cloud allows y
2. Select the **Notifications** tab.
3. Choose the desired notification type (Email or Slack) and configure the relevant settings.
-If you require notifications through other means than email or Slack, you can use dbt Cloud's outbound [webhooks](/docs/deploy/webhooks) feature to relay job events to other tools. Webhooks enable you to [integrate dbt Cloud with a wide range of SaaS applications](/guides/orchestration/webhooks), extending your pipeline’s automation into other systems.
+If you require notifications through other means than email or Slack, you can use dbt Cloud's outbound [webhooks](/docs/deploy/webhooks) feature to relay job events to other tools. Webhooks enable you to integrate dbt Cloud with a wide range of SaaS applications, extending your pipeline’s automation into other systems.
## Troubleshooting
diff --git a/website/src/pages/index.js b/website/src/pages/index.js
index cee3796133f..7285bad7182 100644
--- a/website/src/pages/index.js
+++ b/website/src/pages/index.js
@@ -34,7 +34,7 @@ function Home() {
const featuredResource = {
title: "How we structure our dbt projects",
description: "Our hands-on learnings for how to structure your dbt project for success and gain insights into the principles of analytics engineering.",
- link: "/guides/best-practices/how-we-structure/1-guide-overview",
+ link: "/best-practices/how-we-structure/1-guide-overview",
image: "/img/structure-dbt-projects.png",
sectionTitle: 'Featured resource'
}
@@ -146,9 +146,9 @@ function Home() {
diff --git a/website/vercel.json b/website/vercel.json
index c73534da7c2..3c2c0c6e3ce 100644
--- a/website/vercel.json
+++ b/website/vercel.json
@@ -2,6 +2,11 @@
"cleanUrls": true,
"trailingSlash": false,
"redirects": [
+ {
+ "source": "/faqs/Project/docs-for-multiple-projects",
+ "destination": "/docs/collaborate/explore-projects#about-project-level-lineage",
+ "permanent": true
+ },
{
"source": "/faqs/Project/docs-for-multiple-projects",
"destination": "/docs/collaborate/explore-projects#about-project-level-lineage",
@@ -4181,11 +4186,6 @@
"source": "/quickstarts/manual-install",
"destination": "/guides/manual-install",
"permanent": true
- },
- {
- "source": "TODO",
- "destination": "TODO",
- "permanent": true
}
]
}
From 59b506117cc6330fb06d42adc0056b72b46f4fba Mon Sep 17 00:00:00 2001
From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com>
Date: Thu, 9 Nov 2023 11:48:24 -0800
Subject: [PATCH 18/59] fixing links
---
.../dbt-unity-catalog-best-practices.md | 8 ++++----
website/docs/community/resources/getting-help.md | 2 +-
website/docs/docs/cloud/billing.md | 2 +-
website/docs/docs/connect-adapters.md | 2 +-
website/docs/docs/contribute-core-adapters.md | 4 ++--
.../core-upgrade/07-upgrading-to-v1.1.md | 2 +-
website/docs/docs/dbt-versions/core-versions.md | 2 +-
.../04-Sept-2023/ci-updates-phase2-rn.md | 2 +-
.../04-Sept-2023/product-docs-summer-rn.md | 2 +-
.../dbt-databricks-unity-catalog-support.md | 2 +-
website/docs/docs/deploy/ci-jobs.md | 2 +-
website/docs/docs/supported-data-platforms.md | 2 +-
website/docs/docs/trusted-adapters.md | 2 +-
website/docs/docs/verified-adapters.md | 2 +-
website/docs/guides/adapter-creation.md | 5 ++---
website/docs/guides/dbt-models-on-databricks.md | 4 ++--
.../productionize-your-dbt-databricks-project.md | 16 ++++++++--------
website/docs/guides/set-up-ci.md | 6 +++---
.../guides/set-up-your-databricks-dbt-project.md | 2 +-
website/docs/reference/commands/init.md | 2 +-
website/docs/reference/events-logging.md | 2 +-
.../reference/resource-configs/no-configs.md | 2 +-
22 files changed, 37 insertions(+), 38 deletions(-)
diff --git a/website/docs/best-practices/dbt-unity-catalog-best-practices.md b/website/docs/best-practices/dbt-unity-catalog-best-practices.md
index 0d24cc320ec..89153fe1b86 100644
--- a/website/docs/best-practices/dbt-unity-catalog-best-practices.md
+++ b/website/docs/best-practices/dbt-unity-catalog-best-practices.md
@@ -60,9 +60,9 @@ Ready to start transforming your Unity Catalog datasets with dbt?
Check out the resources below for guides, tips, and best practices:
-- [How we structure our dbt projects](https://docs.getdbt.com/best-practices/how-we-structure/1-guide-overview)
+- [How we structure our dbt projects](/best-practices/how-we-structure/1-guide-overview)
- [Self-paced dbt fundamentals training videos](https://courses.getdbt.com/courses/fundamentals)
-- [Customizing CI/CD](https://docs.getdbt.com/guides/orchestration/custom-cicd-pipelines/1-cicd-background) & [SQL linting](https://docs.getdbt.com/guides/orchestration/custom-cicd-pipelines/2-lint-on-push)
-- [Debugging errors](https://docs.getdbt.com/best-practices/debugging-errors)
-- [Writing custom generic tests](https://docs.getdbt.com/best-practices/writing-custom-generic-tests)
+- [Customizing CI/CD](/guides/custom-cicd-pipelines)
+- [Debugging errors](/guides/debug-errors)
+- [Writing custom generic tests](/best-practices/writing-custom-generic-tests)
- [dbt packages hub](https://hub.getdbt.com/)
diff --git a/website/docs/community/resources/getting-help.md b/website/docs/community/resources/getting-help.md
index 658f7d154db..2f30644186e 100644
--- a/website/docs/community/resources/getting-help.md
+++ b/website/docs/community/resources/getting-help.md
@@ -9,7 +9,7 @@ dbt is open source, and has a generous community behind it. Asking questions wel
#### Search the existing documentation
The docs site you're on is highly searchable, make sure to explore for the answer here as a first step. If you're new to dbt, try working through the [quickstart guide](/guides) first to get a firm foundation on the essential concepts.
#### Try to debug the issue yourself
-We have a handy guide on [debugging errors](/best-practices/debugging-errors) to help out! This guide also helps explain why errors occur, and which docs you might need to search for help.
+We have a handy guide on [debugging errors](/guides/debug-errors) to help out! This guide also helps explain why errors occur, and which docs you might need to search for help.
#### Search for answers using your favorite search engine
We're committed to making more errors searchable, so it's worth checking if there's a solution already out there! Further, some errors related to installing dbt, the SQL in your models, or getting YAML right, are errors that are not-specific to dbt, so there may be other resources to check.
diff --git a/website/docs/docs/cloud/billing.md b/website/docs/docs/cloud/billing.md
index ef3eb00a3c6..6853cc0004b 100644
--- a/website/docs/docs/cloud/billing.md
+++ b/website/docs/docs/cloud/billing.md
@@ -237,7 +237,7 @@ To understand better how long each model takes to run within the context of a sp
Once you've identified which models could be optimized, check out these other resources that walk through how to optimize your work:
* [Build scalable and trustworthy data pipelines with dbt and BigQuery](https://services.google.com/fh/files/misc/dbt_bigquery_whitepaper.pdf)
* [Best Practices for Optimizing Your dbt and Snowflake Deployment](https://www.snowflake.com/wp-content/uploads/2021/10/Best-Practices-for-Optimizing-Your-dbt-and-Snowflake-Deployment.pdf)
-* [How to optimize and troubleshoot dbt models on Databricks](/guides/dbt-ecosystem/databricks-guides/how_to_optimize_dbt_models_on_databricks)
+* [How to optimize and troubleshoot dbt models on Databricks](/guides/optimize-dbt-models-on-databricks)
## FAQs
diff --git a/website/docs/docs/connect-adapters.md b/website/docs/docs/connect-adapters.md
index 77ead34e51d..e301cfc237e 100644
--- a/website/docs/docs/connect-adapters.md
+++ b/website/docs/docs/connect-adapters.md
@@ -3,7 +3,7 @@ title: "How to connect to adapters"
id: "connect-adapters"
---
-Adapters are an essential component of dbt. At their most basic level, they are how dbt connects with the various supported data platforms. At a higher-level, adapters strive to give analytics engineers more transferrable skills as well as standardize how analytics projects are structured. Gone are the days where you have to learn a new language or flavor of SQL when you move to a new job that has a different data platform. That is the power of adapters in dbt — for more detail, read the [What are adapters](/guides/dbt-ecosystem/adapter-development/1-what-are-adapters) guide.
+Adapters are an essential component of dbt. At their most basic level, they are how dbt connects with the various supported data platforms. At a higher level, adapters strive to give analytics engineers more transferable skills as well as standardize how analytics projects are structured. Gone are the days where you have to learn a new language or flavor of SQL when you move to a new job that has a different data platform. That is the power of adapters in dbt — for more detail, refer to the [Build, test, document, and promote adapters](/guides/adapter-creation) guide.
This section provides more details on different ways you can connect dbt to an adapter, and explains what a maintainer is.
diff --git a/website/docs/docs/contribute-core-adapters.md b/website/docs/docs/contribute-core-adapters.md
index 553361ee1a2..d3b1edf2a38 100644
--- a/website/docs/docs/contribute-core-adapters.md
+++ b/website/docs/docs/contribute-core-adapters.md
@@ -17,6 +17,6 @@ Community-supported plugins are works in progress, and anyone is welcome to cont
### Create a new adapter
-If you see something missing from the lists above, and you're interested in developing an integration, read more about adapters and how they're developed in the [Adapter Development](/guides/dbt-ecosystem/adapter-development/1-what-are-adapters) section.
+If you see something missing from the lists above, and you're interested in developing an integration, read more about adapters and how they're developed in the [Build, test, document, and promote adapters](/guides/adapter-creation) guide.
-If you have a new adapter, please add it to this list using a pull request! See [Documenting your adapter](/guides/dbt-ecosystem/adapter-development/5-documenting-a-new-adapter) for more information.
+If you have a new adapter, please add it to this list using a pull request! You can refer to [Build, test, document, and promote adapters](/guides/adapter-creation) for more information on documenting your adapter.
diff --git a/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.1.md b/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.1.md
index 7819709558e..403264a46e6 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.1.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.1.md
@@ -21,7 +21,7 @@ There are no breaking changes for code in dbt projects and packages. We are comm
### For maintainers of adapter plugins
-We have reworked the testing suite for adapter plugin functionality. For details on the new testing suite, see: [Testing a new adapter](/guides/dbt-ecosystem/adapter-development/4-testing-a-new-adapter).
+We have reworked the testing suite for adapter plugin functionality. For details on the new testing suite, refer to the "Test your adapter" step in the [Build, test, document, and promote adapters](/guides/adapter-creation) guide.
The abstract methods `get_response` and `execute` now only return `connection.AdapterReponse` in type hints. Previously, they could return a string. We encourage you to update your methods to return an object of class `AdapterResponse`, or implement a subclass specific to your adapter. This also gives you the opportunity to add fields specific to your adapter's query execution, such as `rows_affected` or `bytes_processed`.
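Since the paragraph above asks adapter maintainers to return an `AdapterResponse` (or a subclass) from `get_response` and `execute`, here is a minimal sketch of that change. The import path shown matches the dbt-core 1.x layout and may differ in newer releases; the cursor attributes are hypothetical.

```python
# Illustrative only: an adapter-specific response object with extra metadata,
# as suggested for maintainers updating get_response()/execute().
# NOTE: import path follows the dbt-core 1.x layout; check your installed version.
from dataclasses import dataclass
from typing import Optional

from dbt.contracts.connection import AdapterResponse  # assumption: 1.x layout


@dataclass
class MyAdapterResponse(AdapterResponse):
    """AdapterResponse plus fields specific to this adapter's query engine."""
    bytes_processed: Optional[int] = None
    query_id: Optional[str] = None


def get_response(cursor) -> MyAdapterResponse:  # hypothetical driver cursor
    # Map whatever the driver exposes onto the response object.
    return MyAdapterResponse(
        _message=f"OK {getattr(cursor, 'rowcount', -1)}",
        rows_affected=getattr(cursor, "rowcount", None),
        bytes_processed=getattr(cursor, "bytes_processed", None),
    )
```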
diff --git a/website/docs/docs/dbt-versions/core-versions.md b/website/docs/docs/dbt-versions/core-versions.md
index 5e8e437f0b1..2467f3c946b 100644
--- a/website/docs/docs/dbt-versions/core-versions.md
+++ b/website/docs/docs/dbt-versions/core-versions.md
@@ -84,7 +84,7 @@ Like many software projects, dbt Core releases follow [semantic versioning](http
We are committed to avoiding breaking changes in minor versions for end users of dbt. There are two types of breaking changes that may be included in minor versions:
-- Changes to the [Python interface for adapter plugins](/guides/dbt-ecosystem/adapter-development/3-building-a-new-adapter). These changes are relevant _only_ to adapter maintainers, and they will be clearly communicated in documentation and release notes.
+- Changes to the Python interface for adapter plugins. These changes are relevant _only_ to adapter maintainers, and they will be clearly communicated in documentation and release notes. For more information, refer to the [Build, test, document, and promote adapters](/guides/adapter-creation) guide.
- Changes to metadata interfaces, including [artifacts](/docs/deploy/artifacts) and [logging](/reference/events-logging), signalled by a version bump. Those version upgrades may require you to update external code that depends on these interfaces, or to coordinate upgrades between dbt orchestrations that share metadata, such as [state-powered selection](/reference/node-selection/syntax#about-node-selection).
### How we version adapter plugins
diff --git a/website/docs/docs/dbt-versions/release-notes/04-Sept-2023/ci-updates-phase2-rn.md b/website/docs/docs/dbt-versions/release-notes/04-Sept-2023/ci-updates-phase2-rn.md
index fd2d163b748..a8ae1ade65b 100644
--- a/website/docs/docs/dbt-versions/release-notes/04-Sept-2023/ci-updates-phase2-rn.md
+++ b/website/docs/docs/dbt-versions/release-notes/04-Sept-2023/ci-updates-phase2-rn.md
@@ -29,7 +29,7 @@ Below is a comparison table that describes how deploy jobs and CI jobs behave di
## What you need to update
-- If you want to set up a CI environment for your jobs, dbt Labs recommends that you create your CI job in a dedicated [deployment environment](/docs/deploy/deploy-environments#create-a-deployment-environment) that's connected to a staging database. To learn more about these environment best practices, refer to the guide [Get started with continuous integration tests](/guides/orchestration/set-up-ci/overview).
+- If you want to set up a CI environment for your jobs, dbt Labs recommends that you create your CI job in a dedicated [deployment environment](/docs/deploy/deploy-environments#create-a-deployment-environment) that's connected to a staging database. To learn more about these environment best practices, refer to the guide [Get started with continuous integration tests](/guides/set-up-ci).
- If you had set up a CI job before October 2, 2023, the job might've been misclassified as a deploy job with this update. Below describes how to fix the job type:
diff --git a/website/docs/docs/dbt-versions/release-notes/04-Sept-2023/product-docs-summer-rn.md b/website/docs/docs/dbt-versions/release-notes/04-Sept-2023/product-docs-summer-rn.md
index 2739ef2b7aa..e8fb9539c50 100644
--- a/website/docs/docs/dbt-versions/release-notes/04-Sept-2023/product-docs-summer-rn.md
+++ b/website/docs/docs/dbt-versions/release-notes/04-Sept-2023/product-docs-summer-rn.md
@@ -40,4 +40,4 @@ You can provide feedback by opening a pull request or issue in [our repo](https:
## New 📚 Guides, ✏️ blog posts, and FAQs
* Check out how these community members use the dbt community in the [Community spotlight](/community/spotlight).
* Blog posts published this summer include [Optimizing Materialized Views with dbt](/blog/announcing-materialized-views), [Data Vault 2.0 with dbt Cloud](/blog/data-vault-with-dbt-cloud), and [Create dbt Documentation and Tests 10x faster with ChatGPT](/blog/create-dbt-documentation-10x-faster-with-chatgpt)
-* We now have two new best practice guides: [How we build our metrics](/best-practices/how-we-build-our-metrics/semantic-layer-1-intro) and [Set up Continuous Integration](/guides/orchestration/set-up-ci/overview).
+* We now have two new best practice guides: [How we build our metrics](/best-practices/how-we-build-our-metrics/semantic-layer-1-intro) and [Set up Continuous Integration](/guides/set-up-ci).
diff --git a/website/docs/docs/dbt-versions/release-notes/24-Nov-2022/dbt-databricks-unity-catalog-support.md b/website/docs/docs/dbt-versions/release-notes/24-Nov-2022/dbt-databricks-unity-catalog-support.md
index 25d5ca5205f..ee46cb5f558 100644
--- a/website/docs/docs/dbt-versions/release-notes/24-Nov-2022/dbt-databricks-unity-catalog-support.md
+++ b/website/docs/docs/dbt-versions/release-notes/24-Nov-2022/dbt-databricks-unity-catalog-support.md
@@ -8,6 +8,6 @@ tags: [Nov-2022, v1.1.66.15]
dbt Cloud is the easiest and most reliable way to develop and deploy a dbt project. It helps remove complexity while also giving you more features and better performance. A simpler Databricks connection experience with support for Databricks’ Unity Catalog and better modeling defaults is now available for your use.
-For all the Databricks customers already using dbt Cloud with the dbt-spark adapter, you can now [migrate](https://docs.getdbt.com/guides/migration/tools/migrating-from-spark-to-databricks#migration) your connection to the [dbt-databricks adapter](https://docs.getdbt.com/reference/warehouse-setups/databricks-setup) to get the benefits. [Databricks](https://www.databricks.com/blog/2022/11/17/introducing-native-high-performance-integration-dbt-cloud.html) is committed to maintaining and improving the adapter, so this integrated experience will continue to provide the best of dbt and Databricks.
+For all the Databricks customers already using dbt Cloud with the dbt-spark adapter, you can now [migrate](/guides/migrate-from-spark-to-databricks) your connection to the [dbt-databricks adapter](https://docs.getdbt.com/reference/warehouse-setups/databricks-setup) to get the benefits. [Databricks](https://www.databricks.com/blog/2022/11/17/introducing-native-high-performance-integration-dbt-cloud.html) is committed to maintaining and improving the adapter, so this integrated experience will continue to provide the best of dbt and Databricks.
Check out our [live blog post](https://www.getdbt.com/blog/dbt-cloud-databricks-experience/) to learn more.
diff --git a/website/docs/docs/deploy/ci-jobs.md b/website/docs/docs/deploy/ci-jobs.md
index d10bc780fc2..6114ed1ca14 100644
--- a/website/docs/docs/deploy/ci-jobs.md
+++ b/website/docs/docs/deploy/ci-jobs.md
@@ -9,7 +9,7 @@ You can set up [continuous integration](/docs/deploy/continuous-integration) (CI
## Set up CI jobs {#set-up-ci-jobs}
-dbt Labs recommends that you create your CI job in a dedicated dbt Cloud [deployment environment](/docs/deploy/deploy-environments#create-a-deployment-environment) that's connected to a staging database. Having a separate environment dedicated for CI will provide better isolation between your temporary CI schema builds and your production data builds. Additionally, sometimes teams need their CI jobs to be triggered when a PR is made to a branch other than main. If your team maintains a staging branch as part of your release process, having a separate environment will allow you to set a [custom branch](/faqs/environments/custom-branch-settings) and, accordingly, the CI job in that dedicated environment will be triggered only when PRs are made to the specified custom branch. To learn more, refer to [Get started with CI tests](/guides/orchestration/set-up-ci/overview).
+dbt Labs recommends that you create your CI job in a dedicated dbt Cloud [deployment environment](/docs/deploy/deploy-environments#create-a-deployment-environment) that's connected to a staging database. Having a separate environment dedicated for CI will provide better isolation between your temporary CI schema builds and your production data builds. Additionally, sometimes teams need their CI jobs to be triggered when a PR is made to a branch other than main. If your team maintains a staging branch as part of your release process, having a separate environment will allow you to set a [custom branch](/faqs/environments/custom-branch-settings) and, accordingly, the CI job in that dedicated environment will be triggered only when PRs are made to the specified custom branch. To learn more, refer to [Get started with CI tests](/guides/set-up-ci).
### Prerequisites
- You have a dbt Cloud account.
diff --git a/website/docs/docs/supported-data-platforms.md b/website/docs/docs/supported-data-platforms.md
index a8e146f49d0..c0c9a30db36 100644
--- a/website/docs/docs/supported-data-platforms.md
+++ b/website/docs/docs/supported-data-platforms.md
@@ -8,7 +8,7 @@ pagination_next: "docs/connect-adapters"
pagination_prev: null
---
-dbt connects to and runs SQL against your database, warehouse, lake, or query engine. These SQL-speaking platforms are collectively referred to as _data platforms_. dbt connects with data platforms by using a dedicated adapter plugin for each. Plugins are built as Python modules that dbt Core discovers if they are installed on your system. Read [What are Adapters](/guides/dbt-ecosystem/adapter-development/1-what-are-adapters) for more info.
+dbt connects to and runs SQL against your database, warehouse, lake, or query engine. These SQL-speaking platforms are collectively referred to as _data platforms_. dbt connects with data platforms by using a dedicated adapter plugin for each. Plugins are built as Python modules that dbt Core discovers if they are installed on your system. Refer to the [Build, test, document, and promote adapters](/guides/adapter-creation) guide for more info.
You can [connect](/docs/connect-adapters) to adapters and data platforms natively in dbt Cloud or install them manually using dbt Core.
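Because the hunk above notes that adapter plugins are ordinary Python modules installed next to dbt Core, a quick way to see what an environment has available is to list installed `dbt-*` distributions. This is a convenience sketch for illustration, not dbt's internal discovery logic.

```python
# Convenience sketch: list adapter packages installed in the current Python
# environment by distribution name (what `pip install dbt-<adapter>` leaves behind).
from importlib import metadata

def installed_dbt_adapters() -> list[str]:
    names = set()
    for dist in metadata.distributions():
        name = (dist.metadata.get("Name") or "").lower()
        # Skip core/support packages that also start with "dbt-".
        if name.startswith("dbt-") and name not in {"dbt-core", "dbt-extractor", "dbt-semantic-interfaces"}:
            names.add(name)
    return sorted(names)

if __name__ == "__main__":
    print(installed_dbt_adapters())  # e.g. ['dbt-databricks', 'dbt-postgres']
```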
diff --git a/website/docs/docs/trusted-adapters.md b/website/docs/docs/trusted-adapters.md
index 08191e8ea42..20d61f69575 100644
--- a/website/docs/docs/trusted-adapters.md
+++ b/website/docs/docs/trusted-adapters.md
@@ -21,7 +21,7 @@ pendency on this library?
### Trusted adapter specifications
-See [Building a Trusted Adapter](/guides/dbt-ecosystem/adapter-development/8-building-a-trusted-adapter) for more information, particularly if you are an adapter maintainer considering having your adapter be added to the trusted list.
+Refer to the [Build, test, document, and promote adapters](/guides/adapter-creation) guide for more information, particularly if you are an adapter maintainer considering having your adapter be added to the trusted list.
### Trusted vs Verified
diff --git a/website/docs/docs/verified-adapters.md b/website/docs/docs/verified-adapters.md
index 170bc8f885b..75c7529c247 100644
--- a/website/docs/docs/verified-adapters.md
+++ b/website/docs/docs/verified-adapters.md
@@ -11,7 +11,7 @@ These adapters then earn a "Verified" status so that users can have a certain le
The verification process serves as the on-ramp to integration with dbt Cloud. As such, we restrict applicants to data platform vendors with whom we are already engaged.
-To learn more, see [Verifying a new adapter](/guides/dbt-ecosystem/adapter-development/7-verifying-a-new-adapter).
+To learn more, refer to the [Build, test, document, and promote adapters](/guides/adapter-creation) guide.
import MSCallout from '/snippets/_microsoft-adapters-soon.md';
diff --git a/website/docs/guides/adapter-creation.md b/website/docs/guides/adapter-creation.md
index cd18a413b10..6c9d575bae2 100644
--- a/website/docs/guides/adapter-creation.md
+++ b/website/docs/guides/adapter-creation.md
@@ -164,8 +164,7 @@ We strongly encourage you to adopt the following approach when versioning and re
This step will walk you through the first creating the necessary adapter classes and macros, and provide some resources to help you validate that your new adapter is working correctly. Make sure you've familiarized yourself with the previous steps in this guide.
-Once the adapter is passing most of the functional tests (see ["Testing a new adapter"](4-testing-a-new-adapter)
-), please let the community know that is available to use by adding the adapter to the ["Supported Data Platforms"](/docs/supported-data-platforms) page by following the steps given in [Documenting your adapter](/guides/dbt-ecosystem/adapter-development/5-documenting-a-new-adapter).
+Once the adapter is passing most of the functional tests in the previous "Testing a new adapter" step, please let the community know that it is available to use by adding the adapter to the ["Supported Data Platforms"](/docs/supported-data-platforms) page by following the steps given in the "Documenting your adapter" step.
For any questions you may have, don't hesitate to ask in the [#adapter-ecosystem](https://getdbt.slack.com/archives/C030A0UF5LM) Slack channel. The community is very helpful and likely has experienced a similar issue as you.
@@ -1294,7 +1293,7 @@ Essential functionality includes (but is not limited to the following features):
The adapter should have the required documentation for connecting and configuring the adapter. The dbt docs site should be the single source of truth for this information. These docs should be kept up-to-date.
-See [Documenting a new adapter](/guides/dbt-ecosystem/adapter-development/5-documenting-a-new-adapter) for more information.
+Proceed to the "Document a new adapter" step for more information.
### Release Cadence
diff --git a/website/docs/guides/dbt-models-on-databricks.md b/website/docs/guides/dbt-models-on-databricks.md
index d1a55915777..f26b7253be9 100644
--- a/website/docs/guides/dbt-models-on-databricks.md
+++ b/website/docs/guides/dbt-models-on-databricks.md
@@ -14,7 +14,7 @@ recently_updated: true
## Introduction
-Continuing our Databricks and dbt guide series from the last [guide](/guides/dbt-ecosystem/databricks-guides/how-to-set-up-your-databricks-dbt-project), it’s time to talk about performance optimization. In this follow-up post, we outline simple strategies to optimize for cost, performance, and simplicity when architecting your data pipelines. We’ve encapsulated these strategies in this acronym-framework:
+Building on the [Set up your dbt project with Databricks](/guides/set-up-your-databricks-dbt-project) guide, we'd like to discuss performance optimization. In this follow-up post, we outline simple strategies to optimize for cost, performance, and simplicity when you architect data pipelines. We’ve encapsulated these strategies in this acronym-framework:
- Platform Components
- Patterns & Best Practices
@@ -177,6 +177,6 @@ With the [dbt Cloud Admin API](/docs/dbt-cloud-apis/admin-cloud-api), you can
### Conclusion
-This concludes the second guide in our series on “Working with Databricks and dbt”, following [How to set up your Databricks and dbt Project](/guides/dbt-ecosystem/databricks-guides/how-to-set-up-your-databricks-dbt-project).
+This builds on the content in [Set up your dbt project with Databricks](/guides/set-up-your-databricks-dbt-project).
We welcome you to try these strategies on our example open source TPC-H implementation and to provide us with thoughts/feedback as you start to incorporate these features into production. Looking forward to your feedback on [#db-databricks-and-spark](https://getdbt.slack.com/archives/CNGCW8HKL) Slack channel!
diff --git a/website/docs/guides/productionize-your-dbt-databricks-project.md b/website/docs/guides/productionize-your-dbt-databricks-project.md
index fdadcef6d34..f26a132919b 100644
--- a/website/docs/guides/productionize-your-dbt-databricks-project.md
+++ b/website/docs/guides/productionize-your-dbt-databricks-project.md
@@ -18,10 +18,10 @@ Welcome to the third installment of our comprehensive series on optimizing and d
### Prerequisites
-If you don't have any of the following requirements, refer to the instructions in the [setup guide](/guides/dbt-ecosystem/databricks-guides/how-to-set-up-your-databricks-dbt-project) to catch up:
+If you don't have any of the following requirements, refer to the instructions in the [Set up your dbt project with Databricks](/guides/set-up-your-databricks-dbt-project) guide for help meeting them:
-- You have [set up your Databricks and dbt Cloud environments](/guides/dbt-ecosystem/databricks-guides/how-to-set-up-your-databricks-dbt-project).
-- You have [optimized your dbt models for peak performance](/guides/dbt-ecosystem/databricks-guides/how_to_optimize_dbt_models_on_databricks).
+- You have [set up your dbt project with Databricks](/guides/set-up-your-databricks-dbt-project).
+- You have [optimized your dbt models for peak performance](/guides/optimize-dbt-models-on-databricks).
- You have created two catalogs in Databricks: *dev* and *prod*.
- You have created Databricks Service Principal to run your production jobs.
- You have at least one [deployment environment](/docs/deploy/deploy-environments) in dbt Cloud.
@@ -52,7 +52,7 @@ Let’s [create a job](/docs/deploy/deploy-jobs#create-and-schedule-jobs) in dbt
1. Create a new job by clicking **Deploy** in the header, click **Jobs** and then **Create job**.
2. **Name** the job “Daily refresh”.
3. Set the **Environment** to your *production* environment.
- - This will allow the job to inherit the catalog, schema, credentials, and environment variables defined in the [setup guide](https://docs.getdbt.com/guides/dbt-ecosystem/databricks-guides/how-to-set-up-your-databricks-dbt-project#defining-your-dbt-deployment-environment).
+ - This will allow the job to inherit the catalog, schema, credentials, and environment variables defined in [Set up your dbt project with Databricks](/guides/set-up-your-databricks-dbt-project).
4. Under **Execution Settings**
- Check the **Generate docs on run** checkbox to configure the job to automatically generate project docs each time this job runs. This will ensure your documentation stays evergreen as models are added and modified.
- Select the **Run on source freshness** checkbox to configure dbt [source freshness](/docs/deploy/source-freshness) as the first step of this job. Your sources will need to be configured to [snapshot freshness information](/docs/build/sources#snapshotting-source-data-freshness) for this to drive meaningful insights.
@@ -87,7 +87,7 @@ dbt allows you to write [tests](/docs/build/tests) for your data pipeline, which
2. **Development**: Running tests during development ensures that your code changes do not break existing assumptions, enabling developers to iterate faster by catching problems immediately after writing code.
3. **CI checks**: Automated CI jobs run and test your pipeline end-to end when a pull request is created, providing confidence to developers, code reviewers, and end users that the proposed changes are reliable and will not cause disruptions or data quality issues
-Your CI job will ensure that the models build properly and pass any tests applied to them. We recommend creating a separate *test* environment and having a dedicated service principal. This will ensure the temporary schemas created during CI tests are in their own catalog and cannot unintentionally expose data to other users. Repeat the [steps](/guides/dbt-ecosystem/databricks-guides/how-to-set-up-your-databricks-dbt-project) used to create your *prod* environment to create a *test* environment. After setup, you should have:
+Your CI job will ensure that the models build properly and pass any tests applied to them. We recommend creating a separate *test* environment and having a dedicated service principal. This will ensure the temporary schemas created during CI tests are in their own catalog and cannot unintentionally expose data to other users. Repeat the steps in [Set up your dbt project with Databricks](/guides/set-up-your-databricks-dbt-project) that you used to create your *prod* environment to create a *test* environment. After setup, you should have:
- A catalog called *test*
- A service principal called *dbt_test_sp*
@@ -130,13 +130,13 @@ The five key steps for troubleshooting dbt Cloud issues are:
3. Isolate the problem by running one model at a time in the IDE or undoing the code that caused the issue.
4. Check for problems in compiled files and logs.
-Consult the [Debugging errors documentation](/best-practices/debugging-errors) for a comprehensive list of error types and diagnostic methods.
+Consult the [Debugging errors documentation](/guides/debug-errors) for a comprehensive list of error types and diagnostic methods.
To troubleshoot issues with a dbt Cloud job, navigate to the "Deploy > Run History" tab in your dbt Cloud project and select the failed run. Then, expand the run steps to view [console and debug logs](/docs/deploy/run-visibility#access-logs) to review the detailed log messages. To obtain additional information, open the Artifacts tab and download the compiled files associated with the run.
If your jobs are taking longer than expected, use the [model timing](/docs/deploy/run-visibility#model-timing) dashboard to identify bottlenecks in your pipeline. Analyzing the time taken for each model execution helps you pinpoint the slowest components and optimize them for better performance. The Databricks [Query History](https://docs.databricks.com/sql/admin/query-history.html) lets you inspect granular details such as time spent in each task, rows returned, I/O performance, and execution plan.
-For more on performance tuning, see our guide on [How to Optimize and Troubleshoot dbt Models on Databricks](/guides/dbt-ecosystem/databricks-guides/how_to_optimize_dbt_models_on_databricks).
+For more on performance tuning, see our guide on [How to Optimize and Troubleshoot dbt Models on Databricks](/guides/optimize-dbt-models-on-databricks).
## Advanced considerations
@@ -160,7 +160,7 @@ To trigger your dbt Cloud job from Databricks, follow the instructions in our [D
## Data masking
-Our [Best Practices for dbt and Unity Catalog](/guides/dbt-ecosystem/databricks-guides/dbt-unity-catalog-best-practices) guide recommends using separate catalogs *dev* and *prod* for development and deployment environments, with Unity Catalog and dbt Cloud handling configurations and permissions for environment isolation. Ensuring security while maintaining efficiency in your development and deployment environments is crucial. Additional security measures may be necessary to protect sensitive data, such as personally identifiable information (PII).
+Our [Best Practices for dbt and Unity Catalog](/best-practices/dbt-unity-catalog-best-practices) guide recommends using separate catalogs *dev* and *prod* for development and deployment environments, with Unity Catalog and dbt Cloud handling configurations and permissions for environment isolation. Ensuring security while maintaining efficiency in your development and deployment environments is crucial. Additional security measures may be necessary to protect sensitive data, such as personally identifiable information (PII).
Databricks leverages [Dynamic Views](https://docs.databricks.com/data-governance/unity-catalog/create-views.html#create-a-dynamic-view) to enable data masking based on group membership. Because views in Unity Catalog use Spark SQL, you can implement advanced data masking by using more complex SQL expressions and regular expressions. You can now also apply fine grained access controls like row filters in preview and column masks in preview on tables in Databricks Unity Catalog, which will be the recommended approach to protect sensitive data once this goes GA. Additionally, in the near term, Databricks Unity Catalog will also enable Attribute Based Access Control natively, which will make protecting sensitive data at scale simpler.
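To make the dynamic-view pattern above concrete, here is a hedged PySpark sketch you could run in a Databricks notebook (where a `spark` session already exists). The catalog, schema, table, and group names are placeholders; `is_account_group_member` is the Unity Catalog group-membership function the Databricks documentation describes.

```python
# Sketch of a Unity Catalog dynamic view that masks PII for non-privileged users.
# Catalog/schema/table/group names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already defined in a Databricks notebook

spark.sql("""
    CREATE OR REPLACE VIEW prod.reporting.customers_masked AS
    SELECT
      customer_id,
      CASE
        WHEN is_account_group_member('pii_readers') THEN email
        ELSE regexp_replace(email, '^.*@', '*****@')   -- mask the local part
      END AS email
    FROM prod.raw.customers
""")
```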
diff --git a/website/docs/guides/set-up-ci.md b/website/docs/guides/set-up-ci.md
index c6bcf316952..83362094ec6 100644
--- a/website/docs/guides/set-up-ci.md
+++ b/website/docs/guides/set-up-ci.md
@@ -54,7 +54,7 @@ To be able to find modified nodes, dbt needs to have something to compare agains
### 3. Test your process
-That's it! There are other steps you can take to be even more confident in your work, such as [validating your structure follows best practices](/guides/orchestration/set-up-ci/run-dbt-project-evaluator) and [linting your code](/guides/orchestration/set-up-ci/lint-on-push), but this covers the most critical checks.
+That's it! There are other steps you can take to be even more confident in your work, such as validating that your structure follows best practices and linting your code. For more information, refer to [Get started with Continuous Integration tests](/guides/set-up-ci).
To test your new flow, create a new branch in the dbt Cloud IDE then add a new file or modify an existing one. Commit it, then create a new Pull Request (not a draft). Within a few seconds, you’ll see a new check appear in your git provider.
@@ -313,7 +313,7 @@ The git flow will look like this:
### Advanced prerequisites
-- You have the **Development**, **CI**, and **Production** environments, as described in [the Baseline setup](/guides/orchestration/set-up-ci/in-15-minutes).
+- You have the **Development**, **CI**, and **Production** environments, as described in [the Baseline setup](/guides/set-up-ci).
### 1. Create a `release` branch in your git repo
@@ -350,6 +350,6 @@ Adding a regularly-scheduled job inside of the QA environment whose only command
### 5. Test your process
-When the Release Manager is ready to cut a new release, they will manually open a PR from `qa` into `main` from their git provider (e.g. GitHub, GitLab, Azure DevOps). dbt Cloud will detect the new PR, at which point the existing check in the CI environment will trigger and run. When using the [baseline configuration](/guides/orchestration/set-up-ci/in-15-minutes), it's possible to kick off the PR creation from inside of the dbt Cloud IDE. Under this paradigm, that button will create PRs targeting your QA branch instead.
+When the Release Manager is ready to cut a new release, they will manually open a PR from `qa` into `main` from their git provider (e.g. GitHub, GitLab, Azure DevOps). dbt Cloud will detect the new PR, at which point the existing check in the CI environment will trigger and run. When using the [baseline configuration](/guides/set-up-ci), it's possible to kick off the PR creation from inside of the dbt Cloud IDE. Under this paradigm, that button will create PRs targeting your QA branch instead.
To test your new flow, create a new branch in the dbt Cloud IDE then add a new file or modify an existing one. Commit it, then create a new Pull Request (not a draft) against your `qa` branch. You'll see the integration tests begin to run. Once they complete, manually create a PR against `main`, and within a few seconds you’ll see the tests run again but this time incorporating all changes from all code that hasn't been merged to main yet.
diff --git a/website/docs/guides/set-up-your-databricks-dbt-project.md b/website/docs/guides/set-up-your-databricks-dbt-project.md
index e40a4182423..d378b57cacc 100644
--- a/website/docs/guides/set-up-your-databricks-dbt-project.md
+++ b/website/docs/guides/set-up-your-databricks-dbt-project.md
@@ -68,7 +68,7 @@ We are not covering python in this post but if you want to learn more, check out
Now that the Databricks components are in place, we can configure our dbt project. This involves connecting dbt to our Databricks SQL warehouse to run SQL queries and using a version control system like GitHub to store our transformation code.
-If you are migrating an existing dbt project from the dbt-spark adapter to dbt-databricks, follow this [migration guide](https://docs.getdbt.com/guides/migration/tools/migrating-from-spark-to-databricks#migration) to switch adapters without needing to update developer credentials and other existing configs.
+If you are migrating an existing dbt project from the dbt-spark adapter to dbt-databricks, follow this [migration guide](/guides/migrate-from-spark-to-databricks) to switch adapters without needing to update developer credentials and other existing configs.
If you’re starting a new dbt project, follow the steps below. For a more detailed setup flow, check out our [quickstart guide.](/guides/databricks)
diff --git a/website/docs/reference/commands/init.md b/website/docs/reference/commands/init.md
index ac55717c0ec..e9cc2ccba4e 100644
--- a/website/docs/reference/commands/init.md
+++ b/website/docs/reference/commands/init.md
@@ -36,7 +36,7 @@ If you've just cloned or downloaded an existing dbt project, `dbt init` can stil
`dbt init` knows how to prompt for connection information by looking for a file named `profile_template.yml`. It will look for this file in two places:
-- **Adapter plugin:** What's the bare minumum Postgres profile? What's the type of each field, what are its defaults? This information is stored in a file called [`dbt/include/postgres/profile_template.yml`](https://github.com/dbt-labs/dbt-core/blob/main/plugins/postgres/dbt/include/postgres/profile_template.yml). If you're the maintainer of an adapter plugin, we highly recommend that you add a `profile_template.yml` to your plugin, too. See more details in [building-a-new-adapter](/guides/dbt-ecosystem/adapter-development/3-building-a-new-adapter).
+- **Adapter plugin:** What's the bare minimum Postgres profile? What's the type of each field, what are its defaults? This information is stored in a file called [`dbt/include/postgres/profile_template.yml`](https://github.com/dbt-labs/dbt-core/blob/main/plugins/postgres/dbt/include/postgres/profile_template.yml). If you're the maintainer of an adapter plugin, we highly recommend that you add a `profile_template.yml` to your plugin, too. Refer to the [Build, test, document, and promote adapters](/guides/adapter-creation) guide for more information.
- **Existing project:** If you're the maintainer of an existing project, and you want to help new users get connected to your database quickly and easily, you can include your own custom `profile_template.yml` in the root of your project, alongside `dbt_project.yml`. For common connection attributes, set the values in `fixed`; leave user-specific attributes in `prompts`, but with custom hints and defaults as you'd like.
diff --git a/website/docs/reference/events-logging.md b/website/docs/reference/events-logging.md
index 94b865fad0d..ffdeb7bb752 100644
--- a/website/docs/reference/events-logging.md
+++ b/website/docs/reference/events-logging.md
@@ -4,7 +4,7 @@ title: "Events and logs"
As dbt runs, it generates events. The most common way to see those events is as log messages, written in real time to two places:
- The command line terminal (`stdout`), to provide interactive feedback while running dbt.
-- The debug log file (`logs/dbt.log`), to enable detailed [debugging of errors](/best-practices/debugging-errors) when they occur. The text-formatted log messages in this file include all `DEBUG`-level events, as well as contextual information, such as log level and thread name. The location of this file can be configured via [the `log_path` config](/reference/project-configs/log-path).
+- The debug log file (`logs/dbt.log`), to enable detailed [debugging of errors](/guides/debug-errors) when they occur. The text-formatted log messages in this file include all `DEBUG`-level events, as well as contextual information, such as log level and thread name. The location of this file can be configured via [the `log_path` config](/reference/project-configs/log-path).
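As a small companion to the debug log described above, the sketch below scans `logs/dbt.log` for likely problem lines. The default `log_path` and the plain-text level markers are assumptions; adjust the filter to match your configured log format.

```python
# Tiny helper: surface likely warning/error lines from dbt's debug log.
# Assumes the default log_path (logs/ under the project root); the level
# matching is a rough heuristic over the text-formatted log.
from pathlib import Path

def problem_lines(log_file: str = "logs/dbt.log") -> list[str]:
    path = Path(log_file)
    if not path.exists():
        return []
    return [
        line.rstrip()
        for line in path.read_text(errors="replace").splitlines()
        if "error" in line.lower() or "warn" in line.lower()
    ]

if __name__ == "__main__":
    for line in problem_lines():
        print(line)
```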
diff --git a/website/docs/reference/resource-configs/no-configs.md b/website/docs/reference/resource-configs/no-configs.md
index 5a4ba4eaaa2..5eec26917c8 100644
--- a/website/docs/reference/resource-configs/no-configs.md
+++ b/website/docs/reference/resource-configs/no-configs.md
@@ -8,4 +8,4 @@ If you were guided to this page from a data platform setup article, it most like
- Setting up the profile is the only action the end-user needs to take on the data platform, or
- The subsequent actions the end-user needs to take are not currently documented
-If you'd like to contribute to data platform-specifc configuration information, refer to [Documenting a new adapter](/guides/dbt-ecosystem/adapter-development/5-documenting-a-new-adapter)
\ No newline at end of file
+If you'd like to contribute to data platform-specific configuration information, refer to the [Build, test, document, and promote adapters](/guides/adapter-creation) guide.
From 26338c0f55b04465b346544e5d8cdf2e92155a31 Mon Sep 17 00:00:00 2001
From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com>
Date: Thu, 9 Nov 2023 12:19:23 -0800
Subject: [PATCH 19/59] fixing more links
---
.../materializations-guide-7-conclusion.md | 2 +-
website/docs/docs/cloud/billing.md | 2 +-
website/docs/docs/dbt-cloud-apis/sl-jdbc.md | 2 +-
.../dbt-versions/core-upgrade/08-upgrading-to-v1.0.md | 2 +-
.../release-notes/07-June-2023/product-docs-jun.md | 2 +-
.../release-notes/09-April-2023/product-docs.md | 8 ++++----
website/docs/docs/deploy/deployment-tools.md | 2 +-
website/docs/faqs/Models/available-materializations.md | 2 +-
website/docs/faqs/Project/why-not-write-dml.md | 2 +-
website/docs/faqs/Warehouse/db-connection-dbt-compile.md | 2 +-
website/docs/guides/dbt-python-snowpark.md | 4 ++--
website/docs/guides/migrate-from-spark-to-databricks.md | 2 +-
.../guides/productionize-your-dbt-databricks-project.md | 2 +-
website/docs/guides/refactoring-legacy-sql.md | 2 +-
14 files changed, 18 insertions(+), 18 deletions(-)
diff --git a/website/docs/best-practices/materializations/materializations-guide-7-conclusion.md b/website/docs/best-practices/materializations/materializations-guide-7-conclusion.md
index c0c4e023a55..cd561716fe4 100644
--- a/website/docs/best-practices/materializations/materializations-guide-7-conclusion.md
+++ b/website/docs/best-practices/materializations/materializations-guide-7-conclusion.md
@@ -9,6 +9,6 @@ hoverSnippet: Read this conclusion to our guide on using materializations in dbt
You're now following best practices in your project, and have optimized the materializations of your DAG. You’re equipped with the 3 main materializations that cover almost any analytics engineering situation!
-There are more configs and materializations available, as well as specific materializations for certain platforms and adapters — and like everything with dbt, materializations are extensible, meaning you can create your own [custom materializations](/guides/creating-new-materializations) for your needs. So this is just the beginning of what you can do with these powerful configurations.
+There are more configs and materializations available, as well as specific materializations for certain platforms and adapters — and like everything with dbt, materializations are extensible, meaning you can create your own [custom materializations](/guides/create-new-materializations) for your needs. So this is just the beginning of what you can do with these powerful configurations.
For the vast majority of users and companies though, tables, views, and incremental models will handle everything you can throw at them. Develop your intuition and expertise for these materializations, and you’ll be well on your way to tackling advanced analytics engineering problems.
diff --git a/website/docs/docs/cloud/billing.md b/website/docs/docs/cloud/billing.md
index 6853cc0004b..f66e2aad363 100644
--- a/website/docs/docs/cloud/billing.md
+++ b/website/docs/docs/cloud/billing.md
@@ -215,7 +215,7 @@ If you want to ensure that you're building views whenever the logic is changed,
Executing `dbt build` in this context is unnecessary because the CI job was used to both run and test the code that just got merged into main.
5. Under the **Execution Settings**, select the default production job to compare changes against:
- **Defer to a previous run state** — Select the “Merge Job” you created so the job compares and identifies what has changed since the last merge.
-6. In your dbt project, follow the steps in [Run a dbt Cloud job on merge](/guides/orchestration/custom-cicd-pipelines/3-dbt-cloud-job-on-merge) to create a script to trigger the dbt Cloud API to run your job after a merge happens within your git repository or watch this [video](https://www.loom.com/share/e7035c61dbed47d2b9b36b5effd5ee78?sid=bcf4dd2e-b249-4e5d-b173-8ca204d9becb).
+6. In your dbt project, follow the steps in "Run a dbt Cloud job on merge" in the [Customizing CI/CD with Custom Pipelines](/guides/custom-cicd-pipelines) guide to create a script that triggers the dbt Cloud API to run your job after a merge happens within your git repository, or watch this [video](https://www.loom.com/share/e7035c61dbed47d2b9b36b5effd5ee78?sid=bcf4dd2e-b249-4e5d-b173-8ca204d9becb).
The purpose of the merge job is to:
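For reference, triggering a dbt Cloud job from a merge pipeline comes down to one authenticated POST against the dbt Cloud Administrative API. A minimal Python sketch, assuming the v2 "trigger job run" endpoint and placeholder account ID, job ID, and token supplied through CI secrets:

```python
# Minimal sketch: trigger a dbt Cloud job after a merge to main.
# Assumes the Administrative API v2 "trigger job run" endpoint; the account ID,
# job ID, and API token are placeholders you'd read from your CI provider's secrets.
import os

import requests

ACCOUNT_ID = os.environ["DBT_CLOUD_ACCOUNT_ID"]
JOB_ID = os.environ["DBT_CLOUD_JOB_ID"]
TOKEN = os.environ["DBT_CLOUD_API_TOKEN"]

response = requests.post(
    f"https://cloud.getdbt.com/api/v2/accounts/{ACCOUNT_ID}/jobs/{JOB_ID}/run/",
    headers={"Authorization": f"Token {TOKEN}", "Content-Type": "application/json"},
    json={"cause": "Triggered by merge to main"},
)
response.raise_for_status()
print("dbt Cloud response:", response.json())
```

The linked guide wires a script like this into the git provider's pipeline so the merge job runs only after code lands on main.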
diff --git a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md
index e10d057dc75..931666dd10c 100644
--- a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md
+++ b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md
@@ -363,5 +363,5 @@ semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
## Related docs
-- [dbt Semantic Layer integration best practices](/guides/dbt-ecosystem/sl-partner-integration-guide)
+- [dbt Semantic Layer integration best practices](/guides/sl-partner-integration-guide)
diff --git a/website/docs/docs/dbt-versions/core-upgrade/08-upgrading-to-v1.0.md b/website/docs/docs/dbt-versions/core-upgrade/08-upgrading-to-v1.0.md
index 543368b873a..3f45e44076c 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/08-upgrading-to-v1.0.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/08-upgrading-to-v1.0.md
@@ -51,7 +51,7 @@ Global project macros have been reorganized, and some old unused macros have bee
### For users of adapter plugins
-- **BigQuery:** Support for [ingestion-time-partitioned tables](/guides/legacy/creating-date-partitioned-tables) has been officially deprecated in favor of modern approaches. Use `partition_by` and incremental modeling strategies instead.
+- **BigQuery:** Support for ingestion-time-partitioned tables has been officially deprecated in favor of modern approaches. Use `partition_by` and incremental modeling strategies instead. For more information, refer to [Incremental models](/docs/build/incremental-models).
### For maintainers of plugins + other integrations
diff --git a/website/docs/docs/dbt-versions/release-notes/07-June-2023/product-docs-jun.md b/website/docs/docs/dbt-versions/release-notes/07-June-2023/product-docs-jun.md
index 469d2ac362b..7a474cc091f 100644
--- a/website/docs/docs/dbt-versions/release-notes/07-June-2023/product-docs-jun.md
+++ b/website/docs/docs/dbt-versions/release-notes/07-June-2023/product-docs-jun.md
@@ -32,4 +32,4 @@ Here's what's new to [docs.getdbt.com](http://docs.getdbt.com/) in June:
## New 📚 Guides, ✏️ blog posts, and FAQs
-- Add an Azure DevOps example to the [Customizing CI/CD guide](/guides/orchestration/custom-cicd-pipelines/3-dbt-cloud-job-on-merge).
+- Add an Azure DevOps example to the in the [Customizing CI/CD with Custom Pipelines](/guides/custom-cicd-pipelines) guide.
diff --git a/website/docs/docs/dbt-versions/release-notes/09-April-2023/product-docs.md b/website/docs/docs/dbt-versions/release-notes/09-April-2023/product-docs.md
index 5082699619b..3de29b605ce 100644
--- a/website/docs/docs/dbt-versions/release-notes/09-April-2023/product-docs.md
+++ b/website/docs/docs/dbt-versions/release-notes/09-April-2023/product-docs.md
@@ -31,10 +31,10 @@ Hello from the dbt Docs team: @mirnawong1, @matthewshaver, @nghi-ly, and @runleo
## New 📚 Guides and ✏️ blog posts
-- [Use Databricks workflows to run dbt Cloud jobs](/guides/orchestration/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs)
+- [Use Databricks workflows to run dbt Cloud jobs](/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs)
- [Refresh Tableau workbook with extracts after a job finishes](/guides/zapier-refresh-tableau-workbook)
- [dbt Python Snowpark workshop/tutorial](/guides/dbt-python-snowpark)
- [How to optimize and troubleshoot dbt Models on Databricks](/guides/optimize-dbt-models-on-databricks)
-- [The missing guide to debug() in dbt](https://docs.getdbt.com/blog/guide-to-jinja-debug)
-- [dbt Squared: Leveraging dbt Core and dbt Cloud together at scale](https://docs.getdbt.com/blog/dbt-squared)
-- [Audit_helper in dbt: Bringing data auditing to a higher level](https://docs.getdbt.com/blog/audit-helper-for-migration)
+- [The missing guide to debug() in dbt](/blog/guide-to-jinja-debug)
+- [dbt Squared: Leveraging dbt Core and dbt Cloud together at scale](/blog/dbt-squared)
+- [Audit_helper in dbt: Bringing data auditing to a higher level](/blog/audit-helper-for-migration)
diff --git a/website/docs/docs/deploy/deployment-tools.md b/website/docs/docs/deploy/deployment-tools.md
index 3b2da778a53..cca2368f38a 100644
--- a/website/docs/docs/deploy/deployment-tools.md
+++ b/website/docs/docs/deploy/deployment-tools.md
@@ -126,7 +126,7 @@ Cron is a decent way to schedule bash commands. However, while it may seem like
Use Databricks workflows to call the dbt Cloud job API, which has several benefits such as integration with other ETL processes, utilizing dbt Cloud job features, separation of concerns, and custom job triggering based on custom conditions or logic. These advantages lead to more modularity, efficient debugging, and flexibility in scheduling dbt Cloud jobs.
-For more info, refer to the guide on [Databricks workflows and dbt Cloud jobs](/guides/orchestration/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs).
+For more info, refer to the guide on [Databricks workflows and dbt Cloud jobs](/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs).
diff --git a/website/docs/faqs/Models/available-materializations.md b/website/docs/faqs/Models/available-materializations.md
index fcb3e3a9d26..bf11c92b595 100644
--- a/website/docs/faqs/Models/available-materializations.md
+++ b/website/docs/faqs/Models/available-materializations.md
@@ -8,4 +8,4 @@ id: available-materializations
dbt ships with five materializations: `view`, `table`, `incremental`, `ephemeral` and `materialized_view`.
Check out the documentation on [materializations](/docs/build/materializations) for more information on each of these options.
-You can also create your own [custom materializations](/guides/creating-new-materializations), if required however this is an advanced feature of dbt.
+You can also create your own [custom materializations](/guides/create-new-materializations) if required; however, this is an advanced feature of dbt.
diff --git a/website/docs/faqs/Project/why-not-write-dml.md b/website/docs/faqs/Project/why-not-write-dml.md
index 349fc2c5c74..210ef4a916d 100644
--- a/website/docs/faqs/Project/why-not-write-dml.md
+++ b/website/docs/faqs/Project/why-not-write-dml.md
@@ -30,4 +30,4 @@ You can test your models, generate documentation, create snapshots, and more!
SQL dialects tend to diverge the most in DML and DDL (rather than in `select` statements) — check out the example [here](/faqs/models/sql-dialect). By writing less SQL, it can make a migration to a new database technology easier.
-If you do need to write custom DML, there are ways to do this in dbt using [custom materializations](/guides/creating-new-materializations).
+If you do need to write custom DML, there are ways to do this in dbt using [custom materializations](/guides/create-new-materializations).
diff --git a/website/docs/faqs/Warehouse/db-connection-dbt-compile.md b/website/docs/faqs/Warehouse/db-connection-dbt-compile.md
index be46f1a1d8c..8017da4545b 100644
--- a/website/docs/faqs/Warehouse/db-connection-dbt-compile.md
+++ b/website/docs/faqs/Warehouse/db-connection-dbt-compile.md
@@ -22,7 +22,7 @@ To generate the compiled SQL for many models, dbt needs to run introspective que
These introspective queries include:
-- Populating the [relation cache](/guides/creating-new-materializations#update-the-relation-cache). Caching speeds up the metadata checks, including whether an [incremental model](/docs/build/incremental-models) already exists in the data platform.
+- Populating the relation cache. For more information, refer to the [Create new materializations](/guides/create-new-materializations) guide. Caching speeds up the metadata checks, including whether an [incremental model](/docs/build/incremental-models) already exists in the data platform.
- Resolving [macros](/docs/build/jinja-macros#macros), such as `run_query` or `dbt_utils.get_column_values` that you're using to template out your SQL. This is because dbt needs to run those queries during model SQL compilation.
Without a data platform connection, dbt can't perform these introspective queries and won't be able to generate the compiled SQL needed for the next steps in the dbt workflow. You can [`parse`](/reference/commands/parse) a project and use the [`list`](/reference/commands/list) resources in the project, without an internet or data platform connection. Parsing a project is enough to produce a [manifest](/reference/artifacts/manifest-json), however, keep in mind that the written-out manifest won't include compiled SQL.
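The distinction this FAQ draws between parsing and compiling is easy to demonstrate with dbt's programmatic invocation API. A minimal sketch, assuming dbt-core 1.5 or later and that it runs from inside a dbt project directory:

```python
# Minimal sketch (assumes dbt-core 1.5+ programmatic invocations, run inside a dbt project).
# `parse` succeeds with no data platform connection; `compile` issues the
# introspective queries described above, so it needs a working connection.
from dbt.cli.main import dbtRunner

runner = dbtRunner()

parse_result = runner.invoke(["parse"])      # writes a manifest, but without compiled SQL
print("parse succeeded:", parse_result.success)

compile_result = runner.invoke(["compile"])  # requires a reachable data platform
print("compile succeeded:", compile_result.success)
```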
diff --git a/website/docs/guides/dbt-python-snowpark.md b/website/docs/guides/dbt-python-snowpark.md
index 8417ec9177b..35842eb8d91 100644
--- a/website/docs/guides/dbt-python-snowpark.md
+++ b/website/docs/guides/dbt-python-snowpark.md
@@ -932,7 +932,7 @@ By now, we are pretty good at creating new files in the correct directories so w
select * from int_results
```
-1. Create a *Markdown* file `intermediate.md` that we will go over in depth during the [Testing](/guides/dbt-ecosystem/dbt-python-snowpark/13-testing) and [Documentation](/guides/dbt-ecosystem/dbt-python-snowpark/14-documentation) sections.
+1. Create a *Markdown* file `intermediate.md` that we will go over in depth in the Testing and Documentation sections of the [Leverage dbt Cloud to generate analytics and ML-ready pipelines with SQL and Python with Snowflake](/guides/dbt-python-snowpark) guide.
```markdown
# the intent of this .md is to allow for multi-line long form explanations for our intermediate transformations
@@ -947,7 +947,7 @@ By now, we are pretty good at creating new files in the correct directories so w
{% docs int_lap_times_years %} Lap times are done per lap. We need to join them out to the race year to understand yearly lap time trends. {% enddocs %}
```
-1. Create a *YAML* file `intermediate.yml` that we will go over in depth during the [Testing](/guides/dbt-ecosystem/dbt-python-snowpark/13-testing) and [Documentation](/guides/dbt-ecosystem/dbt-python-snowpark/14-documentation) sections.
+1. Create a *YAML* file `intermediate.yml` that we will go over in depth in the Testing and Documentation sections of the [Leverage dbt Cloud to generate analytics and ML-ready pipelines with SQL and Python with Snowflake](/guides/dbt-python-snowpark) guide.
```yaml
version: 2
diff --git a/website/docs/guides/migrate-from-spark-to-databricks.md b/website/docs/guides/migrate-from-spark-to-databricks.md
index 5be1c08d787..b249021ed50 100644
--- a/website/docs/guides/migrate-from-spark-to-databricks.md
+++ b/website/docs/guides/migrate-from-spark-to-databricks.md
@@ -14,7 +14,7 @@ recently_updated: true
## Introduction
-You can [migrate your projects](#migrate-your-dbt-projects) from using the `dbt-spark` adapter to using the [dbt-databricks adapter](https://github.com/databricks/dbt-databricks). In collaboration with dbt Labs, Databricks built this adapter using dbt-spark as the foundation and added some critical improvements. With it, you get an easier set up — requiring only three inputs for authentication — and more features such as support for [Unity Catalog](https://www.databricks.com/product/unity-catalog).
+You can migrate your projects from using the `dbt-spark` adapter to using the [dbt-databricks adapter](https://github.com/databricks/dbt-databricks). In collaboration with dbt Labs, Databricks built this adapter using dbt-spark as the foundation and added some critical improvements. With it, you get an easier set up — requiring only three inputs for authentication — and more features such as support for [Unity Catalog](https://www.databricks.com/product/unity-catalog).
### Prerequisites
diff --git a/website/docs/guides/productionize-your-dbt-databricks-project.md b/website/docs/guides/productionize-your-dbt-databricks-project.md
index f26a132919b..2c6a436a15b 100644
--- a/website/docs/guides/productionize-your-dbt-databricks-project.md
+++ b/website/docs/guides/productionize-your-dbt-databricks-project.md
@@ -156,7 +156,7 @@ Inserting dbt Cloud jobs into a Databricks Workflows allows you to chain togethe
- Logs and Run History: Accessing logs and run history becomes more convenient when using dbt Cloud.
- Monitoring and Notification Features: dbt Cloud comes equipped with monitoring and notification features like the ones described above that can help you stay informed about the status and performance of your jobs.
-To trigger your dbt Cloud job from Databricks, follow the instructions in our [Databricks Workflows to run dbt Cloud jobs guide](/guides/orchestration/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs).
+To trigger your dbt Cloud job from Databricks, follow the instructions in our [Databricks Workflows to run dbt Cloud jobs guide](/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs).
## Data masking
diff --git a/website/docs/guides/refactoring-legacy-sql.md b/website/docs/guides/refactoring-legacy-sql.md
index 09fcb9aaf82..a339e523020 100644
--- a/website/docs/guides/refactoring-legacy-sql.md
+++ b/website/docs/guides/refactoring-legacy-sql.md
@@ -31,7 +31,7 @@ When migrating and refactoring code, it’s of course important to stay organize
Let's get into it!
:::info More resources
-This guide is excerpted from the new dbt Learn On-demand Course, "Refactoring SQL for Modularity" - if you're curious, pick up the [free refactoring course here](https://courses.getdbt.com/courses/refactoring-sql-for-modularity), which includes example and practice refactoring projects. Or for a more in-depth look at migrating DDL and DML from stored procedures check out [this guide](/guides/migration/tools/migrating-from-stored-procedures/1-migrating-from-stored-procedures).
+This guide is excerpted from the new dbt Learn On-demand Course, "Refactoring SQL for Modularity" - if you're curious, pick up the [free refactoring course here](https://courses.getdbt.com/courses/refactoring-sql-for-modularity), which includes example and practice refactoring projects. Or for a more in-depth look at migrating DDL and DML from stored procedures, refer to the [Migrate from stored procedures](/guides/migrate-from-stored-procedures) guide.
:::
## Migrate your existing SQL code
From bab0aa6f5d2df66bfc0a81a90e5fd071ec19d55f Mon Sep 17 00:00:00 2001
From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com>
Date: Thu, 9 Nov 2023 13:32:22 -0800
Subject: [PATCH 20/59] fix a link o'rama
---
.../11-Older versions/upgrading-to-0-15-0.md | 2 +-
website/docs/guides/adapter-creation.md | 10 +++++-----
.../docs/guides/set-up-your-databricks-dbt-project.md | 4 ++--
website/snippets/dbt-databricks-for-databricks.md | 4 ++--
4 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-15-0.md b/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-15-0.md
index 8259e66fa46..98248a1caa5 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-15-0.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-15-0.md
@@ -26,7 +26,7 @@ expect this field will now return errors. See the latest
### Custom materializations
-All materializations must now [manage dbt's Relation cache](/guides/creating-new-materializations#update-the-relation-cache).
+All materializations must now manage dbt's Relation cache. For more information, refer to [Create new materializations](/guides/creating-new-materializations).
### dbt Server
diff --git a/website/docs/guides/adapter-creation.md b/website/docs/guides/adapter-creation.md
index 6c9d575bae2..8a9145f0258 100644
--- a/website/docs/guides/adapter-creation.md
+++ b/website/docs/guides/adapter-creation.md
@@ -260,7 +260,7 @@ class MyAdapterCredentials(Credentials):
There are a few things you can do to make it easier for users when connecting to your database:
- Be sure to implement the Credentials' `_connection_keys` method shown above. This method will return the keys that should be displayed in the output of the `dbt debug` command. As a general rule, it's good to return all the arguments used in connecting to the actual database except the password (even optional arguments).
-- Create a `profile_template.yml` to enable configuration prompts for a brand-new user setting up a connection profile via the [`dbt init` command](/reference/commands/init). See more details [below](#other-files).
+- Create a `profile_template.yml` to enable configuration prompts for a brand-new user setting up a connection profile via the [`dbt init` command](/reference/commands/init). You will find more details in the following steps.
- You may also want to define an `ALIASES` mapping on your Credentials class to include any config names you want users to be able to use in place of 'database' or 'schema'. For example if everyone using the MyAdapter database calls their databases "collections", you might do:
@@ -574,8 +574,8 @@ Previously, we offered a packaged suite of tests for dbt adapter functionality:
This document has two sections:
-1. "[About the testing framework](#about-the-testing-framework)" describes the standard framework that we maintain for using pytest together with dbt. It includes an example that shows the anatomy of a simple test case.
-2. "[Testing your adapter](#testing-your-adapter)" offers a step-by-step guide for using our out-of-the-box suite of "basic" tests, which will validate that your adapter meets a baseline of dbt functionality.
+1. Refer to "About the testing framework" for a description of the standard framework that we maintain for using pytest together with dbt. It includes an example that shows the anatomy of a simple test case.
+2. Refer to "Testing your adapter" for a step-by-step guide to using our out-of-the-box suite of "basic" tests, which will validate that your adapter meets a baseline of dbt functionality.
### Testing prerequisites
@@ -1067,7 +1067,7 @@ python3 -m pytest tests/functional --profile databricks_sql_endpoint
## Document a new adapter
-If you've already [built](3-building-a-new-adapter), and [tested](4-testing-a-new-adapter) your adapter, it's time to document it so the dbt community will know that it exists and how to use it.
+If you've already built and tested your adapter, it's time to document it so the dbt community will know that it exists and how to use it.
### Making your adapter available
@@ -1264,7 +1264,7 @@ There has been a tendency to trust the dbt Labs-maintained adapters over communi
The adapter verification program aims to quickly indicate to users which adapters can be trusted to use in production. Previously, doing so was uncharted territory for new users and complicated making the business case to their leadership team. We plan to give quality assurances by:
1. appointing a key stakeholder for the adapter repository,
-2. ensuring that the chosen stakeholder fixes bugs and cuts new releases in a timely manner see maintainer your adapter (["Maintaining your new adapter"](2-prerequisites-for-a-new-adapter#maintaining-your-new-adapter)),
+2. ensuring that the chosen stakeholder fixes bugs and cuts new releases in a timely manner. Refer to the "Maintaining your new adapter" step for more information.
3. demonstrating that it passes our adapter pytest suite tests,
4. assuring that it works for us internally and ideally an existing team using the adapter in production .
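The "Testing your adapter" section referenced in this diff is built around reusable pytest base classes. A minimal sketch of wiring the out-of-the-box "basic" tests into a hypothetical adapter's functional test suite, assuming the dbt-tests-adapter package is installed and using a placeholder `myadapter` profile:

```python
# Minimal sketch: reuse dbt's "basic" adapter test suite with pytest.
# Assumes the dbt-tests-adapter package; the adapter type and connection details
# below are placeholders for your own plugin. The fixture usually lives in conftest.py.
import pytest
from dbt.tests.adapter.basic.test_base import BaseSimpleMaterializations
from dbt.tests.adapter.basic.test_empty import BaseEmpty


@pytest.fixture(scope="class")
def dbt_profile_target():
    # Target entry injected into the test run's profiles.yml.
    return {
        "type": "myadapter",   # hypothetical adapter name
        "host": "localhost",
        "user": "dbt_test_user",
        "port": 5439,
    }


class TestSimpleMaterializationsMyAdapter(BaseSimpleMaterializations):
    pass


class TestEmptyMyAdapter(BaseEmpty):
    pass
```

Run it the same way the guide shows, for example `python3 -m pytest tests/functional --profile <your_profile>`.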
diff --git a/website/docs/guides/set-up-your-databricks-dbt-project.md b/website/docs/guides/set-up-your-databricks-dbt-project.md
index d378b57cacc..c47895f7246 100644
--- a/website/docs/guides/set-up-your-databricks-dbt-project.md
+++ b/website/docs/guides/set-up-your-databricks-dbt-project.md
@@ -46,7 +46,7 @@ Service principals are used to remove humans from deploying to production for co
[Let’s create a service principal](https://docs.databricks.com/administration-guide/users-groups/service-principals.html#add-a-service-principal-to-your-databricks-account) in Databricks:
1. Have your Databricks Account admin [add a service principal](https://docs.databricks.com/administration-guide/users-groups/service-principals.html#add-a-service-principal-to-your-databricks-account) to your account. The service principal’s name should differentiate itself from a user ID and make its purpose clear (eg dbt_prod_sp).
-2. Add the service principal added to any groups it needs to be a member of at this time. There are more details on permissions in our ["Unity Catalog best practices" guide](dbt-unity-catalog-best-practices).
+2. Add the service principal to any groups it needs to be a member of at this time. There are more details on permissions in our ["Unity Catalog best practices" guide](/best-practices/dbt-unity-catalog-best-practices).
3. [Add the service principal to your workspace](https://docs.databricks.com/administration-guide/users-groups/service-principals.html#add-a-service-principal-to-a-workspace) and apply any [necessary entitlements](https://docs.databricks.com/administration-guide/users-groups/service-principals.html#add-a-service-principal-to-a-workspace-using-the-admin-console), such as Databricks SQL access and Workspace access.
## Setting up Databricks Compute
@@ -113,4 +113,4 @@ Next, you’ll need somewhere to store and version control your code that allows
### Next steps
-Now that your project is configured, you can start transforming your Databricks data with dbt. To help you scale efficiently, we recommend you follow our best practices, starting with the [Unity Catalog best practices](/best-practices/dbt-unity-catalog-best-practices), then you can [Optimize dbt models on Databricks](/guides/how_to_optimize_dbt_models_on_databricks) .
+Now that your project is configured, you can start transforming your Databricks data with dbt. To help you scale efficiently, we recommend you follow our best practices, starting with the [Unity Catalog best practices](/best-practices/dbt-unity-catalog-best-practices), then you can [Optimize dbt models on Databricks](/guides/optimize-dbt-models-on-databricks).
diff --git a/website/snippets/dbt-databricks-for-databricks.md b/website/snippets/dbt-databricks-for-databricks.md
index 930e7a85a9f..f1c5ec84af1 100644
--- a/website/snippets/dbt-databricks-for-databricks.md
+++ b/website/snippets/dbt-databricks-for-databricks.md
@@ -1,4 +1,4 @@
:::info If you're using Databricks, use `dbt-databricks`
If you're using Databricks, the `dbt-databricks` adapter is recommended over `dbt-spark`.
-If you're still using dbt-spark with Databricks consider [migrating from the dbt-spark adapter to the dbt-databricks adapter](/guides/migration/tools/migrating-from-spark-to-databricks#migrate-your-dbt-projects).
-:::
\ No newline at end of file
+If you're still using dbt-spark with Databricks, consider [migrating from the dbt-spark adapter to the dbt-databricks adapter](/guides/migrate-from-spark-to-databricks).
+:::
From 550d33340d59f994c2859595130ba0e05f11ed7a Mon Sep 17 00:00:00 2001
From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com>
Date: Thu, 9 Nov 2023 13:35:31 -0800
Subject: [PATCH 21/59] fix a link o'rama
---
.../core-upgrade/11-Older versions/upgrading-to-0-15-0.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-15-0.md b/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-15-0.md
index 98248a1caa5..7009dc2d088 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-15-0.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-15-0.md
@@ -26,7 +26,7 @@ expect this field will now return errors. See the latest
### Custom materializations
-All materializations must now manage dbt's Relation cache. For more information, refer to [Create new materializations](/guides/creating-new-materializations).
+All materializations must now manage dbt's Relation cache. For more information, refer to [Create new materializations](/guides/creatie-new-materializations).
### dbt Server
From 78f22b5ea4e76a45f163e8be89cf490746404db5 Mon Sep 17 00:00:00 2001
From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com>
Date: Thu, 9 Nov 2023 13:45:31 -0800
Subject: [PATCH 22/59] fix a typo
---
.../core-upgrade/11-Older versions/upgrading-to-0-15-0.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-15-0.md b/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-15-0.md
index 7009dc2d088..5eba212590f 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-15-0.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-15-0.md
@@ -26,7 +26,7 @@ expect this field will now return errors. See the latest
### Custom materializations
-All materializations must now manage dbt's Relation cache. For more information, refer to [Create new materializations](/guides/creatie-new-materializations).
+All materializations must now manage dbt's Relation cache. For more information, refer to [Create new materializations](/guides/create-new-materializations).
### dbt Server
From 07f9a241e06081d848ca20968596bd118bedc7f1 Mon Sep 17 00:00:00 2001
From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com>
Date: Thu, 9 Nov 2023 15:12:43 -0800
Subject: [PATCH 23/59] adding the forwarders
---
website/vercel.json | 490 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 490 insertions(+)
diff --git a/website/vercel.json b/website/vercel.json
index 3c2c0c6e3ce..7c054b0947e 100644
--- a/website/vercel.json
+++ b/website/vercel.json
@@ -2,6 +2,496 @@
"cleanUrls": true,
"trailingSlash": false,
"redirects": [
+ {
+ "source": "/guides/advanced/creating-new-materializations",
+ "destination": "/guides/create-new-materializations",
+ "permanent": true
+ },
+ {
+ "source": "/guides/advanced/using-jinja",
+ "destination": "/guides/using-jinja",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices",
+ "destination": "/best-practices",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/debugging-errors",
+ "destination": "/guides/debug-errors",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-build-our-metrics/semantic-layer-1-intro",
+ "destination": "/best-practices/how-we-build-our-metrics/semantic-layer-1-intro",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-build-our-metrics/semantic-layer-2-setup",
+ "destination": "/best-practices/how-we-build-our-metrics/semantic-layer-2-setup",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models",
+ "destination": "/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics",
+ "destination": "/best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart",
+ "destination": "/best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics",
+ "destination": "/guides/best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-build-our-metrics/semantic-layer-7-conclusion",
+ "destination": "/best-practices/how-we-build-our-metrics/semantic-layer-7-conclusion",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-structure/1-guide-overview",
+ "destination": "/best-practices/how-we-structure/1-guide-overview",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-structure/2-staging",
+ "destination": "/best-practices/how-we-structure/2-staging",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-structure/3-intermediate",
+ "destination": "/best-practices/how-we-structure/3-intermediate",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-structure/4-marts",
+ "destination": "/best-practices/how-we-structure/4-marts",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-structure/5-semantic-layer-marts",
+ "destination": "/best-practices/how-we-structure/5-semantic-layer-marts",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-structure/6-the-rest-of-the-project",
+ "destination": "/best-practices/how-we-structure/6-the-rest-of-the-project",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-style/0-how-we-style-our-dbt-projects",
+ "destination": "/best-practices/how-we-style/0-how-we-style-our-dbt-projects",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-style/1-how-we-style-our-dbt-models",
+ "destination": "/best-practices/how-we-style/1-how-we-style-our-dbt-models",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-style/2-how-we-style-our-sql",
+ "destination": "/best-practices/how-we-style/2-how-we-style-our-sql",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-style/3-how-we-style-our-python",
+ "destination": "/best-practices/how-we-style/3-how-we-style-our-python",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-style/4-how-we-style-our-jinja",
+ "destination": "/best-practices/how-we-style/4-how-we-style-our-jinja",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-style/5-how-we-style-our-yaml",
+ "destination": "/best-practices/how-we-style/5-how-we-style-our-yaml",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/how-we-style/6-how-we-style-conclusion",
+ "destination": "/best-practices/how-we-style/6-how-we-style-conclusion",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/materializations/1-guide-overview",
+ "destination": "/best-practices/materializations/1-guide-overview",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/materializations/2-available-materializations",
+ "destination": "/best-practices/materializations/2-available-materializations",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/materializations/3-configuring-materializations",
+ "destination": "/best-practices/materializations/3-configuring-materializations",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/materializations/4-incremental-models",
+ "destination": "/best-practices/materializations/4-incremental-models",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/materializations/5-best-practices",
+ "destination": "/best-practices/materializations/5-best-practices",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/materializations/6-examining-builds",
+ "destination": "/best-practices/materializations/6-examining-builds",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/materializations/7-conclusion",
+ "destination": "/best-practices/materializations/7-conclusion",
+ "permanent": true
+ },
+ {
+ "source": "/guides/best-practices/writing-custom-generic-tests",
+ "destination": "/best-practices/writing-custom-generic-tests",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem",
+ "destination": "/guides",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/adapter-development/1-what-are-adapters",
+ "destination": "/guides/adapter-creation",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/adapter-development/2-prerequisites-for-a-new-adapter",
+ "destination": "/guides/adapter-creation",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/adapter-development/3-building-a-new-adapter",
+ "destination": "/guides/adapter-creation",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/adapter-development/4-testing-a-new-adapter",
+ "destination": "/guides/adapter-creation",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/adapter-development/5-documenting-a-new-adapter",
+ "destination": "/guides/adapter-creation",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/adapter-development/6-promoting-a-new-adapter",
+ "destination": "/guides/adapter-creation",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/adapter-development/7-verifying-a-new-adapter",
+ "destination": "/guides/adapter-creation",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/databricks-guides/dbt-unity-catalog-best-practices",
+ "destination": "/best-practices/dbt-unity-catalog-best-practices",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/databricks-guides/how_to_optimize_dbt_models_on_databricks",
+ "destination": "/guides/optimize-dbt-models-on-databricks",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/databricks-guides/how-to-set-up-your-databricks-dbt-project",
+ "destination": "/guides/set-up-your-databricks-dbt-project",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/databricks-guides/productionizing-your-dbt-databricks-project",
+ "destination": "/guides/productionize-your-dbt-databricks-project",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/dbt-python-snowpark/1-overview-dbt-python-snowpark",
+ "destination": "/guides/dbt-python-snowpark",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/dbt-python-snowpark/10-python-transformations",
+ "destination": "/guides/dbt-python-snowpark",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/dbt-python-snowpark/11-machine-learning-prep",
+ "destination": "/guides/dbt-python-snowpark",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/dbt-python-snowpark/12-machine-learning-training-prediction",
+ "destination": "/guides/dbt-python-snowpark",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/dbt-python-snowpark/13-testing",
+ "destination": "/guides/dbt-python-snowpark",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/dbt-python-snowpark/14-documentation",
+ "destination": "/guides/dbt-python-snowpark",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/dbt-python-snowpark/15-deployment",
+ "destination": "/guides/dbt-python-snowpark",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/dbt-python-snowpark/2-snowflake-configuration",
+ "destination": "/guides/dbt-python-snowpark",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/dbt-python-snowpark/3-connect-to-data-source",
+ "destination": "/guides/dbt-python-snowpark",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/dbt-python-snowpark/4-configure-dbt",
+ "destination": "/guides/dbt-python-snowpark",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/dbt-python-snowpark/5-development-schema-name",
+ "destination": "/guides/dbt-python-snowpark",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/dbt-python-snowpark/6-foundational-structure",
+ "destination": "/guides/dbt-python-snowpark",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/dbt-python-snowpark/7-folder-structure",
+ "destination": "/guides/dbt-python-snowpark",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/dbt-python-snowpark/8-sources-and-staging",
+ "destination": "/guides/dbt-python-snowpark",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/dbt-python-snowpark/9-sql-transformations",
+ "destination": "/guides/dbt-python-snowpark",
+ "permanent": true
+ },
+ {
+ "source": "/guides/dbt-ecosystem/sl-partner-integration-guide",
+ "destination": "/guides/sl-partner-integration-guide",
+ "permanent": true
+ },
+ {
+ "source": "/guides/legacy/best-practices",
+ "destination": "/best-practices/best-practice-workflows",
+ "permanent": true
+ },
+ {
+ "source": "/guides/legacy/building-packages",
+ "destination": "/guides/building-packages",
+ "permanent": true
+ },
+ {
+ "source": "/guides/legacy/creating-date-partitioned-tables",
+ "destination": "/docs/build/incremental-models",
+ "permanent": true
+ },
+ {
+ "source": "/guides/legacy/debugging-schema-names",
+ "destination": "/guides/debug-schema-names",
+ "permanent": true
+ },
+ {
+ "source": "/guides/legacy/videos",
+ "destination": "/guides",
+ "permanent": true
+ },
+ {
+ "source": "/guides/migration/sl-migration",
+ "destination": "/guides/sl-migration",
+ "permanent": true
+ },
+ {
+ "source": "/guides/migration/tools",
+ "destination": "/guides",
+ "permanent": true
+ },
+ {
+ "source": "/guides/migration/tools/migrating-from-spark-to-databricks",
+ "destination": "/guides/migrate-from-spark-to-databricks",
+ "permanent": true
+ },
+ {
+ "source": "/guides/migration/tools/migrating-from-stored-procedures/1-migrating-from-stored-procedures",
+ "destination": "/guides/migrate-from-stored-procedures",
+ "permanent": true
+ },
+ {
+ "source": "/guides/migration/tools/migrating-from-stored-procedures/2-inserts",
+ "destination": "/guides/migrate-from-stored-procedures",
+ "permanent": true
+ },
+ {
+ "source": "/guides/migration/tools/migrating-from-stored-procedures/3-updates",
+ "destination": "/guides/migrate-from-stored-procedures",
+ "permanent": true
+ },
+ {
+ "source": "/guides/migration/tools/migrating-from-stored-procedures/4-deletes",
+ "destination": "/guides/migrate-from-stored-procedures",
+ "permanent": true
+ },
+ {
+ "source": "/guides/migration/tools/migrating-from-stored-procedures/5-merges",
+ "destination": "/guides/migrate-from-stored-procedures",
+ "permanent": true
+ },
+ {
+ "source": "/guides/migration/tools/migrating-from-stored-procedures/6-migrating-from-stored-procedures-conclusion",
+ "destination": "/guides/migrate-from-stored-procedures",
+ "permanent": true
+ },
+ {
+ "source": "/guides/migration/tools/refactoring-legacy-sql",
+ "destination": "/guides/refactoring-legacy-sql",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration",
+ "destination": "/guides",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/airflow-and-dbt-cloud/1-airflow-and-dbt-cloud",
+ "destination": "/guides/airflow-and-dbt-cloud",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/airflow-and-dbt-cloud/2-setting-up-airflow-and-dbt-cloud",
+ "destination": "/guides/airflow-and-dbt-cloud",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/airflow-and-dbt-cloud/3-running-airflow-and-dbt-cloud",
+ "destination": "/guides/airflow-and-dbt-cloud",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/airflow-and-dbt-cloud/4-airflow-and-dbt-cloud-faqs",
+ "destination": "/guides/airflow-and-dbt-cloud",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/custom-cicd-pipelines/1-cicd-background",
+ "destination": "/guides/custom-cicd-pipelines",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/custom-cicd-pipelines/3-dbt-cloud-job-on-merge",
+ "destination": "/guides/custom-cicd-pipelines",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/custom-cicd-pipelines/4-dbt-cloud-job-on-pr",
+ "destination": "/guides/custom-cicd-pipelines",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/custom-cicd-pipelines/5-something-to-consider",
+ "destination": "/guides/custom-cicd-pipelines",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs",
+ "destination": "/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/set-up-ci/in-15-minutes",
+ "destination": "/guides/set-up-ci",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/set-up-ci/lint-on-push",
+ "destination": " /guides/set-up-ci",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/set-up-ci/multiple-environments",
+ "destination": "/guides/set-up-ci",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/set-up-ci/overview",
+ "destination": "/guides/set-up-ci",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/set-up-ci/run-dbt-project-evaluator",
+ "destination": "/guides/set-up-ci",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/webhooks",
+ "destination": "/guides",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/webhooks/serverless-datadog",
+ "destination": "/guides/serverless-datadog",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/webhooks/serverless-pagerduty",
+ "destination": "/guides/serverless-pagerduty",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/webhooks/zapier-ms-teams",
+ "destination": "/guides/zapier-ms-team",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/webhooks/zapier-new-cloud-job",
+ "destination": "/guides/zapier-new-cloud-job",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/webhooks/zapier-refresh-mode-report",
+ "destination": "/guides/zapier-refresh-mode-report",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/webhooks/zapier-refresh-tableau-workbook",
+ "destination": "/guides/zapier-refresh-tableau-workbook",
+ "permanent": true
+ },
+ {
+ "source": "/guides/orchestration/webhooks/zapier-slack",
+ "destination": "/guides/zapier-slack",
+ "permanent": true
+ },
{
"source": "/faqs/Project/docs-for-multiple-projects",
"destination": "/docs/collaborate/explore-projects#about-project-level-lineage",
From ea3f927e8ea037d7fc0a92582b1252c0d454e9a3 Mon Sep 17 00:00:00 2001
From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com>
Date: Thu, 9 Nov 2023 17:04:12 -0800
Subject: [PATCH 24/59] Update
website/docs/docs/dbt-versions/release-notes/07-June-2023/product-docs-jun.md
Co-authored-by: Matt Shaver <60105315+matthewshaver@users.noreply.github.com>
---
.../dbt-versions/release-notes/07-June-2023/product-docs-jun.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/docs/dbt-versions/release-notes/07-June-2023/product-docs-jun.md b/website/docs/docs/dbt-versions/release-notes/07-June-2023/product-docs-jun.md
index 7a474cc091f..4ead401a759 100644
--- a/website/docs/docs/dbt-versions/release-notes/07-June-2023/product-docs-jun.md
+++ b/website/docs/docs/dbt-versions/release-notes/07-June-2023/product-docs-jun.md
@@ -32,4 +32,4 @@ Here's what's new to [docs.getdbt.com](http://docs.getdbt.com/) in June:
## New 📚 Guides, ✏️ blog posts, and FAQs
-- Add an Azure DevOps example to the in the [Customizing CI/CD with Custom Pipelines](/guides/custom-cicd-pipelines) guide.
+- Add an Azure DevOps example in the [Customizing CI/CD with Custom Pipelines](/guides/custom-cicd-pipelines) guide.
From a68ca2e655ead85d6b0be20faa704c27c31eb253 Mon Sep 17 00:00:00 2001
From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com>
Date: Thu, 9 Nov 2023 17:08:45 -0800
Subject: [PATCH 25/59] Update website/docs/guides/dbt-models-on-databricks.md
Co-authored-by: Matt Shaver <60105315+matthewshaver@users.noreply.github.com>
---
website/docs/guides/dbt-models-on-databricks.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/guides/dbt-models-on-databricks.md b/website/docs/guides/dbt-models-on-databricks.md
index f26b7253be9..489a3c28467 100644
--- a/website/docs/guides/dbt-models-on-databricks.md
+++ b/website/docs/guides/dbt-models-on-databricks.md
@@ -14,7 +14,7 @@ recently_updated: true
## Introduction
-Building on the [Set up your dbt project with Databricks](/guides/set-up-your-databricks-dbt-project) guide, we'd like to discuss performance optimization. In this follow-up post, we outline simple strategies to optimize for cost, performance, and simplicity when you architect data pipelines. We’ve encapsulated these strategies in this acronym-framework:
+Building on the [Set up your dbt project with Databricks](/guides/set-up-your-databricks-dbt-project) guide, we'd like to discuss performance optimization. In this follow-up post, we outline simple strategies to optimize for cost, performance, and simplicity when you architect data pipelines. We’ve encapsulated these strategies in this acronym-framework:
- Platform Components
- Patterns & Best Practices
From 51ac90fb4efd553fd550ca7a6529cd618c5b49fb Mon Sep 17 00:00:00 2001
From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com>
Date: Thu, 9 Nov 2023 17:08:53 -0800
Subject: [PATCH 26/59] Update website/docs/guides/debug-schema-names.md
Co-authored-by: Matt Shaver <60105315+matthewshaver@users.noreply.github.com>
---
website/docs/guides/debug-schema-names.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/guides/debug-schema-names.md b/website/docs/guides/debug-schema-names.md
index de713e07df7..e600c772284 100644
--- a/website/docs/guides/debug-schema-names.md
+++ b/website/docs/guides/debug-schema-names.md
@@ -1,7 +1,7 @@
---
title: Debug schema names
id: debug-schema-names
-description: Learn how to debug schema names when models build under unexpected schemas
+description: Learn how to debug schema names when models build under unexpected schemas.
displayText: Debug schema names
hoverSnippet: Learn how to debug schema names in dbt.
# time_to_complete: '30 minutes' commenting out until we test
From 5358095e9ba253406a527fccdc310881ed4364e5 Mon Sep 17 00:00:00 2001
From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com>
Date: Thu, 9 Nov 2023 17:09:04 -0800
Subject: [PATCH 27/59] Update website/docs/guides/debug-schema-names.md
Co-authored-by: Matt Shaver <60105315+matthewshaver@users.noreply.github.com>
---
website/docs/guides/debug-schema-names.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/guides/debug-schema-names.md b/website/docs/guides/debug-schema-names.md
index e600c772284..8f8c6d3580f 100644
--- a/website/docs/guides/debug-schema-names.md
+++ b/website/docs/guides/debug-schema-names.md
@@ -73,7 +73,7 @@ Your project is switching out the `generate_schema_name` macro for another macro
{%- endmacro %}
```
-### I have a `generate_schema_name` macro with custom logic
+### You have a `generate_schema_name` macro with custom logic
If this is the case — it might be a great idea to reach out to the person who added this macro to your project, as they will have context here — you can use [GitHub's blame feature](https://docs.github.com/en/free-pro-team@latest/github/managing-files-in-a-repository/tracking-changes-in-a-file) to do this.
From 2f3699084006913ffd5b9d7bc9e80385580e0913 Mon Sep 17 00:00:00 2001
From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com>
Date: Thu, 9 Nov 2023 17:09:16 -0800
Subject: [PATCH 28/59] Update website/docs/guides/debug-schema-names.md
Co-authored-by: Matt Shaver <60105315+matthewshaver@users.noreply.github.com>
---
website/docs/guides/debug-schema-names.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/guides/debug-schema-names.md b/website/docs/guides/debug-schema-names.md
index 8f8c6d3580f..01c87a0931c 100644
--- a/website/docs/guides/debug-schema-names.md
+++ b/website/docs/guides/debug-schema-names.md
@@ -27,7 +27,7 @@ You can also follow along via this video:
## Search for a macro named `generate_schema_name`
Do a file search to check if you have a macro named `generate_schema_name` in the `macros` directory of your project.
-### I do not have a macro named `generate_schema_name` in my project
+### You do not have a macro named `generate_schema_name` in your project
This means that you are using dbt's default implementation of the macro, as defined [here](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/include/global_project/macros/get_custom_name/get_custom_schema.sql#L47C1-L60)
```sql
From a70225dd7bf9b71a388159d44ade1949a8bf56ac Mon Sep 17 00:00:00 2001
From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com>
Date: Thu, 9 Nov 2023 17:09:22 -0800
Subject: [PATCH 29/59] Update website/docs/guides/serverless-datadog.md
Co-authored-by: Matt Shaver <60105315+matthewshaver@users.noreply.github.com>
---
website/docs/guides/serverless-datadog.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/guides/serverless-datadog.md b/website/docs/guides/serverless-datadog.md
index 3b1a3bd6db4..2b8d8341e28 100644
--- a/website/docs/guides/serverless-datadog.md
+++ b/website/docs/guides/serverless-datadog.md
@@ -1,7 +1,7 @@
---
title: "Create Datadog events from dbt Cloud results"
id: serverless-datadog
-description: Configure a serverless app to add dbt Cloud events to Datadog logs
+description: Configure a serverless app to add dbt Cloud events to Datadog logs.
hoverSnippet: Learn how to configure a serverless app to add dbt Cloud events to Datadog logs.
# time_to_complete: '30 minutes' commenting out until we test
icon: 'guides'
From 0129c68a26ee45a27f796f5aa21e039a308be251 Mon Sep 17 00:00:00 2001
From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com>
Date: Thu, 9 Nov 2023 17:09:34 -0800
Subject: [PATCH 30/59] Update website/docs/guides/debug-schema-names.md
Co-authored-by: Matt Shaver <60105315+matthewshaver@users.noreply.github.com>
---
website/docs/guides/debug-schema-names.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/guides/debug-schema-names.md b/website/docs/guides/debug-schema-names.md
index 01c87a0931c..171e1544a19 100644
--- a/website/docs/guides/debug-schema-names.md
+++ b/website/docs/guides/debug-schema-names.md
@@ -49,7 +49,7 @@ This means that you are using dbt's default implementation of the macro, as defi
Note that this logic is designed so that two dbt users won't accidentally overwrite each other's work by writing to the same schema.
-### I have a `generate_schema_name` macro in my project that calls another macro
+### You have a `generate_schema_name` macro in your project that calls another macro
If your `generate_schema_name` macro looks like so:
```sql
{% macro generate_schema_name(custom_schema_name, node) -%}
From 40aabfa144067ccdfffd25dddd85e5b9de424d61 Mon Sep 17 00:00:00 2001
From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com>
Date: Thu, 9 Nov 2023 17:12:36 -0800
Subject: [PATCH 31/59] Apply suggestions from code review
Co-authored-by: Matt Shaver <60105315+matthewshaver@users.noreply.github.com>
---
website/docs/guides/migrate-from-spark-to-databricks.md | 2 +-
.../docs/guides/productionize-your-dbt-databricks-project.md | 2 +-
website/docs/guides/serverless-datadog.md | 2 +-
website/docs/guides/serverless-pagerduty.md | 2 +-
website/docs/guides/set-up-your-databricks-dbt-project.md | 2 +-
website/docs/guides/zapier-ms-teams.md | 2 +-
website/docs/guides/zapier-refresh-mode-report.md | 2 +-
website/docs/guides/zapier-refresh-tableau-workbook.md | 2 +-
website/docs/guides/zapier-slack.md | 2 +-
9 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/website/docs/guides/migrate-from-spark-to-databricks.md b/website/docs/guides/migrate-from-spark-to-databricks.md
index b249021ed50..8fb02ae79d7 100644
--- a/website/docs/guides/migrate-from-spark-to-databricks.md
+++ b/website/docs/guides/migrate-from-spark-to-databricks.md
@@ -18,7 +18,7 @@ You can migrate your projects from using the `dbt-spark` adapter to using the [d
### Prerequisites
-- Your project must be compatible with dbt 1.0 or greater. Refer to [Upgrading to v1.0](/docs/dbt-versions/core-upgrade/upgrading-to-v1.0) for details. For the latest version of dbt, refer to [Upgrading to v1.3](/docs/dbt-versions/core-upgrade/upgrading-to-v1.3).
+- Your project must be compatible with dbt 1.0 or greater. Refer to [Upgrading to v1.0](/docs/dbt-versions/core-upgrade/upgrading-to-v1.0) for details. For the latest version of dbt, refer to [Upgrading to v1.7](/docs/dbt-versions/core-upgrade/upgrading-to-v1.7).
- For dbt Cloud, you need administrative (admin) privileges to migrate dbt projects.
### Simpler authentication
diff --git a/website/docs/guides/productionize-your-dbt-databricks-project.md b/website/docs/guides/productionize-your-dbt-databricks-project.md
index 2c6a436a15b..b95d8ffd2dd 100644
--- a/website/docs/guides/productionize-your-dbt-databricks-project.md
+++ b/website/docs/guides/productionize-your-dbt-databricks-project.md
@@ -1,7 +1,7 @@
---
title: Productionize your dbt Databricks project
id: productionize-your-dbt-databricks-project
-description: "Learn how to deliver models to end users and use best practices to maintain production data"
+description: "Learn how to deliver models to end users and use best practices to maintain production data."
displayText: Productionize your dbt Databricks project
hoverSnippet: Learn how to Productionize your dbt Databricks project.
# time_to_complete: '30 minutes' commenting out until we test
diff --git a/website/docs/guides/serverless-datadog.md b/website/docs/guides/serverless-datadog.md
index 2b8d8341e28..931ba9832ab 100644
--- a/website/docs/guides/serverless-datadog.md
+++ b/website/docs/guides/serverless-datadog.md
@@ -98,7 +98,7 @@ Wrote config file fly.toml
## Configure a new webhook in dbt Cloud
1. See [Create a webhook subscription](/docs/deploy/webhooks#create-a-webhook-subscription) for full instructions. Your event should be **Run completed**.
-2. Set the webhook URL to the host name you created earlier (`APP_NAME.fly.dev`)
+2. Set the webhook URL to the host name you created earlier (`APP_NAME.fly.dev`).
3. Make note of the Webhook Secret Key for later.
*Do not test the endpoint*; it won't work until you have stored the auth keys (next step)
diff --git a/website/docs/guides/serverless-pagerduty.md b/website/docs/guides/serverless-pagerduty.md
index 31436221be5..50cc1b2b36e 100644
--- a/website/docs/guides/serverless-pagerduty.md
+++ b/website/docs/guides/serverless-pagerduty.md
@@ -1,7 +1,7 @@
---
title: "Trigger PagerDuty alarms when dbt Cloud jobs fail"
id: serverless-pagerduty
-description: Use webhooks to configure a serverless app to trigger PagerDuty alarms
+description: Use webhooks to configure a serverless app to trigger PagerDuty alarms.
hoverSnippet: Learn how to configure a serverless app that uses webhooks to trigger PagerDuty alarms.
# time_to_complete: '30 minutes' commenting out until we test
icon: 'guides'
diff --git a/website/docs/guides/set-up-your-databricks-dbt-project.md b/website/docs/guides/set-up-your-databricks-dbt-project.md
index c47895f7246..c17c6a1f99e 100644
--- a/website/docs/guides/set-up-your-databricks-dbt-project.md
+++ b/website/docs/guides/set-up-your-databricks-dbt-project.md
@@ -1,7 +1,7 @@
---
title: Set up your dbt project with Databricks
id: set-up-your-databricks-dbt-project
-description: "Learn more about setting up your dbt project with Databricks"
+description: "Learn more about setting up your dbt project with Databricks."
displayText: Setting up your dbt project with Databricks
hoverSnippet: Learn how to set up your dbt project with Databricks.
# time_to_complete: '30 minutes' commenting out until we test
diff --git a/website/docs/guides/zapier-ms-teams.md b/website/docs/guides/zapier-ms-teams.md
index bd8bdd4aca2..66596d590e0 100644
--- a/website/docs/guides/zapier-ms-teams.md
+++ b/website/docs/guides/zapier-ms-teams.md
@@ -1,7 +1,7 @@
---
title: "Post to Microsoft Teams when a job finishes"
id: zapier-ms-teams
-description: Use Zapier and dbt Cloud webhooks to post to Microsoft Teams when a job finishes running
+description: Use Zapier and dbt Cloud webhooks to post to Microsoft Teams when a job finishes running.
hoverSnippet: Learn how to use Zapier with dbt Cloud webhooks to post in Microsoft Teams when a job finishes running.
# time_to_complete: '30 minutes' commenting out until we test
icon: 'guides'
diff --git a/website/docs/guides/zapier-refresh-mode-report.md b/website/docs/guides/zapier-refresh-mode-report.md
index 0ffcec9c96d..5bab165b11d 100644
--- a/website/docs/guides/zapier-refresh-mode-report.md
+++ b/website/docs/guides/zapier-refresh-mode-report.md
@@ -1,7 +1,7 @@
---
title: "Refresh a Mode dashboard when a job completes"
id: zapier-refresh-mode-report
-description: Use Zapier to trigger a Mode dashboard refresh when a dbt Cloud job completes
+description: Use Zapier to trigger a Mode dashboard refresh when a dbt Cloud job completes.
hoverSnippet: Learn how to use Zapier to trigger a Mode dashboard refresh when a dbt Cloud job completes.
# time_to_complete: '30 minutes' commenting out until we test
icon: 'guides'
diff --git a/website/docs/guides/zapier-refresh-tableau-workbook.md b/website/docs/guides/zapier-refresh-tableau-workbook.md
index 6e8621659f0..f614b64eaa2 100644
--- a/website/docs/guides/zapier-refresh-tableau-workbook.md
+++ b/website/docs/guides/zapier-refresh-tableau-workbook.md
@@ -1,7 +1,7 @@
---
title: "Refresh Tableau workbook with extracts after a job finishes"
id: zapier-refresh-tableau-workbook
-description: Use Zapier to trigger a Tableau workbook refresh once a dbt Cloud job completes successfully
+description: Use Zapier to trigger a Tableau workbook refresh once a dbt Cloud job completes successfully.
hoverSnippet: Learn how to use Zapier to trigger a Tableau workbook refresh once a dbt Cloud job completes successfully.
# time_to_complete: '30 minutes' commenting out until we test
icon: 'guides'
diff --git a/website/docs/guides/zapier-slack.md b/website/docs/guides/zapier-slack.md
index d103e4aa541..61b96658f95 100644
--- a/website/docs/guides/zapier-slack.md
+++ b/website/docs/guides/zapier-slack.md
@@ -1,7 +1,7 @@
---
title: "Post to Slack with error context when a job fails"
id: zapier-slack
-description: Use a webhook or Slack message to trigger Zapier and post error context in Slack when a job fails
+description: Use a webhook or Slack message to trigger Zapier and post error context in Slack when a job fails.
hoverSnippet: Learn how to use a webhook or Slack message to trigger Zapier to post error context in Slack when a job fails.
# time_to_complete: '30 minutes' commenting out until we test
icon: 'guides'
From 875eb0c59a2c3cba437305b84f0b783c1d8d97b9 Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 14:21:00 -0800
Subject: [PATCH 32/59] Update
website/docs/best-practices/best-practice-workflows.md
don't need the domain for links when page is within our own docs site
---
website/docs/best-practices/best-practice-workflows.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/best-practices/best-practice-workflows.md b/website/docs/best-practices/best-practice-workflows.md
index 4760aeff782..f06e785c6db 100644
--- a/website/docs/best-practices/best-practice-workflows.md
+++ b/website/docs/best-practices/best-practice-workflows.md
@@ -58,7 +58,7 @@ All subsequent data models should be built on top of these models, reducing the
Earlier versions of this documentation recommended implementing “base models” as the first layer of transformation, and gave advice on the SQL within these models. We realized that while the reasons behind this convention were valid, the specific advice around "base models" represented an opinion, so we moved it out of the official documentation.
-You can instead find our opinions on [how we structure our dbt projects](https://docs.getdbt.com/best-practices/how-we-structure/1-guide-overview).
+You can instead find our opinions on [how we structure our dbt projects](/best-practices/how-we-structure/1-guide-overview).
:::
From 164a80cc2d8baf62365c1c4ba7394f9de3b9b392 Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 14:27:52 -0800
Subject: [PATCH 33/59] Update
website/blog/2023-04-24-framework-refactor-alteryx-dbt.md
don't need the domain for links when page is within our own docs site
---
website/blog/2023-04-24-framework-refactor-alteryx-dbt.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/blog/2023-04-24-framework-refactor-alteryx-dbt.md b/website/blog/2023-04-24-framework-refactor-alteryx-dbt.md
index 9b6135b0984..0049a16ff39 100644
--- a/website/blog/2023-04-24-framework-refactor-alteryx-dbt.md
+++ b/website/blog/2023-04-24-framework-refactor-alteryx-dbt.md
@@ -94,7 +94,7 @@ It is essential to click on each data source (the green book icons on the leftmo
For this step, we identified which operators were used in the data source (for example, joining data, order columns, group by, etc). Usually the Alteryx operators are pretty self-explanatory and all the information needed for understanding appears on the left side of the menu. We also checked the documentation to understand how each Alteryx operator works behind the scenes.
-We followed dbt Labs' guide on how to refactor legacy SQL queries in dbt and some [best practices](https://docs.getdbt.com/guides/refactoring-legacy-sql). After we finished refactoring all the Alteryx workflows, we checked if the Alteryx output matched the output of the refactored model built on dbt.
+We followed dbt Labs' guide on how to refactor legacy SQL queries in dbt and some [best practices](/guides/refactoring-legacy-sql). After we finished refactoring all the Alteryx workflows, we checked if the Alteryx output matched the output of the refactored model built on dbt.
#### Step 3: Use the `audit_helper` package to audit refactored data models
From 96c344f7dbbd148451cded980e97d270146ddcdd Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 14:29:41 -0800
Subject: [PATCH 34/59] Update
website/blog/2023-04-24-framework-refactor-alteryx-dbt.md
don't need the domain for links when page is within our own docs site
---
website/blog/2023-04-24-framework-refactor-alteryx-dbt.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/blog/2023-04-24-framework-refactor-alteryx-dbt.md b/website/blog/2023-04-24-framework-refactor-alteryx-dbt.md
index 0049a16ff39..46cfcb58cdd 100644
--- a/website/blog/2023-04-24-framework-refactor-alteryx-dbt.md
+++ b/website/blog/2023-04-24-framework-refactor-alteryx-dbt.md
@@ -131,4 +131,4 @@ As we can see, refactoring Alteryx to dbt was an important step in the direction
>
> [Audit_helper in dbt: Bringing data auditing to a higher level](https://docs.getdbt.com/blog/audit-helper-for-migration)
>
-> [Refactoring legacy SQL to dbt](https://docs.getdbt.com/guides/refactoring-legacy-sql)
+> [Refactoring legacy SQL to dbt](/guides/refactoring-legacy-sql)
From ebbf184c3c582b3a814a411dca2c88e7c02705d6 Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 14:32:42 -0800
Subject: [PATCH 35/59] Update
website/docs/best-practices/how-we-structure/5-semantic-layer-marts.md
don't need the domain for links when page is within our own docs site
---
.../best-practices/how-we-structure/5-semantic-layer-marts.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/best-practices/how-we-structure/5-semantic-layer-marts.md b/website/docs/best-practices/how-we-structure/5-semantic-layer-marts.md
index aca0ca3f283..d064490354c 100644
--- a/website/docs/best-practices/how-we-structure/5-semantic-layer-marts.md
+++ b/website/docs/best-practices/how-we-structure/5-semantic-layer-marts.md
@@ -3,7 +3,7 @@ title: "Marts for the Semantic Layer"
id: "5-semantic-layer-marts"
---
-The Semantic Layer alters some fundamental principles of how you organize your project. Using dbt without the Semantic Layer necessitates creating the most useful combinations of your building block components into wide, denormalized marts. On the other hand, the Semantic Layer leverages MetricFlow to denormalize every possible combination of components we've encoded dynamically. As such we're better served to bring more normalized models through from the logical layer into the Semantic Layer to maximize flexibility. This section will assume familiarity with the best practices laid out in the [How we build our metrics](https://docs.getdbt.com/best-practices/how-we-build-our-metrics/semantic-layer-1-intro) guide, so check that out first for a more hands-on introduction to the Semantic Layer.
+The Semantic Layer alters some fundamental principles of how you organize your project. Using dbt without the Semantic Layer necessitates creating the most useful combinations of your building block components into wide, denormalized marts. On the other hand, the Semantic Layer leverages MetricFlow to denormalize every possible combination of components we've encoded dynamically. As such we're better served to bring more normalized models through from the logical layer into the Semantic Layer to maximize flexibility. This section will assume familiarity with the best practices laid out in the [How we build our metrics](/best-practices/how-we-build-our-metrics/semantic-layer-1-intro) guide, so check that out first for a more hands-on introduction to the Semantic Layer.
## Semantic Layer: Files and folders
From c0e825303403b7c94d1b31282a18bcaf99acf6fb Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 14:33:35 -0800
Subject: [PATCH 36/59] Update
website/docs/best-practices/how-we-structure/5-semantic-layer-marts.md
don't need the domain for links when page is within our own docs site
---
.../best-practices/how-we-structure/5-semantic-layer-marts.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/best-practices/how-we-structure/5-semantic-layer-marts.md b/website/docs/best-practices/how-we-structure/5-semantic-layer-marts.md
index d064490354c..62e07a72e36 100644
--- a/website/docs/best-practices/how-we-structure/5-semantic-layer-marts.md
+++ b/website/docs/best-practices/how-we-structure/5-semantic-layer-marts.md
@@ -39,7 +39,7 @@ models
## When to make a mart
- ❓ If we can go directly to staging models and it's better to serve normalized models to the Semantic Layer, then when, where, and why would we make a mart?
- - 🕰️ We have models that have measures but no time dimension to aggregate against. The details of this are laid out in the [Semantic Layer guide](https://docs.getdbt.com/best-practices/how-we-build-our-metrics/semantic-layer-1-intro) but in short, we need a time dimension to aggregate against in MetricFlow. Dimensional tables that
+ - 🕰️ We have models that have measures but no time dimension to aggregate against. The details of this are laid out in the [Semantic Layer guide](/best-practices/how-we-build-our-metrics/semantic-layer-1-intro) but in short, we need a time dimension to aggregate against in MetricFlow. Dimensional tables that
- 🧱 We want to **materialize** our model in various ways.
- 👯 We want to **version** our model.
- 🛒 We have various related models that make more sense as **one wider component**.
From 9f2b122fb0bbf492c50aa47d81fb54ebafd3bac2 Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 14:35:02 -0800
Subject: [PATCH 37/59] Update
website/docs/best-practices/how-we-style/6-how-we-style-conclusion.md
don't need the domain for links when page is within our own docs site
---
.../best-practices/how-we-style/6-how-we-style-conclusion.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/best-practices/how-we-style/6-how-we-style-conclusion.md b/website/docs/best-practices/how-we-style/6-how-we-style-conclusion.md
index 75551f095d3..24103861b97 100644
--- a/website/docs/best-practices/how-we-style/6-how-we-style-conclusion.md
+++ b/website/docs/best-practices/how-we-style/6-how-we-style-conclusion.md
@@ -31,7 +31,7 @@ Our models (typically) fit into two main categories:\
Things to note:
- There are different types of models that typically exist in each of the above categories. See [Model Layers](#model-layers) for more information.
-- Read [How we structure our dbt projects](https://docs.getdbt.com/best-practices/how-we-structure/1-guide-overview) for an example and more details around organization.
+- Read [How we structure our dbt projects](/best-practices/how-we-structure/1-guide-overview) for an example and more details around organization.
## Model Layers
From 09a859378a937ece3af59d87ec32b8b57c57bdc7 Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 14:45:55 -0800
Subject: [PATCH 38/59] Update
website/docs/docs/cloud/dbt-cloud-ide/dbt-cloud-tips.md
fix link
---
website/docs/docs/cloud/dbt-cloud-ide/dbt-cloud-tips.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/docs/cloud/dbt-cloud-ide/dbt-cloud-tips.md b/website/docs/docs/cloud/dbt-cloud-ide/dbt-cloud-tips.md
index b90ac1bce01..0ceb4929530 100644
--- a/website/docs/docs/cloud/dbt-cloud-ide/dbt-cloud-tips.md
+++ b/website/docs/docs/cloud/dbt-cloud-ide/dbt-cloud-tips.md
@@ -46,7 +46,7 @@ There are default keyboard shortcuts that can help make development more product
- Use [severity](/reference/resource-configs/severity) thresholds to set an acceptable number of failures for a test.
- Use [incremental_strategy](/docs/build/incremental-models#about-incremental_strategy) in your incremental model config to implement the most effective behavior depending on the volume of your data and reliability of your unique keys.
- Set `vars` in your `dbt_project.yml` to define global defaults for certain conditions, which you can then override using the `--vars` flag in your commands.
-- Use [for loops](/guides/using-jinja#use-a-for-loop-in-models-for-repeated-sql) in Jinja to [DRY](https://docs.getdbt.com/terms/dry) up repetitive logic, such as selecting a series of columns that all require the same transformations and naming patterns to be applied.
+- Use [for loops](/guides/using-jinja?step=3) in Jinja to DRY up repetitive logic, such as selecting a series of columns that all require the same transformations and naming patterns to be applied.
- Instead of relying on post-hooks, use the [grants config](/reference/resource-configs/grants) to apply permission grants in the warehouse resiliently.
- Define [source-freshness](/docs/build/sources#snapshotting-source-data-freshness) thresholds on your sources to avoid running transformations on data that has already been processed.
- Use the `+` operator on the left of a model `dbt build --select +model_name` to run a model and all of its upstream dependencies. Use the `+` operator on the right of the model `dbt build --select model_name+` to run a model and everything downstream that depends on it.
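The for-loop tip in this list is easiest to see with a concrete sketch. The model and column names below are hypothetical, and the payment method list is hard-coded for brevity (the linked guide derives it dynamically):

```sql
-- models/order_payment_amounts.sql (hypothetical)
-- One pivoted amount column per payment method, generated by a Jinja for loop
-- instead of hand-writing the same CASE expression several times.
{% set payment_methods = ['bank_transfer', 'credit_card', 'gift_card'] %}

select
    order_id,
    {% for payment_method in payment_methods %}
    sum(case when payment_method = '{{ payment_method }}' then amount else 0 end)
        as {{ payment_method }}_amount{% if not loop.last %},{% endif %}
    {% endfor %}
from {{ ref('stg_payments') }}  -- assumed staging model
group by 1
```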
From 17bc23ada5ef5e65c941a2b8eb8c0bea63cc6059 Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 14:54:50 -0800
Subject: [PATCH 39/59] Update
website/docs/docs/dbt-versions/release-notes/24-Nov-2022/dbt-databricks-unity-catalog-support.md
---
.../24-Nov-2022/dbt-databricks-unity-catalog-support.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/docs/dbt-versions/release-notes/24-Nov-2022/dbt-databricks-unity-catalog-support.md b/website/docs/docs/dbt-versions/release-notes/24-Nov-2022/dbt-databricks-unity-catalog-support.md
index ee46cb5f558..ce702434cf3 100644
--- a/website/docs/docs/dbt-versions/release-notes/24-Nov-2022/dbt-databricks-unity-catalog-support.md
+++ b/website/docs/docs/dbt-versions/release-notes/24-Nov-2022/dbt-databricks-unity-catalog-support.md
@@ -8,6 +8,6 @@ tags: [Nov-2022, v1.1.66.15]
dbt Cloud is the easiest and most reliable way to develop and deploy a dbt project. It helps remove complexity while also giving you more features and better performance. A simpler Databricks connection experience with support for Databricks’ Unity Catalog and better modeling defaults is now available for your use.
-For all the Databricks customers already using dbt Cloud with the dbt-spark adapter, you can now [migrate](/guides/migrate-from-spark-to-databricks) your connection to the [dbt-databricks adapter](https://docs.getdbt.com/reference/warehouse-setups/databricks-setup) to get the benefits. [Databricks](https://www.databricks.com/blog/2022/11/17/introducing-native-high-performance-integration-dbt-cloud.html) is committed to maintaining and improving the adapter, so this integrated experience will continue to provide the best of dbt and Databricks.
+For all the Databricks customers already using dbt Cloud with the dbt-spark adapter, you can now [migrate](/guides/migrate-from-spark-to-databricks) your connection to the [dbt-databricks adapter](/reference/warehouse-setups/databricks-setup) to get the benefits. [Databricks](https://www.databricks.com/blog/2022/11/17/introducing-native-high-performance-integration-dbt-cloud.html) is committed to maintaining and improving the adapter, so this integrated experience will continue to provide the best of dbt and Databricks.
Check out our [live blog post](https://www.getdbt.com/blog/dbt-cloud-databricks-experience/) to learn more.
From 830da91f8c4ea602ef16085331c46589425e3f27 Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 14:56:27 -0800
Subject: [PATCH 40/59] Update website/docs/docs/deploy/deploy-environments.md
---
website/docs/docs/deploy/deploy-environments.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/docs/deploy/deploy-environments.md b/website/docs/docs/deploy/deploy-environments.md
index 8f3353d07d1..237626dffc9 100644
--- a/website/docs/docs/deploy/deploy-environments.md
+++ b/website/docs/docs/deploy/deploy-environments.md
@@ -13,7 +13,7 @@ Deployment environments in dbt Cloud are crucial for deploying dbt jobs in produ
A dbt Cloud project can have multiple deployment environments, providing you the flexibility and customization to tailor the execution of dbt jobs. You can use deployment environments to [create and schedule jobs](/docs/deploy/deploy-jobs#create-and-schedule-jobs), [enable continuous integration](/docs/deploy/continuous-integration), or more based on your specific needs or requirements.
:::tip Learn how to manage dbt Cloud environments
-To learn different approaches to managing dbt Cloud environments and recommendations for your organization's unique needs, read [dbt Cloud environment best practices](https://docs.getdbt.com/best-practices/environment-setup/1-env-guide-overview).
+To learn different approaches to managing dbt Cloud environments and recommendations for your organization's unique needs, read [dbt Cloud environment best practices](/best-practices/environment-setup/1-env-guide-overview).
:::
This page reviews the different types of environments and how to configure your deployment environment in dbt Cloud.
From 3d3da06e91ebebd4ff9e2bdbafd0df7f6498f290 Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 14:57:41 -0800
Subject: [PATCH 41/59] Update website/docs/docs/deploy/deploy-environments.md
---
website/docs/docs/deploy/deploy-environments.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/docs/deploy/deploy-environments.md b/website/docs/docs/deploy/deploy-environments.md
index 237626dffc9..21308784434 100644
--- a/website/docs/docs/deploy/deploy-environments.md
+++ b/website/docs/docs/deploy/deploy-environments.md
@@ -186,7 +186,7 @@ This section allows you to determine the credentials that should be used when co
## Related docs
-- [dbt Cloud environment best practices](https://docs.getdbt.com/best-practices/environment-setup/1-env-guide-overview)
+- [dbt Cloud environment best practices](/best-practices/environment-setup/1-env-guide-overview)
- [Deploy jobs](/docs/deploy/deploy-jobs)
- [CI jobs](/docs/deploy/continuous-integration)
- [Delete a job or environment in dbt Cloud](/faqs/Environments/delete-environment-job)
From 4d0ba52ad55c91e0fc8e6e465444eb8f6c5da5ba Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 14:59:36 -0800
Subject: [PATCH 42/59] Update website/docs/docs/environments-in-dbt.md
---
website/docs/docs/environments-in-dbt.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/docs/environments-in-dbt.md b/website/docs/docs/environments-in-dbt.md
index 0361a272c4f..ab899b09516 100644
--- a/website/docs/docs/environments-in-dbt.md
+++ b/website/docs/docs/environments-in-dbt.md
@@ -33,7 +33,7 @@ Configure environments to tell dbt Cloud or dbt Core how to build and execute yo
## Related docs
-- [dbt Cloud environment best practices](https://docs.getdbt.com/best-practices/environment-setup/1-env-guide-overview)
+- [dbt Cloud environment best practices](/best-practices/environment-setup/1-env-guide-overview)
- [Deployment environments](/docs/deploy/deploy-environments)
- [About dbt Core versions](/docs/dbt-versions/core)
- [Set Environment variables in dbt Cloud](/docs/build/environment-variables#special-environment-variables)
From 29e0de831e8f07cfbb187151af52c894bce25be1 Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 15:01:57 -0800
Subject: [PATCH 43/59] Update
website/docs/faqs/Project/multiple-resource-yml-files.md
---
website/docs/faqs/Project/multiple-resource-yml-files.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/faqs/Project/multiple-resource-yml-files.md b/website/docs/faqs/Project/multiple-resource-yml-files.md
index a60c198de5d..04e1702a162 100644
--- a/website/docs/faqs/Project/multiple-resource-yml-files.md
+++ b/website/docs/faqs/Project/multiple-resource-yml-files.md
@@ -9,4 +9,4 @@ It's up to you:
- Some folks find it useful to have one file per model (or source / snapshot / seed etc)
- Some find it useful to have one per directory, documenting and testing multiple models in one file
-Choose what works for your team. We have more recommendations in our guide on [structuring dbt projects](https://docs.getdbt.com/best-practices/how-we-structure/1-guide-overview).
+Choose what works for your team. We have more recommendations in our guide on [structuring dbt projects](/best-practices/how-we-structure/1-guide-overview).
From 48ad5e25e5665cd81a47a2ab5934d1e106667b64 Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 15:02:47 -0800
Subject: [PATCH 44/59] Update website/docs/faqs/Project/resource-yml-name.md
---
website/docs/faqs/Project/resource-yml-name.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/faqs/Project/resource-yml-name.md b/website/docs/faqs/Project/resource-yml-name.md
index 78d541cbd54..c26cff26474 100644
--- a/website/docs/faqs/Project/resource-yml-name.md
+++ b/website/docs/faqs/Project/resource-yml-name.md
@@ -10,4 +10,4 @@ It's up to you! Here's a few options:
- Use the same name as your directory (assuming you're using sensible names for your directories)
- If you test and document one model (or seed, snapshot, macro etc.) per file, you can give it the same name as the model (or seed, snapshot, macro etc.)
-Choose what works for your team. We have more recommendations in our guide on [structuring dbt projects](https://docs.getdbt.com/best-practices/how-we-structure/1-guide-overview).
+Choose what works for your team. We have more recommendations in our guide on [structuring dbt projects](/best-practices/how-we-structure/1-guide-overview).
From 6d4959f9242d3036d0be114ab4a2d2b1ac2ae3e8 Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 15:03:44 -0800
Subject: [PATCH 45/59] Update website/docs/faqs/Project/structure-a-project.md
---
website/docs/faqs/Project/structure-a-project.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/faqs/Project/structure-a-project.md b/website/docs/faqs/Project/structure-a-project.md
index 136c5b188bf..a9ef53f5c8f 100644
--- a/website/docs/faqs/Project/structure-a-project.md
+++ b/website/docs/faqs/Project/structure-a-project.md
@@ -8,4 +8,4 @@ id: structure-a-project
There's no one best way to structure a project! Every organization is unique.
-If you're just getting started, check out how we (dbt Labs) [structure our dbt projects](https://docs.getdbt.com/best-practices/how-we-structure/1-guide-overview).
+If you're just getting started, check out how we (dbt Labs) [structure our dbt projects](/best-practices/how-we-structure/1-guide-overview).
From 3b4e603ce5b1a7a578558803fde2b543f17f7e26 Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 15:08:44 -0800
Subject: [PATCH 46/59] Update website/docs/guides/debug-schema-names.md
---
website/docs/guides/debug-schema-names.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/guides/debug-schema-names.md b/website/docs/guides/debug-schema-names.md
index 171e1544a19..795128d83ff 100644
--- a/website/docs/guides/debug-schema-names.md
+++ b/website/docs/guides/debug-schema-names.md
@@ -27,7 +27,7 @@ You can also follow along via this video:
## Search for a macro named `generate_schema_name`
Do a file search to check if you have a macro named `generate_schema_name` in the `macros` directory of your project.
-### You do not have a macro named `generate_schema_name` in my project
+### You do not have a macro named `generate_schema_name` in your project
This means that you are using dbt's default implementation of the macro, as defined [here](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/include/global_project/macros/get_custom_name/get_custom_schema.sql#L47C1-L60)
```sql
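{# From-memory sketch of the default macro the guide links to above; check the linked
   dbt-core source for the authoritative version. The logic: with no custom schema
   configured, use the target schema as-is; otherwise append the custom schema to it. #}
{% macro generate_schema_name(custom_schema_name, node) -%}
    {%- set default_schema = target.schema -%}
    {%- if custom_schema_name is none -%}
        {{ default_schema }}
    {%- else -%}
        {{ default_schema }}_{{ custom_schema_name | trim }}
    {%- endif -%}
{%- endmacro %}
```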
From c77fe1ebf695fe8c2e6edc25c984dc649e0c0fa0 Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 15:10:26 -0800
Subject: [PATCH 47/59] Update website/docs/guides/debug-schema-names.md
---
website/docs/guides/debug-schema-names.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/guides/debug-schema-names.md b/website/docs/guides/debug-schema-names.md
index 795128d83ff..c7bf1a195b1 100644
--- a/website/docs/guides/debug-schema-names.md
+++ b/website/docs/guides/debug-schema-names.md
@@ -49,7 +49,7 @@ This means that you are using dbt's default implementation of the macro, as defi
Note that this logic is designed so that two dbt users won't accidentally overwrite each other's work by writing to the same schema.
-### You have a `generate_schema_name` macro in my project that calls another macro
+### You have a `generate_schema_name` macro in your project that calls another macro
If your `generate_schema_name` macro looks like so:
```sql
{% macro generate_schema_name(custom_schema_name, node) -%}
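    {# Hypothetical body illustrating the "calls another macro" case this section covers,
       e.g. delegating to dbt's built-in generate_schema_name_for_env macro. #}
    {{ generate_schema_name_for_env(custom_schema_name, node) }}
{%- endmacro %}
```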
From 5013b67b56b9e5be1ff0b5c27cd3c149d785a73e Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 15:17:22 -0800
Subject: [PATCH 48/59] Update
website/docs/sql-reference/aggregate-functions/sql-array-agg.md
---
website/docs/sql-reference/aggregate-functions/sql-array-agg.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/sql-reference/aggregate-functions/sql-array-agg.md b/website/docs/sql-reference/aggregate-functions/sql-array-agg.md
index 9f4af7ca1fc..a6f508a7bef 100644
--- a/website/docs/sql-reference/aggregate-functions/sql-array-agg.md
+++ b/website/docs/sql-reference/aggregate-functions/sql-array-agg.md
@@ -59,4 +59,4 @@ Looking at the query results—this makes sense! We’d expect newer orders to l
There are definitely too many use cases to list out for using the ARRAY_AGG function in your dbt models, but it’s very likely that ARRAY_AGG is used pretty far downstream, since you likely don’t want your data bundled up earlier in your DAG, in order to preserve modularity and DRYness. A few downstream use cases for ARRAY_AGG:
- In [`export_` models](https://www.getdbt.com/open-source-data-culture/reverse-etl-playbook) that are used to send data to platforms using a tool to pair down multiple rows into a single row. Some downstream platforms, for example, require certain values that we’d usually keep as separate rows to be one singular row per customer or user. ARRAY_AGG is handy to bring multiple column values together by a singular id, such as creating an array of all items a user has ever purchased and sending that array downstream to an email platform to create a custom email campaign.
-- Similar to export models, you may see ARRAY_AGG used in [mart tables](https://docs.getdbt.com/best-practices/how-we-structure/4-marts) to create final aggregate arrays per a singular dimension; performance concerns of ARRAY_AGG in these likely larger tables can potentially be bypassed with use of [incremental models in dbt](https://docs.getdbt.com/docs/build/incremental-models).
+- Similar to export models, you may see ARRAY_AGG used in [mart tables](/best-practices/how-we-structure/4-marts) to create final aggregate arrays per a singular dimension; performance concerns of ARRAY_AGG in these likely larger tables can potentially be bypassed with use of [incremental models in dbt](/docs/build/incremental-models).
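A hedged sketch of the mart-level use case above; the table and column names are hypothetical:

```sql
-- Hypothetical mart/export model: one row per customer with an array of every
-- product they have purchased, ready to send to a downstream platform.
select
    customer_id,
    array_agg(product_name) as purchased_products
from {{ ref('fct_order_items') }}  -- assumed upstream fact model
group by 1
```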
From b49f8c40e962646199d1ab254104bac8af790243 Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 15:18:24 -0800
Subject: [PATCH 49/59] Update
website/docs/sql-reference/aggregate-functions/sql-avg.md
---
website/docs/sql-reference/aggregate-functions/sql-avg.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/sql-reference/aggregate-functions/sql-avg.md b/website/docs/sql-reference/aggregate-functions/sql-avg.md
index afb766f12e2..d1dba119292 100644
--- a/website/docs/sql-reference/aggregate-functions/sql-avg.md
+++ b/website/docs/sql-reference/aggregate-functions/sql-avg.md
@@ -48,7 +48,7 @@ Snowflake, Databricks, Google BigQuery, and Amazon Redshift all support the abil
## AVG function use cases
We most commonly see the AVG function used in data work to calculate:
-- The average of key metrics (ex. Average CSAT, average lead time, average order amount) in downstream [fact or dim models](https://docs.getdbt.com/best-practices/how-we-structure/4-marts)
+- The average of key metrics (ex. Average CSAT, average lead time, average order amount) in downstream [fact or dim models](/best-practices/how-we-structure/4-marts)
- Rolling or moving averages (ex. 7-day, 30-day averages for key metrics) using window functions
- Averages in [dbt metrics](https://docs.getdbt.com/docs/build/metrics)
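A hedged sketch of the rolling-average bullet in the list above, assuming a hypothetical daily revenue model:

```sql
-- 7-day rolling average of daily revenue using AVG as a window function.
select
    order_date,
    daily_revenue,
    avg(daily_revenue) over (
        order by order_date
        rows between 6 preceding and current row
    ) as revenue_7_day_rolling_avg
from {{ ref('fct_daily_revenue') }}  -- assumed upstream daily aggregate
```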
From ace443fcd3484b674408c434f59834ea9f483063 Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 15:19:51 -0800
Subject: [PATCH 50/59] Update
website/docs/sql-reference/aggregate-functions/sql-round.md
---
website/docs/sql-reference/aggregate-functions/sql-round.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/sql-reference/aggregate-functions/sql-round.md b/website/docs/sql-reference/aggregate-functions/sql-round.md
index 7652c881789..bc9669e22cb 100644
--- a/website/docs/sql-reference/aggregate-functions/sql-round.md
+++ b/website/docs/sql-reference/aggregate-functions/sql-round.md
@@ -57,7 +57,7 @@ Google BigQuery, Amazon Redshift, Snowflake, and Databricks all support the abil
## ROUND function use cases
-If you find yourself rounding numeric data, either in data models or ad-hoc analyses, you’re probably rounding to improve the readability and usability of your data using downstream [intermediate](https://docs.getdbt.com/best-practices/how-we-structure/3-intermediate) or [mart models](https://docs.getdbt.com/best-practices/how-we-structure/4-marts). Specifically, you’ll likely use the ROUND function to:
+If you find yourself rounding numeric data, either in data models or ad-hoc analyses, you’re probably rounding to improve the readability and usability of your data using downstream [intermediate](/best-practices/how-we-structure/3-intermediate) or [mart models](/best-practices/how-we-structure/4-marts). Specifically, you’ll likely use the ROUND function to:
- Make numeric calculations using division or averages a little cleaner and easier to understand
- Create concrete buckets of data for a cleaner distribution of values during ad-hoc analysis
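For instance, a hedged sketch of the first use case above, with hypothetical names:

```sql
-- Round an average to two decimal places so the metric is easier to read downstream.
select
    customer_id,
    round(avg(order_total), 2) as avg_order_total
from {{ ref('fct_orders') }}  -- assumed upstream fact model
group by 1
```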
From f16389426375a6f97782a75d0199bba89a3e3d4d Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 15:21:07 -0800
Subject: [PATCH 51/59] Update website/docs/sql-reference/clauses/sql-limit.md
---
website/docs/sql-reference/clauses/sql-limit.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/sql-reference/clauses/sql-limit.md b/website/docs/sql-reference/clauses/sql-limit.md
index a2c49866592..a02b851e37d 100644
--- a/website/docs/sql-reference/clauses/sql-limit.md
+++ b/website/docs/sql-reference/clauses/sql-limit.md
@@ -51,7 +51,7 @@ This simple query using the [Jaffle Shop’s](https://github.com/dbt-labs/jaffle
After ensuring that this is the result you want from this query, you can omit the LIMIT in your final data model.
:::tip Save money and time by limiting data in development
-You could limit your data used for development by manually adding a LIMIT statement, a WHERE clause to your query, or by using a [dbt macro to automatically limit data based](https://docs.getdbt.com/best-practices/best-practice-workflows#limit-the-data-processed-when-in-development) on your development environment to help reduce your warehouse usage during dev periods.
+You could limit the data used for development by manually adding a LIMIT statement or a WHERE clause to your query, or by using a [dbt macro to automatically limit data](/best-practices/best-practice-workflows#limit-the-data-processed-when-in-development) based on your development environment, which helps reduce your warehouse usage during dev periods.
:::
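One common variant of the macro-based approach mentioned in the tip keys the limit off the target name; this sketch assumes the default `dev` target and a hypothetical staging model:

```sql
-- Only limit rows when the model is built against the dev target.
select *
from {{ ref('stg_orders') }}
{% if target.name == 'dev' %}
limit 500
{% endif %}
```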
## LIMIT syntax in Snowflake, Databricks, BigQuery, and Redshift
From 4036a140d8d47b1471cd3810a62f275587e7e68c Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 15:26:02 -0800
Subject: [PATCH 52/59] Update
website/docs/sql-reference/clauses/sql-order-by.md
---
website/docs/sql-reference/clauses/sql-order-by.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/sql-reference/clauses/sql-order-by.md b/website/docs/sql-reference/clauses/sql-order-by.md
index 39337de1e48..d18946d0d16 100644
--- a/website/docs/sql-reference/clauses/sql-order-by.md
+++ b/website/docs/sql-reference/clauses/sql-order-by.md
@@ -57,7 +57,7 @@ Since the ORDER BY clause is a SQL fundamental, data warehouses, including Snowf
## ORDER BY use cases
We most commonly see the ORDER BY clause used in data work to:
-- Analyze data for both initial exploration of raw data sources and ad hoc querying of [mart datasets](https://docs.getdbt.com/best-practices/how-we-structure/4-marts)
+- Analyze data for both initial exploration of raw data sources and ad hoc querying of [mart datasets](/best-practices/how-we-structure/4-marts)
- Identify the top 5/10/50/100 of a dataset when used in pair with a [LIMIT](/sql-reference/limit)
- (For Snowflake) Optimize the performance of large incremental models that use both a `cluster_by` [configuration](https://docs.getdbt.com/reference/resource-configs/snowflake-configs#using-cluster_by) and ORDER BY statement
- Control the ordering of window function partitions (ex. `row_number() over (partition by user_id order by updated_at)`)
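A hedged sketch of the first two bullets above combined (ad hoc exploration plus a top-N lookup); the names are hypothetical:

```sql
-- Top 10 customers by lifetime revenue from a mart model.
select
    customer_id,
    lifetime_revenue
from {{ ref('dim_customers') }}
order by lifetime_revenue desc
limit 10
```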
From ec705a277db0c95959b503dcc6d9adc59fae470e Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 15:26:34 -0800
Subject: [PATCH 53/59] Update
website/docs/sql-reference/joins/sql-self-join.md
---
website/docs/sql-reference/joins/sql-self-join.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/sql-reference/joins/sql-self-join.md b/website/docs/sql-reference/joins/sql-self-join.md
index bb4237319f0..6d9a7d3261e 100644
--- a/website/docs/sql-reference/joins/sql-self-join.md
+++ b/website/docs/sql-reference/joins/sql-self-join.md
@@ -66,6 +66,6 @@ This query utilizing a self join adds the `parent_name` of skus that have non-nu
## SQL self join use cases
-Again, self joins are probably rare in your dbt project and will most often be utilized in tables that contain a hierarchical structure, such as consisting of a column which is a foreign key to the primary key of the same table. If you do have use cases for self joins, such as in the example above, you’ll typically want to perform that self join early upstream in your , such as in a [staging](https://docs.getdbt.com/best-practices/how-we-structure/2-staging) or [intermediate](https://docs.getdbt.com/best-practices/how-we-structure/3-intermediate) model; if your raw, unjoined table is going to need to be accessed further downstream sans self join, that self join should happen in a modular intermediate model.
+Again, self joins are probably rare in your dbt project and will most often be used in tables that contain a hierarchical structure, such as a column that is a foreign key to the primary key of the same table. If you do have use cases for self joins, such as in the example above, you’ll typically want to perform that self join early upstream, such as in a [staging](/best-practices/how-we-structure/2-staging) or [intermediate](/best-practices/how-we-structure/3-intermediate) model; if your raw, unjoined table needs to be accessed further downstream without the self join, that self join should happen in a modular intermediate model.
You can also use self joins to create a cartesian product (aka a cross join) of a table against itself. Again, slim use cases, but still there for you if you need it 😉
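A hedged sketch of the hierarchical case described above, using hypothetical product/parent columns in an intermediate model:

```sql
-- models/intermediate/int_products_with_parents.sql (hypothetical)
-- Self join a products model to attach each product's parent product name.
select
    products.product_id,
    products.product_name,
    parents.product_name as parent_product_name
from {{ ref('stg_products') }} as products
left join {{ ref('stg_products') }} as parents
    on products.parent_product_id = parents.product_id
```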
From 2a07b7b0294ae3421d29b376fe84026110a99a2e Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 15:27:29 -0800
Subject: [PATCH 54/59] Update
website/docs/sql-reference/joins/sql-left-join.md
---
website/docs/sql-reference/joins/sql-left-join.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/sql-reference/joins/sql-left-join.md b/website/docs/sql-reference/joins/sql-left-join.md
index 24fbb2bfa0c..914f83bb7e3 100644
--- a/website/docs/sql-reference/joins/sql-left-join.md
+++ b/website/docs/sql-reference/joins/sql-left-join.md
@@ -73,4 +73,4 @@ Left joins are a fundamental in data modeling and analytics engineering work—t
Something to note if you use left joins: if there are multiple records for an individual key in the left join database object, be aware that duplicates can potentially be introduced in the final query result. This is where dbt tests, such as testing for uniqueness and [equal row count](https://github.com/dbt-labs/dbt-utils#equal_rowcount-source) across upstream source tables and downstream child models, can help you identify faulty data modeling logic and improve data quality.
:::
-Where you will not (and should not) see left joins is in [staging models](https://docs.getdbt.com/best-practices/how-we-structure/2-staging) that are used to clean and prep raw source data for analytics uses. Any joins in your dbt projects should happen further downstream in [intermediate](https://docs.getdbt.com/best-practices/how-we-structure/3-intermediate) and [mart models](https://docs.getdbt.com/best-practices/how-we-structure/4-marts) to improve modularity and cleanliness.
+Where you will not (and should not) see left joins is in [staging models](/best-practices/how-we-structure/2-staging) that are used to clean and prep raw source data for analytics uses. Any joins in your dbt projects should happen further downstream in [intermediate](/best-practices/how-we-structure/3-intermediate) and [mart models](/best-practices/how-we-structure/4-marts) to improve modularity and cleanliness.
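A hedged sketch of the fan-out caveat above, with hypothetical models; if a customer has several rows in the right-hand table, the result gains one row per match:

```sql
-- models/intermediate/int_customer_subscriptions.sql (hypothetical)
-- Left join keeps every customer, even those with no subscription (NULL columns),
-- and duplicates a customer row for each matching subscription.
select
    customers.customer_id,
    customers.customer_name,
    subscriptions.subscription_id,
    subscriptions.plan_name
from {{ ref('stg_customers') }} as customers
left join {{ ref('stg_subscriptions') }} as subscriptions
    on customers.customer_id = subscriptions.customer_id
```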
From 7c065e1d1567be21ad7938589cfb6fbb1d89a8bf Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 15:28:07 -0800
Subject: [PATCH 55/59] Update
website/docs/sql-reference/joins/sql-inner-join.md
---
website/docs/sql-reference/joins/sql-inner-join.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/sql-reference/joins/sql-inner-join.md b/website/docs/sql-reference/joins/sql-inner-join.md
index e1c2d6151c8..951e3675bc7 100644
--- a/website/docs/sql-reference/joins/sql-inner-join.md
+++ b/website/docs/sql-reference/joins/sql-inner-join.md
@@ -66,5 +66,5 @@ Because there’s no `user_id` = 4 in Table A and no `user_id` = 2 in Table B, r
## SQL inner join use cases
-There are probably countless scenarios where you’d want to inner join multiple tables together—perhaps you have some really nicely structured tables with the exact same primary keys that should really just be one larger, wider table or you’re joining two tables together don’t want any null or missing column values if you used a left or right join—it’s all pretty dependent on your source data and end use cases. Where you will not (and should not) see inner joins is in [staging models](https://docs.getdbt.com/best-practices/how-we-structure/2-staging) that are used to clean and prep raw source data for analytics uses. Any joins in your dbt projects should happen further downstream in [intermediate](https://docs.getdbt.com/best-practices/how-we-structure/3-intermediate) and [mart models](https://docs.getdbt.com/best-practices/how-we-structure/4-marts) to improve modularity and DAG cleanliness.
+There are probably countless scenarios where you’d want to inner join multiple tables together—perhaps you have some really nicely structured tables with the exact same primary keys that should really just be one larger, wider table, or you’re joining two tables together and don’t want any of the null or missing column values you’d get with a left or right join—it’s all pretty dependent on your source data and end use cases. Where you will not (and should not) see inner joins is in [staging models](/best-practices/how-we-structure/2-staging) that are used to clean and prep raw source data for analytics uses. Any joins in your dbt projects should happen further downstream in [intermediate](/best-practices/how-we-structure/3-intermediate) and [mart models](/best-practices/how-we-structure/4-marts) to improve modularity and DAG cleanliness.
From d866035e5cfc05742c827f19401e26ba42964bf1 Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 15:50:08 -0800
Subject: [PATCH 56/59] Update website/docs/guides/custom-cicd-pipelines.md
per style guide, use sentence case for titles
---
website/docs/guides/custom-cicd-pipelines.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/guides/custom-cicd-pipelines.md b/website/docs/guides/custom-cicd-pipelines.md
index bf781204fc5..672c6e6dab8 100644
--- a/website/docs/guides/custom-cicd-pipelines.md
+++ b/website/docs/guides/custom-cicd-pipelines.md
@@ -1,5 +1,5 @@
---
-title: Customizing CI/CD with Custom Pipelines
+title: Customizing CI/CD with custom pipelines
id: custom-cicd-pipelines
description: "Learn the benefits of version-controlled analytics code and custom pipelines in dbt for enhanced code testing and workflow automation during the development process."
displayText: Learn version-controlled code, custom pipelines, and enhanced code testing.
From 0fe005a39997e67dde10184ec3bb7b91377d3f37 Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 15:53:06 -0800
Subject: [PATCH 57/59] Update website/docs/docs/cloud/billing.md
---
website/docs/docs/cloud/billing.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/docs/cloud/billing.md b/website/docs/docs/cloud/billing.md
index f66e2aad363..31b7689ceb9 100644
--- a/website/docs/docs/cloud/billing.md
+++ b/website/docs/docs/cloud/billing.md
@@ -215,7 +215,7 @@ If you want to ensure that you're building views whenever the logic is changed,
Executing `dbt build` in this context is unnecessary because the CI job was used to both run and test the code that just got merged into main.
5. Under the **Execution Settings**, select the default production job to compare changes against:
- **Defer to a previous run state** — Select the “Merge Job” you created so the job compares and identifies what has changed since the last merge.
-6. In your dbt project, follow the steps in Run a dbt Cloud job on merge in the [Customizing CI/CD with Custom Pipelines](/guides/custom-cicd-pipelines) guide to create a script to trigger the dbt Cloud API to run your job after a merge happens within your git repository or watch this [video](https://www.loom.com/share/e7035c61dbed47d2b9b36b5effd5ee78?sid=bcf4dd2e-b249-4e5d-b173-8ca204d9becb).
+6. In your dbt project, follow the steps in "Run a dbt Cloud job on merge" in the [Customizing CI/CD with custom pipelines](/guides/custom-cicd-pipelines) guide to create a script that triggers the dbt Cloud API to run your job after a merge happens within your git repository, or watch this [video](https://www.loom.com/share/e7035c61dbed47d2b9b36b5effd5ee78?sid=bcf4dd2e-b249-4e5d-b173-8ca204d9becb).
The purpose of the merge job is to:
From 7de9d7285fe915a7db1c12e25c7f2de5f506f036 Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Fri, 10 Nov 2023 15:54:03 -0800
Subject: [PATCH 58/59] Update
website/docs/docs/dbt-versions/release-notes/07-June-2023/product-docs-jun.md
---
.../dbt-versions/release-notes/07-June-2023/product-docs-jun.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/docs/dbt-versions/release-notes/07-June-2023/product-docs-jun.md b/website/docs/docs/dbt-versions/release-notes/07-June-2023/product-docs-jun.md
index 4ead401a759..db73597cd63 100644
--- a/website/docs/docs/dbt-versions/release-notes/07-June-2023/product-docs-jun.md
+++ b/website/docs/docs/dbt-versions/release-notes/07-June-2023/product-docs-jun.md
@@ -32,4 +32,4 @@ Here's what's new to [docs.getdbt.com](http://docs.getdbt.com/) in June:
## New 📚 Guides, ✏️ blog posts, and FAQs
-- Add an Azure DevOps example in the [Customizing CI/CD with Custom Pipelines](/guides/custom-cicd-pipelines) guide.
+- Add an Azure DevOps example in the [Customizing CI/CD with custom pipelines](/guides/custom-cicd-pipelines) guide.
From 6d1473150d8105dd70e30fdaff3190c01610794b Mon Sep 17 00:00:00 2001
From: Ly Nguyen
Date: Fri, 10 Nov 2023 16:21:25 -0800
Subject: [PATCH 59/59] Fix old redirects, remove duplicate steps
---
.../dbt-databricks-unity-catalog-support.md | 2 +-
.../docs/docs/deploy/deploy-environments.md | 4 ++--
website/docs/docs/environments-in-dbt.md | 2 +-
website/docs/guides/dbt-python-snowpark.md | 21 -------------------
4 files changed, 4 insertions(+), 25 deletions(-)
diff --git a/website/docs/docs/dbt-versions/release-notes/24-Nov-2022/dbt-databricks-unity-catalog-support.md b/website/docs/docs/dbt-versions/release-notes/24-Nov-2022/dbt-databricks-unity-catalog-support.md
index ce702434cf3..012615e1e4e 100644
--- a/website/docs/docs/dbt-versions/release-notes/24-Nov-2022/dbt-databricks-unity-catalog-support.md
+++ b/website/docs/docs/dbt-versions/release-notes/24-Nov-2022/dbt-databricks-unity-catalog-support.md
@@ -8,6 +8,6 @@ tags: [Nov-2022, v1.1.66.15]
dbt Cloud is the easiest and most reliable way to develop and deploy a dbt project. It helps remove complexity while also giving you more features and better performance. A simpler Databricks connection experience with support for Databricks’ Unity Catalog and better modeling defaults is now available for your use.
-For all the Databricks customers already using dbt Cloud with the dbt-spark adapter, you can now [migrate](/guides/migrate-from-spark-to-databricks) your connection to the [dbt-databricks adapter](/reference/warehouse-setups/databricks-setup) to get the benefits. [Databricks](https://www.databricks.com/blog/2022/11/17/introducing-native-high-performance-integration-dbt-cloud.html) is committed to maintaining and improving the adapter, so this integrated experience will continue to provide the best of dbt and Databricks.
+For all the Databricks customers already using dbt Cloud with the dbt-spark adapter, you can now [migrate](/guides/migrate-from-spark-to-databricks) your connection to the [dbt-databricks adapter](/docs/core/connect-data-platform/databricks-setup) to get the benefits. [Databricks](https://www.databricks.com/blog/2022/11/17/introducing-native-high-performance-integration-dbt-cloud.html) is committed to maintaining and improving the adapter, so this integrated experience will continue to provide the best of dbt and Databricks.
Check out our [live blog post](https://www.getdbt.com/blog/dbt-cloud-databricks-experience/) to learn more.
diff --git a/website/docs/docs/deploy/deploy-environments.md b/website/docs/docs/deploy/deploy-environments.md
index 21308784434..650fdb1c28a 100644
--- a/website/docs/docs/deploy/deploy-environments.md
+++ b/website/docs/docs/deploy/deploy-environments.md
@@ -13,7 +13,7 @@ Deployment environments in dbt Cloud are crucial for deploying dbt jobs in produ
A dbt Cloud project can have multiple deployment environments, providing you the flexibility and customization to tailor the execution of dbt jobs. You can use deployment environments to [create and schedule jobs](/docs/deploy/deploy-jobs#create-and-schedule-jobs), [enable continuous integration](/docs/deploy/continuous-integration), or more based on your specific needs or requirements.
:::tip Learn how to manage dbt Cloud environments
-To learn different approaches to managing dbt Cloud environments and recommendations for your organization's unique needs, read [dbt Cloud environment best practices](/best-practices/environment-setup/1-env-guide-overview).
+To learn different approaches to managing dbt Cloud environments and recommendations for your organization's unique needs, read [dbt Cloud environment best practices](/guides/set-up-ci).
:::
This page reviews the different types of environments and how to configure your deployment environment in dbt Cloud.
@@ -186,7 +186,7 @@ This section allows you to determine the credentials that should be used when co
## Related docs
-- [dbt Cloud environment best practices](/best-practices/environment-setup/1-env-guide-overview)
+- [dbt Cloud environment best practices](/guides/set-up-ci)
- [Deploy jobs](/docs/deploy/deploy-jobs)
- [CI jobs](/docs/deploy/continuous-integration)
- [Delete a job or environment in dbt Cloud](/faqs/Environments/delete-environment-job)
diff --git a/website/docs/docs/environments-in-dbt.md b/website/docs/docs/environments-in-dbt.md
index ab899b09516..f0691761dd6 100644
--- a/website/docs/docs/environments-in-dbt.md
+++ b/website/docs/docs/environments-in-dbt.md
@@ -33,7 +33,7 @@ Configure environments to tell dbt Cloud or dbt Core how to build and execute yo
## Related docs
-- [dbt Cloud environment best practices](/best-practices/environment-setup/1-env-guide-overview)
+- [dbt Cloud environment best practices](/guides/set-up-ci)
- [Deployment environments](/docs/deploy/deploy-environments)
- [About dbt Core versions](/docs/dbt-versions/core)
- [Set Environment variables in dbt Cloud](/docs/build/environment-variables#special-environment-variables)
diff --git a/website/docs/guides/dbt-python-snowpark.md b/website/docs/guides/dbt-python-snowpark.md
index 35842eb8d91..55e6b68c172 100644
--- a/website/docs/guides/dbt-python-snowpark.md
+++ b/website/docs/guides/dbt-python-snowpark.md
@@ -67,27 +67,6 @@ Overall we are going to set up the environments, build scalable pipelines in dbt
6. Finally, create a new Worksheet by selecting **+ Worksheet** in the upper right corner.
-
-33 1. Log in to your trial Snowflake account. You can [sign up for a Snowflake Trial Account using this form](https://signup.snowflake.com/) if you don’t have one.
-2. Ensure that your account is set up using **AWS** in the **US East (N. Virginia)**. We will be copying the data from a public AWS S3 bucket hosted by dbt Labs in the us-east-1 region. By ensuring our Snowflake environment setup matches our bucket region, we avoid any multi-region data copy and retrieval latency issues.
-
-
-
-3. After creating your account and verifying it from your sign-up email, Snowflake will direct you back to the UI called Snowsight.
-
-4. When Snowsight first opens, your window should look like the following, with you logged in as the ACCOUNTADMIN with demo worksheets open:
-
-
-
-
-5. Navigate to **Admin > Billing & Terms**. Click **Enable > Acknowledge & Continue** to enable Anaconda Python Packages to run in Snowflake.
-
-
-
-
-
-6. Finally, create a new Worksheet by selecting **+ Worksheet** in the upper right corner.
-
## Connect to data source
We need to obtain our data source by copying our Formula 1 data into Snowflake tables from a public S3 bucket that dbt Labs hosts.