From 7cd0cf91b7b8eb13bd256ff44dd9600791117d62 Mon Sep 17 00:00:00 2001
From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com>
Date: Fri, 27 Oct 2023 17:43:49 -0700
Subject: [PATCH] fixing best-practice links

---
 website/blog/2021-02-05-dbt-project-checklist.md | 2 +-
 ...2-08-12-how-we-shaved-90-minutes-off-long-running-model.md | 2 +-
 .../docs/guides/migration/versions/08-upgrading-to-v1.0.md | 2 +-
 website/docs/reference/node-selection/syntax.md | 4 ++--
 website/docs/sql-reference/clauses/sql-limit.md | 2 +-
 5 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/website/blog/2021-02-05-dbt-project-checklist.md b/website/blog/2021-02-05-dbt-project-checklist.md
index dbe2c10f408..afca49af4b4 100644
--- a/website/blog/2021-02-05-dbt-project-checklist.md
+++ b/website/blog/2021-02-05-dbt-project-checklist.md
@@ -156,7 +156,7 @@ This post is the checklist I created to guide our internal work, and I’m shari
 
 **Useful links**
 
-* [Version control](/guides/legacy/best-practices#version-control-your-dbt-project)
+* [Version control](/best-practices/best-practices#version-control-your-dbt-project)
 * [dbt Labs' PR Template](/blog/analytics-pull-request-template)
 
 ## ✅ Documentation
diff --git a/website/blog/2022-08-12-how-we-shaved-90-minutes-off-long-running-model.md b/website/blog/2022-08-12-how-we-shaved-90-minutes-off-long-running-model.md
index 020a48c763f..4e4b667dc9d 100644
--- a/website/blog/2022-08-12-how-we-shaved-90-minutes-off-long-running-model.md
+++ b/website/blog/2022-08-12-how-we-shaved-90-minutes-off-long-running-model.md
@@ -286,7 +286,7 @@ Developing an analytic code base is an ever-evolving process. What worked well w
 
 4. **Test on representative data**
 
-   Testing on a [subset of data](https://docs.getdbt.com/guides/legacy/best-practices#limit-the-data-processed-when-in-development) is a great general practice. It allows you to iterate quickly, and doesn’t waste resources. However, there are times when you need to test on a larger dataset for problems like disk spillage to come to the fore. Testing on large data is hard and expensive, so make sure you have a good idea of the solution before you commit to this step.
+   Testing on a [subset of data](https://docs.getdbt.com/best-practices/best-practices#limit-the-data-processed-when-in-development) is a great general practice. It allows you to iterate quickly, and doesn’t waste resources. However, there are times when you need to test on a larger dataset for problems like disk spillage to come to the fore. Testing on large data is hard and expensive, so make sure you have a good idea of the solution before you commit to this step.
 
 5. **Repeat**
 
diff --git a/website/docs/guides/migration/versions/08-upgrading-to-v1.0.md b/website/docs/guides/migration/versions/08-upgrading-to-v1.0.md
index 9fc7991c087..0ef9b8029d6 100644
--- a/website/docs/guides/migration/versions/08-upgrading-to-v1.0.md
+++ b/website/docs/guides/migration/versions/08-upgrading-to-v1.0.md
@@ -68,5 +68,5 @@ Several under-the-hood changes from past minor versions, tagged with deprecation
 - [Parsing](/reference/parsing): partial parsing and static parsing have been turned on by default.
 - [Global configs](/reference/global-configs/about-global-configs) have been standardized. Related updates to [global CLI flags](/reference/global-cli-flags) and [`profiles.yml`](/docs/core/connect-data-platform/profiles.yml).
 - [The `init` command](/reference/commands/init) has a whole new look and feel. It's no longer just for first-time users.
-- Add `result:` subselectors for smarter reruns when dbt models have errors and tests fail. See examples: [Pro-tips for Workflows](/guides/legacy/best-practices#pro-tips-for-workflows)
+- Add `result:` subselectors for smarter reruns when dbt models have errors and tests fail. See examples: [Pro-tips for Workflows](/best-practices/best-practices#pro-tips-for-workflows)
 - Secret-prefixed [env vars](/reference/dbt-jinja-functions/env_var) are now allowed only in `profiles.yml` + `packages.yml`
diff --git a/website/docs/reference/node-selection/syntax.md b/website/docs/reference/node-selection/syntax.md
index bb2aeefd742..34085d339e6 100644
--- a/website/docs/reference/node-selection/syntax.md
+++ b/website/docs/reference/node-selection/syntax.md
@@ -96,7 +96,7 @@ by comparing code in the current project against the state manifest.
 - [Deferring](/reference/node-selection/defer) to another environment, whereby dbt can identify upstream, unselected resources that don't exist in your current environment and instead "defer" their references to the environment provided by the state manifest.
 - The [`dbt clone` command](/reference/commands/clone), whereby dbt can clone nodes based on their location in the manifest provided to the `--state` flag.
 
-Together, the `state:` selector and deferral enable ["slim CI"](/guides/legacy/best-practices#run-only-modified-models-to-test-changes-slim-ci). We expect to add more features in future releases that can leverage artifacts passed to the `--state` flag.
+Together, the `state:` selector and deferral enable ["slim CI"](/best-practices/best-practices#run-only-modified-models-to-test-changes-slim-ci). We expect to add more features in future releases that can leverage artifacts passed to the `--state` flag.
 
 ### Establishing state
 
@@ -190,7 +190,7 @@
 dbt build --select "source_status:fresher+"
 ```
 
-For more example commands, refer to [Pro-tips for workflows](/guides/legacy/best-practices.md#pro-tips-for-workflows).
+For more example commands, refer to [Pro-tips for workflows](/best-practices/best-practices.md#pro-tips-for-workflows).
 
 ### The "source_status" status
 
diff --git a/website/docs/sql-reference/clauses/sql-limit.md b/website/docs/sql-reference/clauses/sql-limit.md
index 74cc2e12123..d17affdef45 100644
--- a/website/docs/sql-reference/clauses/sql-limit.md
+++ b/website/docs/sql-reference/clauses/sql-limit.md
@@ -51,7 +51,7 @@ This simple query using the [Jaffle Shop’s](https://github.com/dbt-labs/jaffle
 After ensuring that this is the result you want from this query, you can omit the LIMIT in your final data model.
 
 :::tip Save money and time by limiting data in development
-You could limit your data used for development by manually adding a LIMIT statement, a WHERE clause to your query, or by using a [dbt macro to automatically limit data based](https://docs.getdbt.com/guides/legacy/best-practices#limit-the-data-processed-when-in-development) on your development environment to help reduce your warehouse usage during dev periods.
+You could limit your data used for development by manually adding a LIMIT statement, a WHERE clause to your query, or by using a [dbt macro to automatically limit data based](https://docs.getdbt.com/best-practices/best-practices#limit-the-data-processed-when-in-development) on your development environment to help reduce your warehouse usage during dev periods.
 :::
 
 ## LIMIT syntax in Snowflake, Databricks, BigQuery, and Redshift
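
For reviewers who want to try the `result:` and `state:` selectors the retargeted links document, here is a minimal sketch of the rerun and slim CI workflows. These are standard dbt CLI invocations; `path/to/prod/artifacts` is a placeholder for wherever a previous invocation wrote its `manifest.json` and `run_results.json`, not a real project layout:

```shell
# Rerun the models that errored in the previous invocation,
# plus everything downstream of them
dbt run --select "result:error+" --state path/to/prod/artifacts

# Rerun only the tests that failed in the previous invocation
dbt test --select "result:fail" --state path/to/prod/artifacts

# "Slim CI": build only new or modified resources (and their children),
# deferring references to unselected upstream models to the production state
dbt build --select "state:modified+" --defer --state path/to/prod/artifacts
```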
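Likewise, the sql-limit.md tip links to a macro-based pattern for limiting data in development. Below is a minimal sketch of the idea, inlined rather than wrapped in a macro; the `stg_orders` model, the `ordered_at` column, and the `dev` target name are assumptions for illustration, and `dateadd` is the Snowflake/Redshift spelling of the date-math function:

```sql
select *
from {{ ref('stg_orders') }}

-- Hypothetical dev-only filter: build against a few days of data while
-- developing, and against the full table everywhere else.
-- `target.name` comes from the active profile; 'dev' is an assumption here.
{% if target.name == 'dev' %}
where ordered_at >= dateadd('day', -3, current_date)
{% endif %}
```

The best-practices page the patch links to describes wrapping this conditional in a macro so every model can reuse it; the inline version above only shows the underlying mechanism.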