diff --git a/website/blog/2021-11-22-dbt-labs-pr-template.md b/website/blog/2021-11-22-dbt-labs-pr-template.md
index 40d4960ac18..439a02371ec 100644
--- a/website/blog/2021-11-22-dbt-labs-pr-template.md
+++ b/website/blog/2021-11-22-dbt-labs-pr-template.md
@@ -70,7 +70,7 @@ Checking for things like modularity and 1:1 relationships between sources and st

 #### Validation of models:

-This section should show something to confirm that your model is doing what you intended it to do. This could be a [dbt test](/docs/build/tests) like uniqueness or not null, or could be an ad-hoc query that you wrote to validate your data. Here is a screenshot from a test run on a local development branch:
+This section should show something to confirm that your model is doing what you intended it to do. This could be a [dbt test](/docs/build/data-tests) like uniqueness or not null, or could be an ad-hoc query that you wrote to validate your data. Here is a screenshot from a test run on a local development branch:

 ![test validation](/img/blog/pr-template-test-validation.png "dbt test validation")

diff --git a/website/blog/2021-11-22-primary-keys.md b/website/blog/2021-11-22-primary-keys.md
index 84c92055eb0..d5f87cddd94 100644
--- a/website/blog/2021-11-22-primary-keys.md
+++ b/website/blog/2021-11-22-primary-keys.md
@@ -51,7 +51,7 @@ In the days before testing your data was commonplace, you often found out that y

 ## How to test primary keys with dbt

-Today, you can add two simple [dbt tests](/docs/build/tests) onto your primary keys and feel secure that you are going to catch the vast majority of problems in your data.
+Today, you can add two simple [dbt tests](/docs/build/data-tests) onto your primary keys and feel secure that you are going to catch the vast majority of problems in your data.
 Not surprisingly, these two tests correspond to the two most common errors found on your primary keys, and are usually the first tests that teams testing data with dbt implement:

diff --git a/website/blog/2021-11-29-dbt-airflow-spiritual-alignment.md b/website/blog/2021-11-29-dbt-airflow-spiritual-alignment.md
index b179c0f5c7c..d20c7d139d0 100644
--- a/website/blog/2021-11-29-dbt-airflow-spiritual-alignment.md
+++ b/website/blog/2021-11-29-dbt-airflow-spiritual-alignment.md
@@ -90,7 +90,7 @@ So instead of getting bogged down in defining roles, let’s focus on hard skill

 The common skills needed for implementing any flavor of dbt (Core or Cloud) are:

 * SQL: ‘nuff said
-* YAML: required to generate config files for [writing tests on data models](/docs/build/tests)
+* YAML: required to generate config files for [writing tests on data models](/docs/build/data-tests)
 * [Jinja](/guides/using-jinja): allows you to write DRY code (using [macros](/docs/build/jinja-macros), for loops, if statements, etc)

 YAML + Jinja can be learned pretty quickly, but SQL is the non-negotiable you’ll need to get started.
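For readers skimming this changeset, the two primary-key tests the blog posts above keep linking to are the built-in `unique` and `not_null` generics. A minimal sketch of the YAML they describe (the `orders`/`order_id` names are illustrative, not from the diff):

```yaml
# Hypothetical model and column names -- the two built-in generic tests
# a primary key should always carry: unique and not_null.
models:
  - name: orders
    columns:
      - name: order_id
        tests:
          - unique
          - not_null
```

Running `dbt test` then compiles each entry into a `select` that returns failing rows.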
diff --git a/website/blog/2021-12-05-how-to-build-a-mature-dbt-project-from-scratch.md b/website/blog/2021-12-05-how-to-build-a-mature-dbt-project-from-scratch.md
index 8ea387cf00c..f3a24a0febd 100644
--- a/website/blog/2021-12-05-how-to-build-a-mature-dbt-project-from-scratch.md
+++ b/website/blog/2021-12-05-how-to-build-a-mature-dbt-project-from-scratch.md
@@ -87,7 +87,7 @@ The most important thing we’re introducing when your project is an infant is t

 * Introduce modularity with [{{ ref() }}](/reference/dbt-jinja-functions/ref) and [{{ source() }}](/reference/dbt-jinja-functions/source)
-* [Document](/docs/collaborate/documentation) and [test](/docs/build/tests) your first models
+* [Document](/docs/collaborate/documentation) and [test](/docs/build/data-tests) your first models

 ![image alt text](/img/blog/building-a-mature-dbt-project-from-scratch/image_3.png)

diff --git a/website/blog/2022-04-19-complex-deduplication.md b/website/blog/2022-04-19-complex-deduplication.md
index daacff4eec6..f33e6a8fe35 100644
--- a/website/blog/2022-04-19-complex-deduplication.md
+++ b/website/blog/2022-04-19-complex-deduplication.md
@@ -146,7 +146,7 @@ select * from filter_real_diffs

 > *What happens in this step? You check your data because you are thorough!*

-Good thing dbt has already built this for you. Add a [unique test](/docs/build/tests#generic-tests) to your YAML model block for your `grain_id` in this de-duped staging model, and give it a dbt test!
+Good thing dbt has already built this for you. Add a [unique test](/docs/build/data-tests#generic-data-tests) to your YAML model block for your `grain_id` in this de-duped staging model, and give it a dbt test!
 ```yaml
 models:

diff --git a/website/blog/2022-09-28-analyst-to-ae.md b/website/blog/2022-09-28-analyst-to-ae.md
index 7c8ccaeabec..bf19bbae59e 100644
--- a/website/blog/2022-09-28-analyst-to-ae.md
+++ b/website/blog/2022-09-28-analyst-to-ae.md
@@ -111,7 +111,7 @@ The analyst caught the issue because they have the appropriate context to valida

 An analyst is able to identify which areas do *not* need to be 100% accurate, which means they can also identify which areas *do* need to be 100% accurate.

-> dbt makes it very quick to add [data quality tests](/docs/build/tests). In fact, it’s so quick, that it’ll take an analyst longer to write up what tests they want than it would take for an analyst to completely finish coding them.
+> dbt makes it very quick to add [data quality tests](/docs/build/data-tests). In fact, it’s so quick, that it’ll take an analyst longer to write up what tests they want than it would take for an analyst to completely finish coding them.

 When data quality issues are identified by the business, we often see that analysts are the first ones to be asked:

diff --git a/website/blog/2022-10-19-polyglot-dbt-python-dataframes-and-sql.md b/website/blog/2022-10-19-polyglot-dbt-python-dataframes-and-sql.md
index bab92000a16..694f6ddc105 100644
--- a/website/blog/2022-10-19-polyglot-dbt-python-dataframes-and-sql.md
+++ b/website/blog/2022-10-19-polyglot-dbt-python-dataframes-and-sql.md
@@ -133,9 +133,9 @@ This model tries to parse the raw string value into a Python datetime. When not

 #### Testing the result

-During the build process, dbt will check if any of the values are null. This is using the built-in [`not_null`](https://docs.getdbt.com/docs/building-a-dbt-project/tests#generic-tests) test, which will generate and execute SQL in the data platform.
+During the build process, dbt will check if any of the values are null. This is using the built-in [`not_null`](https://docs.getdbt.com/docs/building-a-dbt-project/tests#generic-data-tests) test, which will generate and execute SQL in the data platform.

-Our initial recommendation for testing Python models is to use [generic](https://docs.getdbt.com/docs/building-a-dbt-project/tests#generic-tests) and [singular](https://docs.getdbt.com/docs/building-a-dbt-project/tests#singular-tests) tests.
+Our initial recommendation for testing Python models is to use [generic](https://docs.getdbt.com/docs/building-a-dbt-project/tests#generic-data-tests) and [singular](https://docs.getdbt.com/docs/building-a-dbt-project/tests#singular-data-tests) tests.

 ```yaml
 version: 2

diff --git a/website/blog/2023-01-24-aggregating-test-failures.md b/website/blog/2023-01-24-aggregating-test-failures.md
index d82c202b376..2319da910a6 100644
--- a/website/blog/2023-01-24-aggregating-test-failures.md
+++ b/website/blog/2023-01-24-aggregating-test-failures.md
@@ -30,7 +30,7 @@ _It should be noted that this framework is for dbt v1.0+ on BigQuery. Small adap

 When we talk about high quality data tests, we aren’t just referencing high quality code, but rather the informational quality of our testing framework and their corresponding error messages. Originally, we theorized that any test that cannot be acted upon is a test that should not be implemented. Later, we realized there is a time and place for tests that should receive attention at a critical mass of failures. All we needed was a higher specificity system: tests should have an explicit severity ranking associated with them, equipped to filter out the noise of common, but low concern, failures. Each test should also mesh into established [RACI](https://project-management.com/understanding-responsibility-assignment-matrix-raci-matrix/) guidelines that state which groups tackle what failures, and what constitutes a critical mass.
-To ensure that tests are always acted upon, we implement tests differently depending on the user groups that must act when a test fails. This led us to have two main classes of tests — Data Integrity Tests (called [Generic Tests](https://docs.getdbt.com/docs/build/tests) in dbt docs) and Context Driven Tests (called [Singular Tests](https://docs.getdbt.com/docs/build/tests#singular-tests) in dbt docs), with varying levels of severity across both test classes.
+To ensure that tests are always acted upon, we implement tests differently depending on the user groups that must act when a test fails. This led us to have two main classes of tests — Data Integrity Tests (called [Generic Tests](https://docs.getdbt.com/docs/build/tests) in dbt docs) and Context Driven Tests (called [Singular Tests](https://docs.getdbt.com/docs/build/tests#singular-data-tests) in dbt docs), with varying levels of severity across both test classes.

 Data Integrity tests (Generic Tests) are simple — they’re tests akin to a uniqueness check or not null constraint. These tests are usually actionable by the data platform team rather than subject matter experts. We define Data Integrity tests in our YAML files, similar to how they are [outlined by dbt’s documentation on generic tests](https://docs.getdbt.com/docs/build/tests). They look something like this —

diff --git a/website/blog/2023-07-03-data-vault-2-0-with-dbt-cloud.md b/website/blog/2023-07-03-data-vault-2-0-with-dbt-cloud.md
index 2a4879ac98d..6b1012a5320 100644
--- a/website/blog/2023-07-03-data-vault-2-0-with-dbt-cloud.md
+++ b/website/blog/2023-07-03-data-vault-2-0-with-dbt-cloud.md
@@ -143,7 +143,7 @@ To help you get started, [we have created a template GitHub project](https://git

 ### Entity Relation Diagrams (ERDs) and dbt

-Data lineage is dbt's strength, but sometimes it's not enough to help you to understand the relationships between Data Vault components like a classic ERD would. There are a few open source packages to visualize the entities in your Data Vault built with dbt. I recommend checking out the [dbterd](https://dbterd.datnguyen.de/1.2/index.html) which turns your [dbt relationship data quality checks](https://docs.getdbt.com/docs/build/tests#generic-tests) into an ERD.
+Data lineage is dbt's strength, but sometimes it's not enough to help you to understand the relationships between Data Vault components like a classic ERD would. There are a few open source packages to visualize the entities in your Data Vault built with dbt. I recommend checking out the [dbterd](https://dbterd.datnguyen.de/1.2/index.html) which turns your [dbt relationship data quality checks](https://docs.getdbt.com/docs/build/tests#generic-data-tests) into an ERD.

 ## Summary

diff --git a/website/docs/best-practices/custom-generic-tests.md b/website/docs/best-practices/custom-generic-tests.md
index f2d84e38853..e96fc864ee6 100644
--- a/website/docs/best-practices/custom-generic-tests.md
+++ b/website/docs/best-practices/custom-generic-tests.md
@@ -1,15 +1,15 @@
 ---
-title: "Writing custom generic tests"
+title: "Writing custom generic data tests"
 id: "writing-custom-generic-tests"
-description: Learn how to define your own custom generic tests.
-displayText: Writing custom generic tests
-hoverSnippet: Learn how to define your own custom generic tests.
+description: Learn how to define your own custom generic data tests.
+displayText: Writing custom generic data tests
+hoverSnippet: Learn how to write your own custom generic data tests.
 ---

-dbt ships with [Not Null](/reference/resource-properties/tests#not-null), [Unique](/reference/resource-properties/tests#unique), [Relationships](/reference/resource-properties/tests#relationships), and [Accepted Values](/reference/resource-properties/tests#accepted-values) generic tests. (These used to be called "schema tests," and you'll still see that name in some places.) Under the hood, these generic tests are defined as `test` blocks (like macros) in a globally accessible dbt project. You can find the source code for these tests in the [global project](https://github.com/dbt-labs/dbt-core/tree/main/core/dbt/include/global_project/macros/generic_test_sql).
+dbt ships with [Not Null](/reference/resource-properties/data-tests#not-null), [Unique](/reference/resource-properties/data-tests#unique), [Relationships](/reference/resource-properties/data-tests#relationships), and [Accepted Values](/reference/resource-properties/data-tests#accepted-values) generic data tests. (These used to be called "schema tests," and you'll still see that name in some places.) Under the hood, these generic data tests are defined as `test` blocks (like macros) in a globally accessible dbt project. You can find the source code for these tests in the [global project](https://github.com/dbt-labs/dbt-core/tree/main/core/dbt/include/global_project/macros/generic_test_sql).

 :::info
-There are tons of generic tests defined in open source packages, such as [dbt-utils](https://hub.getdbt.com/dbt-labs/dbt_utils/latest/) and [dbt-expectations](https://hub.getdbt.com/calogica/dbt_expectations/latest/) — the test you're looking for might already be here!
+There are tons of generic data tests defined in open source packages, such as [dbt-utils](https://hub.getdbt.com/dbt-labs/dbt_utils/latest/) and [dbt-expectations](https://hub.getdbt.com/calogica/dbt_expectations/latest/) — the test you're looking for might already be here!
 :::

 ### Generic tests with standard arguments

diff --git a/website/docs/docs/build/tests.md b/website/docs/docs/build/data-tests.md
similarity index 56%
rename from website/docs/docs/build/tests.md
rename to website/docs/docs/build/data-tests.md
index 3d86dc6a81b..d981d7e272d 100644
--- a/website/docs/docs/build/tests.md
+++ b/website/docs/docs/build/data-tests.md
@@ -1,43 +1,43 @@
 ---
-title: "Add tests to your DAG"
-sidebar_label: "Tests"
-description: "Read this tutorial to learn how to use tests when building in dbt."
+title: "Add data tests to your DAG"
+sidebar_label: "Data tests"
+description: "Read this tutorial to learn how to use data tests when building in dbt."
 search_weight: "heavy"
-id: "tests"
+id: "data-tests"
 keywords:
   - test, tests, testing, dag
 ---

 ## Related reference docs
 * [Test command](/reference/commands/test)
-* [Test properties](/reference/resource-properties/tests)
-* [Test configurations](/reference/test-configs)
+* [Data test properties](/reference/resource-properties/data-tests)
+* [Data test configurations](/reference/data-test-configs)
 * [Test selection examples](/reference/node-selection/test-selection-examples)

 ## Overview

-Tests are assertions you make about your models and other resources in your dbt project (e.g. sources, seeds and snapshots). When you run `dbt test`, dbt will tell you if each test in your project passes or fails.
+Data tests are assertions you make about your models and other resources in your dbt project (e.g. sources, seeds and snapshots). When you run `dbt test`, dbt will tell you if each test in your project passes or fails.

-You can use tests to improve the integrity of the SQL in each model by making assertions about the results generated. Out of the box, you can test whether a specified column in a model only contains non-null values, unique values, or values that have a corresponding value in another model (for example, a `customer_id` for an `order` corresponds to an `id` in the `customers` model), and values from a specified list. You can extend tests to suit business logic specific to your organization – any assertion that you can make about your model in the form of a select query can be turned into a test.
+You can use data tests to improve the integrity of the SQL in each model by making assertions about the results generated. Out of the box, you can test whether a specified column in a model only contains non-null values, unique values, or values that have a corresponding value in another model (for example, a `customer_id` for an `order` corresponds to an `id` in the `customers` model), and values from a specified list. You can extend data tests to suit business logic specific to your organization – any assertion that you can make about your model in the form of a select query can be turned into a data test.

-Both types of tests return a set of failing records. Previously, generic/schema tests returned a numeric value representing failures. Generic tests (f.k.a. schema tests) are defined using `test` blocks instead of macros prefixed `test_`.
+Data tests return a set of failing records. Generic data tests (f.k.a. schema tests) are defined using `test` blocks.

-Like almost everything in dbt, tests are SQL queries. In particular, they are `select` statements that seek to grab "failing" records, ones that disprove your assertion. If you assert that a column is unique in a model, the test query selects for duplicates; if you assert that a column is never null, the test seeks after nulls. If the test returns zero failing rows, it passes, and your assertion has been validated.
+Like almost everything in dbt, data tests are SQL queries. In particular, they are `select` statements that seek to grab "failing" records, ones that disprove your assertion. If you assert that a column is unique in a model, the test query selects for duplicates; if you assert that a column is never null, the test seeks after nulls. If the data test returns zero failing rows, it passes, and your assertion has been validated.

-There are two ways of defining tests in dbt:
-* A **singular** test is testing in its simplest form: If you can write a SQL query that returns failing rows, you can save that query in a `.sql` file within your [test directory](/reference/project-configs/test-paths). It's now a test, and it will be executed by the `dbt test` command.
-* A **generic** test is a parameterized query that accepts arguments. The test query is defined in a special `test` block (like a [macro](jinja-macros)). Once defined, you can reference the generic test by name throughout your `.yml` files—define it on models, columns, sources, snapshots, and seeds. dbt ships with four generic tests built in, and we think you should use them!
+There are two ways of defining data tests in dbt:
+* A **singular** data test is testing in its simplest form: If you can write a SQL query that returns failing rows, you can save that query in a `.sql` file within your [test directory](/reference/project-configs/test-paths). It's now a data test, and it will be executed by the `dbt test` command.
+* A **generic** data test is a parameterized query that accepts arguments. The test query is defined in a special `test` block (like a [macro](jinja-macros)). Once defined, you can reference the generic test by name throughout your `.yml` files—define it on models, columns, sources, snapshots, and seeds. dbt ships with four generic data tests built in, and we think you should use them!

-Defining tests is a great way to confirm that your code is working correctly, and helps prevent regressions when your code changes. Because you can use them over and over again, making similar assertions with minor variations, generic tests tend to be much more common—they should make up the bulk of your dbt testing suite. That said, both ways of defining tests have their time and place.
+Defining data tests is a great way to confirm that your outputs and inputs are as expected, and helps prevent regressions when your code changes. Because you can use them over and over again, making similar assertions with minor variations, generic data tests tend to be much more common—they should make up the bulk of your dbt data testing suite. That said, both ways of defining data tests have their time and place.

-:::tip Creating your first tests
+:::tip Creating your first data tests
 If you're new to dbt, we recommend that you check out our [quickstart guide](/guides) to build your first dbt project with models and tests.
 :::

-## Singular tests
+## Singular data tests

-The simplest way to define a test is by writing the exact SQL that will return failing records. We call these "singular" tests, because they're one-off assertions usable for a single purpose.
+The simplest way to define a data test is by writing the exact SQL that will return failing records. We call these "singular" data tests, because they're one-off assertions usable for a single purpose.

-These tests are defined in `.sql` files, typically in your `tests` directory (as defined by your [`test-paths` config](/reference/project-configs/test-paths)). You can use Jinja (including `ref` and `source`) in the test definition, just like you can when creating models. Each `.sql` file contains one `select` statement, and it defines one test:
+These tests are defined in `.sql` files, typically in your `tests` directory (as defined by your [`test-paths` config](/reference/project-configs/test-paths)). You can use Jinja (including `ref` and `source`) in the test definition, just like you can when creating models. Each `.sql` file contains one `select` statement, and it defines one data test:
@@ -56,10 +56,10 @@ having not(total_amount >= 0)

 The name of this test is the name of the file: `assert_total_payment_amount_is_positive`. Simple enough.

-Singular tests are easy to write—so easy that you may find yourself writing the same basic structure over and over, only changing the name of a column or model. By that point, the test isn't so singular! In that case, we recommend...
+Singular data tests are easy to write—so easy that you may find yourself writing the same basic structure over and over, only changing the name of a column or model. By that point, the test isn't so singular! In that case, we recommend...

-## Generic tests
-Certain tests are generic: they can be reused over and over again. A generic test is defined in a `test` block, which contains a parametrized query and accepts arguments. It might look like:
+## Generic data tests
+Certain data tests are generic: they can be reused over and over again. A generic data test is defined in a `test` block, which contains a parametrized query and accepts arguments. It might look like:

 ```sql
 {% test not_null(model, column_name) %}
@@ -77,7 +77,7 @@ You'll notice that there are two arguments, `model` and `column_name`, which are

 If this is your first time working with adding properties to a resource, check out the docs on [declaring properties](/reference/configs-and-properties).
 :::

-Out of the box, dbt ships with four generic tests already defined: `unique`, `not_null`, `accepted_values` and `relationships`. Here's a full example using those tests on an `orders` model:
+Out of the box, dbt ships with four generic data tests already defined: `unique`, `not_null`, `accepted_values` and `relationships`. Here's a full example using those tests on an `orders` model:

 ```yml
 version: 2
@@ -100,19 +100,19 @@ models:
           field: id
 ```

-In plain English, these tests translate to:
+In plain English, these data tests translate to:
 * `unique`: the `order_id` column in the `orders` model should be unique
 * `not_null`: the `order_id` column in the `orders` model should not contain null values
 * `accepted_values`: the `status` column in the `orders` should be one of `'placed'`, `'shipped'`, `'completed'`, or `'returned'`
 * `relationships`: each `customer_id` in the `orders` model exists as an `id` in the `customers` (also known as referential integrity)

-Behind the scenes, dbt constructs a `select` query for each test, using the parametrized query from the generic test block. These queries return the rows where your assertion is _not_ true; if the test returns zero rows, your assertion passes.
+Behind the scenes, dbt constructs a `select` query for each data test, using the parametrized query from the generic test block. These queries return the rows where your assertion is _not_ true; if the test returns zero rows, your assertion passes.

-You can find more information about these tests, and additional configurations (including [`severity`](/reference/resource-configs/severity) and [`tags`](/reference/resource-configs/tags)) in the [reference section](/reference/resource-properties/tests).
+You can find more information about these data tests, and additional configurations (including [`severity`](/reference/resource-configs/severity) and [`tags`](/reference/resource-configs/tags)) in the [reference section](/reference/resource-properties/data-tests).

-### More generic tests
+### More generic data tests

-Those four tests are enough to get you started. You'll quickly find you want to use a wider variety of tests—a good thing! You can also install generic tests from a package, or write your own, to use (and reuse) across your dbt project. Check out the [guide on custom generic tests](/best-practices/writing-custom-generic-tests) for more information.
+Those four tests are enough to get you started. You'll quickly find you want to use a wider variety of tests—a good thing! You can also install generic data tests from a package, or write your own, to use (and reuse) across your dbt project. Check out the [guide on custom generic tests](/best-practices/writing-custom-generic-tests) for more information.

 :::info
 There are generic tests defined in some open source packages, such as [dbt-utils](https://hub.getdbt.com/dbt-labs/dbt_utils/latest/) and [dbt-expectations](https://hub.getdbt.com/calogica/dbt_expectations/latest/) — skip ahead to the docs on [packages](/docs/build/packages) to learn more!
@@ -241,7 +241,7 @@ where {{ column_name }} is null

 ## Storing test failures

-Normally, a test query will calculate failures as part of its execution. If you set the optional `--store-failures` flag, the [`store_failures`](/reference/resource-configs/store_failures), or the [`store_failures_as`](/reference/resource-configs/store_failures_as) configs, dbt will first save the results of a test query to a table in the database, and then query that table to calculate the number of failures.
+Normally, a data test query will calculate failures as part of its execution. If you set the optional `--store-failures` flag, the [`store_failures`](/reference/resource-configs/store_failures), or the [`store_failures_as`](/reference/resource-configs/store_failures_as) configs, dbt will first save the results of a test query to a table in the database, and then query that table to calculate the number of failures.
 This workflow allows you to query and examine failing records much more quickly in development:

diff --git a/website/docs/docs/build/jinja-macros.md b/website/docs/docs/build/jinja-macros.md
index 135db740f75..074e648d410 100644
--- a/website/docs/docs/build/jinja-macros.md
+++ b/website/docs/docs/build/jinja-macros.md
@@ -23,7 +23,7 @@ Using Jinja turns your dbt project into a programming environment for SQL, givin

 In fact, if you've used the [`{{ ref() }}` function](/reference/dbt-jinja-functions/ref), you're already using Jinja!

-Jinja can be used in any SQL in a dbt project, including [models](/docs/build/sql-models), [analyses](/docs/build/analyses), [tests](/docs/build/tests), and even [hooks](/docs/build/hooks-operations).
+Jinja can be used in any SQL in a dbt project, including [models](/docs/build/sql-models), [analyses](/docs/build/analyses), [tests](/docs/build/data-tests), and even [hooks](/docs/build/hooks-operations).

 :::info
 Ready to get started with Jinja and macros?

diff --git a/website/docs/docs/build/projects.md b/website/docs/docs/build/projects.md
index a54f6042cce..c5e08177dee 100644
--- a/website/docs/docs/build/projects.md
+++ b/website/docs/docs/build/projects.md
@@ -14,7 +14,7 @@ At a minimum, all a project needs is the `dbt_project.yml` project configuration
 | [models](/docs/build/models) | Each model lives in a single file and contains logic that either transforms raw data into a dataset that is ready for analytics or, more often, is an intermediate step in such a transformation. |
 | [snapshots](/docs/build/snapshots) | A way to capture the state of your mutable tables so you can refer to it later. |
 | [seeds](/docs/build/seeds) | CSV files with static data that you can load into your data platform with dbt. |
-| [tests](/docs/build/tests) | SQL queries that you can write to test the models and resources in your project. |
+| [data tests](/docs/build/data-tests) | SQL queries that you can write to test the models and resources in your project. |
 | [macros](/docs/build/jinja-macros) | Blocks of code that you can reuse multiple times. |
 | [docs](/docs/collaborate/documentation) | Docs for your project that you can build. |
 | [sources](/docs/build/sources) | A way to name and describe the data loaded into your warehouse by your Extract and Load tools. |

diff --git a/website/docs/docs/build/sources.md b/website/docs/docs/build/sources.md
index a657b6257c9..466bcedc688 100644
--- a/website/docs/docs/build/sources.md
+++ b/website/docs/docs/build/sources.md
@@ -88,10 +88,10 @@ Using the `{{ source () }}` function also creates a dependency between the model

 ### Testing and documenting sources

 You can also:
-- Add tests to sources
+- Add data tests to sources
 - Add descriptions to sources, that get rendered as part of your documentation site

-These should be familiar concepts if you've already added tests and descriptions to your models (if not check out the guides on [testing](/docs/build/tests) and [documentation](/docs/collaborate/documentation)).
+These should be familiar concepts if you've already added tests and descriptions to your models (if not check out the guides on [testing](/docs/build/data-tests) and [documentation](/docs/collaborate/documentation)).

diff --git a/website/docs/docs/build/sql-models.md b/website/docs/docs/build/sql-models.md
index 237ac84c0c2..a0dd174278b 100644
--- a/website/docs/docs/build/sql-models.md
+++ b/website/docs/docs/build/sql-models.md
@@ -262,7 +262,7 @@ Additionally, the `ref` function encourages you to write modular transformations

 ## Testing and documenting models

-You can also document and test models — skip ahead to the section on [testing](/docs/build/tests) and [documentation](/docs/collaborate/documentation) for more information.
+You can also document and test models — skip ahead to the section on [testing](/docs/build/data-tests) and [documentation](/docs/collaborate/documentation) for more information.

 ## Additional FAQs

diff --git a/website/docs/docs/collaborate/documentation.md b/website/docs/docs/collaborate/documentation.md
index 16a4e610c70..1a989806851 100644
--- a/website/docs/docs/collaborate/documentation.md
+++ b/website/docs/docs/collaborate/documentation.md
@@ -15,7 +15,7 @@ pagination_prev: null

 ## Assumed knowledge

-* [Tests](/docs/build/tests)
+* [Tests](/docs/build/data-tests)

 ## Overview

@@ -32,7 +32,7 @@ Here's an example docs site:

 ## Adding descriptions to your project

-To add descriptions to your project, use the `description:` key in the same files where you declare [tests](/docs/build/tests), like so:
+To add descriptions to your project, use the `description:` key in the same files where you declare [tests](/docs/build/data-tests), like so:

diff --git a/website/docs/docs/collaborate/explore-projects.md b/website/docs/docs/collaborate/explore-projects.md
index 78fe6f45cc7..ed5dee93317 100644
--- a/website/docs/docs/collaborate/explore-projects.md
+++ b/website/docs/docs/collaborate/explore-projects.md
@@ -149,7 +149,7 @@ An example of the details you might get for a model:
 - **Lineage** graph — The model’s lineage graph that you can interact with. The graph includes one parent node and one child node from the model. Click the Expand icon in the graph's upper right corner to view the model in full lineage graph mode.
 - **Description** section — A [description of the model](/docs/collaborate/documentation#adding-descriptions-to-your-project).
 - **Recent** section — Information on the last time the model ran, how long it ran for, whether the run was successful, the job ID, and the run ID.
-  - **Tests** section — [Tests](/docs/build/tests) for the model, including a status indicator for the latest test status. A :white_check_mark: denotes a passing test.
+ - **Tests** section — [Tests](/docs/build/data-tests) for the model, including a status indicator for the latest test status. A :white_check_mark: denotes a passing test. - **Details** section — Key properties like the model’s relation name (for example, how it’s represented and how you can query it in the data platform: `database.schema.identifier`); model governance attributes like access, group, and if contracted; and more. - **Relationships** section — The nodes the model **Depends On**, is **Referenced by**, and (if applicable) is **Used by** for projects that have declared the models' project as a dependency. - **Code** tab — The source code and compiled code for the model. diff --git a/website/docs/docs/collaborate/govern/model-contracts.md b/website/docs/docs/collaborate/govern/model-contracts.md index 342d86c1a77..8e7598f8e3b 100644 --- a/website/docs/docs/collaborate/govern/model-contracts.md +++ b/website/docs/docs/collaborate/govern/model-contracts.md @@ -183,9 +183,9 @@ Any model meeting the criteria described above _can_ define a contract. We recom A model's contract defines the **shape** of the returned dataset. If the model's logic or input data doesn't conform to that shape, the model does not build. -[Tests](/docs/build/tests) are a more flexible mechanism for validating the content of your model _after_ it's built. So long as you can write the query, you can run the test. Tests are more configurable, such as with [custom severity thresholds](/reference/resource-configs/severity). They are easier to debug after finding failures, because you can query the already-built model, or [store the failing records in the data warehouse](/reference/resource-configs/store_failures). +[Data tests](/docs/build/data-tests) are a more flexible mechanism for validating the content of your model _after_ it's built. So long as you can write the query, you can run the data test. 
Data tests are more configurable, such as with [custom severity thresholds](/reference/resource-configs/severity). They are easier to debug after finding failures, because you can query the already-built model, or [store the failing records in the data warehouse](/reference/resource-configs/store_failures). -In some cases, you can replace a test with its equivalent constraint. This has the advantage of guaranteeing the validation at build time, and it probably requires less compute (cost) in your data platform. The prerequisites for replacing a test with a constraint are: +In some cases, you can replace a data test with its equivalent constraint. This has the advantage of guaranteeing the validation at build time, and it probably requires less compute (cost) in your data platform. The prerequisites for replacing a data test with a constraint are: - Making sure that your data platform can support and enforce the constraint that you need. Most platforms only enforce `not_null`. - Materializing your model as `table` or `incremental` (**not** `view` or `ephemeral`). - Defining a full contract for this model by specifying the `name` and `data_type` of each column. diff --git a/website/docs/docs/dbt-versions/core-upgrade/03-upgrading-to-dbt-utils-v1.0.md b/website/docs/docs/dbt-versions/core-upgrade/03-upgrading-to-dbt-utils-v1.0.md index a7b302c9a58..229a54627fc 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/03-upgrading-to-dbt-utils-v1.0.md +++ b/website/docs/docs/dbt-versions/core-upgrade/03-upgrading-to-dbt-utils-v1.0.md @@ -82,7 +82,7 @@ models: # ...with this... where: "created_at > '2018-12-31'" ``` -**Note** — This may cause some tests to get the same autogenerated names. To resolve this, you can [define a custom name for a test](/reference/resource-properties/tests#define-a-custom-name-for-one-test). +**Note** — This may cause some tests to get the same autogenerated names. 
To resolve this, you can [define a custom name for a test](/reference/resource-properties/data-tests#define-a-custom-name-for-one-test). - The deprecated `unique_where` and `not_null_where` tests have been removed, because [where is now available natively to all tests](https://docs.getdbt.com/reference/resource-configs/where). To migrate, find and replace `dbt_utils.unique_where` with `unique` and `dbt_utils.not_null_where` with `not_null`. - `dbt_utils.current_timestamp()` is replaced by `dbt.current_timestamp()`. - Note that Postgres and Snowflake’s implementation of `dbt.current_timestamp()` differs from the old `dbt_utils` one ([full details here](https://github.com/dbt-labs/dbt-utils/pull/597#issuecomment-1231074577)). If you use Postgres or Snowflake and need identical backwards-compatible behavior, use `dbt.current_timestamp_backcompat()`. This discrepancy will hopefully be reconciled in a future version of dbt Core. diff --git a/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.1.md b/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.1.md index 12f0f42354a..868f3c7ed04 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.1.md +++ b/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.1.md @@ -43,7 +43,7 @@ Expected a schema version of "https://schemas.getdbt.com/dbt/manifest/v5.json" i [**Incremental models**](/docs/build/incremental-models) can now accept a list of multiple columns as their `unique_key`, for models that need a combination of columns to uniquely identify each row. This is supported by the most common data warehouses, for incremental strategies that make use of the `unique_key` config (`merge` and `delete+insert`). -[**Generic tests**](/reference/resource-properties/tests) can define custom names. This is useful to "prettify" the synthetic name that dbt applies automatically. 
It's needed to disambiguate the case when the same generic test is defined multiple times with different configurations. +[**Generic tests**](/reference/resource-properties/data-tests) can define custom names. This is useful to "prettify" the synthetic name that dbt applies automatically. It's needed to disambiguate the case when the same generic test is defined multiple times with different configurations. [**Sources**](/reference/source-properties) can define configuration inline with other `.yml` properties, just like other resource types. The only supported config is `enabled`; you can use this to dynamically enable/disable sources based on environment or package variables. diff --git a/website/docs/docs/dbt-versions/core-upgrade/08-upgrading-to-v1.0.md b/website/docs/docs/dbt-versions/core-upgrade/08-upgrading-to-v1.0.md index 6e437638ef6..0460186551d 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/08-upgrading-to-v1.0.md +++ b/website/docs/docs/dbt-versions/core-upgrade/08-upgrading-to-v1.0.md @@ -34,7 +34,7 @@ dbt Core major version 1.0 includes a number of breaking changes! Wherever possi ### Tests -The two **test types** are now "singular" and "generic" (instead of "data" and "schema", respectively). The `test_type:` selection method accepts `test_type:singular` and `test_type:generic`. (It will also accept `test_type:schema` and `test_type:data` for backwards compatibility.) **Not backwards compatible:** The `--data` and `--schema` flags to dbt test are no longer supported, and tests no longer have the tags `'data'` and `'schema'` automatically applied. Updated docs: [tests](/docs/build/tests), [test selection](/reference/node-selection/test-selection-examples), [selection methods](/reference/node-selection/methods). +The two **test types** are now "singular" and "generic" (instead of "data" and "schema", respectively). The `test_type:` selection method accepts `test_type:singular` and `test_type:generic`. 
(It will also accept `test_type:schema` and `test_type:data` for backwards compatibility.) **Not backwards compatible:** The `--data` and `--schema` flags to dbt test are no longer supported, and tests no longer have the tags `'data'` and `'schema'` automatically applied. Updated docs: [tests](/docs/build/data-tests), [test selection](/reference/node-selection/test-selection-examples), [selection methods](/reference/node-selection/methods). The `greedy` flag/property has been renamed to **`indirect_selection`**, which is now eager by default. **Note:** This reverts test selection to its pre-v0.20 behavior by default. `dbt test -s my_model` _will_ select multi-parent tests, such as `relationships`, that depend on unselected resources. To achieve the behavior change in v0.20 + v0.21, set `--indirect-selection=cautious` on the CLI or `indirect_selection: cautious` in YAML selectors. Updated docs: [test selection examples](/reference/node-selection/test-selection-examples), [yaml selectors](/reference/node-selection/yaml-selectors). diff --git a/website/docs/docs/dbt-versions/core-upgrade/10-upgrading-to-v0.20.md b/website/docs/docs/dbt-versions/core-upgrade/10-upgrading-to-v0.20.md index 9ff5695d5dc..be6054087b3 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/10-upgrading-to-v0.20.md +++ b/website/docs/docs/dbt-versions/core-upgrade/10-upgrading-to-v0.20.md @@ -29,9 +29,9 @@ dbt Core v0.20 has reached the end of critical support. 
No new patch versions wi ### Tests -- [Building a dbt Project: tests](/docs/build/tests) -- [Test Configs](/reference/test-configs) -- [Test properties](/reference/resource-properties/tests) +- [Building a dbt Project: tests](/docs/build/data-tests) +- [Test Configs](/reference/data-test-configs) +- [Test properties](/reference/resource-properties/data-tests) - [Node Selection](/reference/node-selection/syntax) (with updated [test selection examples](/reference/node-selection/test-selection-examples)) - [Writing custom generic tests](/best-practices/writing-custom-generic-tests) diff --git a/website/docs/docs/introduction.md b/website/docs/docs/introduction.md index c575a9ae657..08564aeb2f0 100644 --- a/website/docs/docs/introduction.md +++ b/website/docs/docs/introduction.md @@ -56,7 +56,7 @@ As a dbt user, your main focus will be on writing models (i.e. select queries) t | Use a code compiler | SQL files can contain Jinja, a lightweight templating language. Using Jinja in SQL provides a way to use control structures in your queries. For example, `if` statements and `for` loops. It also enables repeated SQL to be shared through `macros`. Read more about [Macros](/docs/build/jinja-macros).| | Determine the order of model execution | Often, when transforming data, it makes sense to do so in a staged approach. dbt provides a mechanism to implement transformations in stages through the [ref function](/reference/dbt-jinja-functions/ref). Rather than selecting from existing tables and views in your warehouse, you can select from another model.| | Document your dbt project | dbt provides a mechanism to write, version-control, and share documentation for your dbt models. You can write descriptions (in plain text or markdown) for each model and field. In dbt Cloud, you can auto-generate the documentation when your dbt project runs. 
Read more about the [Documentation](/docs/collaborate/documentation).| -| Test your models | Tests provide a way to improve the integrity of the SQL in each model by making assertions about the results generated by a model. Read more about writing tests for your models [Testing](/docs/build/tests)| +| Test your models | Tests provide a way to improve the integrity of the SQL in each model by making assertions about the results generated by a model. Read more about writing tests for your models in [Testing](/docs/build/data-tests).| | Manage packages | dbt ships with a package manager, which allows analysts to use and publish both public and private repositories of dbt code which can then be referenced by others. Read more about [Package Management](/docs/build/packages). | | Load seed files| Often in analytics, raw values need to be mapped to a more readable value (for example, converting a country-code to a country name) or enriched with static or infrequently changing data. These data sources, known as seed files, can be saved as a CSV file in your `project` and loaded into your data warehouse using the `seed` command. Read more about [Seeds](/docs/build/seeds).| | Snapshot data | Often, records in a data source are mutable, in that they change over time. This can be difficult to handle in analytics if you want to reconstruct historic values. 
dbt provides a mechanism to snapshot raw data for a point in time, through use of [snapshots](/docs/build/snapshots).| diff --git a/website/docs/faqs/Models/specifying-column-types.md b/website/docs/faqs/Models/specifying-column-types.md index 8e8379c4ec1..904c616d89a 100644 --- a/website/docs/faqs/Models/specifying-column-types.md +++ b/website/docs/faqs/Models/specifying-column-types.md @@ -38,6 +38,6 @@ So long as your model queries return the correct column type, the table you crea To define additional column options: -* Rather than enforcing uniqueness and not-null constraints on your column, use dbt's [testing](/docs/build/tests) functionality to check that your assertions about your model hold true. +* Rather than enforcing uniqueness and not-null constraints on your column, use dbt's [data testing](/docs/build/data-tests) functionality to check that your assertions about your model hold true. * Rather than creating default values for a column, use SQL to express defaults (e.g. `coalesce(updated_at, current_timestamp()) as updated_at`) * In edge-cases where you _do_ need to alter a column (e.g. column-level encoding on Redshift), consider implementing this via a [post-hook](/reference/resource-configs/pre-hook-post-hook). 
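The first bullet in the hunk above — using dbt's data testing functionality in place of column constraints — can be sketched in a model properties file like this (the model and column names here are hypothetical):

```yaml
version: 2

models:
  - name: customers          # hypothetical model
    columns:
      - name: customer_id
        tests:               # checked when you run `dbt test`, instead of enforcing column constraints
          - unique
          - not_null
```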
diff --git a/website/docs/faqs/Project/properties-not-in-config.md b/website/docs/faqs/Project/properties-not-in-config.md index d1aea32b687..76de58404a9 100644 --- a/website/docs/faqs/Project/properties-not-in-config.md +++ b/website/docs/faqs/Project/properties-not-in-config.md @@ -16,7 +16,7 @@ Certain properties are special, because: These properties are: - [`description`](/reference/resource-properties/description) -- [`tests`](/reference/resource-properties/tests) +- [`tests`](/reference/resource-properties/data-tests) - [`docs`](/reference/resource-configs/docs) - `columns` - [`quote`](/reference/resource-properties/quote) diff --git a/website/docs/faqs/Tests/available-tests.md b/website/docs/faqs/Tests/available-tests.md index f08e6841bd0..2b5fd3ff55c 100644 --- a/website/docs/faqs/Tests/available-tests.md +++ b/website/docs/faqs/Tests/available-tests.md @@ -12,6 +12,6 @@ Out of the box, dbt ships with the following tests: * `accepted_values` * `relationships` (i.e. referential integrity) -You can also write your own [custom schema tests](/docs/build/tests). +You can also write your own [custom generic data tests](/docs/build/data-tests). Some additional custom schema tests have been open-sourced in the [dbt-utils package](https://github.com/dbt-labs/dbt-utils/tree/0.2.4/#schema-tests), check out the docs on [packages](/docs/build/packages) to learn how to make these tests available in your project. 
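Applied to a model, the four out-of-the-box tests listed in the hunk above look like this in a properties file (a sketch; the model and column names are made up):

```yaml
version: 2

models:
  - name: orders                      # hypothetical model
    columns:
      - name: order_id
        tests:
          - unique
          - not_null
      - name: status
        tests:
          - accepted_values:
              values: ['placed', 'shipped', 'returned']
      - name: customer_id
        tests:
          - relationships:            # referential integrity against another model
              to: ref('customers')
              field: customer_id
```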
diff --git a/website/docs/faqs/Tests/custom-test-thresholds.md b/website/docs/faqs/Tests/custom-test-thresholds.md index 34d2eec7494..400a5b4e28b 100644 --- a/website/docs/faqs/Tests/custom-test-thresholds.md +++ b/website/docs/faqs/Tests/custom-test-thresholds.md @@ -10,5 +10,5 @@ As of `v0.20.0`, you can use the `error_if` and `warn_if` configs to set custom For dbt `v0.19.0` and earlier, you could try these possible solutions: -* Setting the [severity](/reference/resource-properties/tests#severity) to `warn`, or: +* Setting the [severity](/reference/resource-properties/data-tests#severity) to `warn`, or: * Writing a [custom generic test](/best-practices/writing-custom-generic-tests) that accepts a threshold argument ([example](https://discourse.getdbt.com/t/creating-an-error-threshold-for-schema-tests/966)) diff --git a/website/docs/guides/building-packages.md b/website/docs/guides/building-packages.md index 641a1c6af6d..55f0c2ed912 100644 --- a/website/docs/guides/building-packages.md +++ b/website/docs/guides/building-packages.md @@ -104,7 +104,7 @@ dbt makes it possible for users of your package to override your model properties defined in a `.yml` file --> config defined in the project file. -Note - Generic tests work a little differently when it comes to specificity. See [test configs](/reference/test-configs). +Note - Generic data tests work a little differently when it comes to specificity. See [test configs](/reference/data-test-configs). Within the project file, configurations are also applied hierarchically. The most specific config always "wins": In the project file, configurations applied to a `marketing` subdirectory will take precedence over configurations applied to the entire `jaffle_shop` project. To apply a configuration to a model, or directory of models, define the resource path as nested dictionary keys. 
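The nested-dictionary-key pattern described at the end of the hunk above can be sketched in `dbt_project.yml` like so (using the `jaffle_shop` and `marketing` names from the example; the `+materialized` values are illustrative):

```yaml
models:
  jaffle_shop:            # applies to every model in the project
    +materialized: view
    marketing:            # more specific, so it wins for models in this subdirectory
      +materialized: table
```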
@@ -76,7 +76,7 @@ Certain properties are special, because: These properties are: - [`description`](/reference/resource-properties/description) -- [`tests`](/reference/resource-properties/tests) +- [`tests`](/reference/resource-properties/data-tests) - [`docs`](/reference/resource-configs/docs) - [`columns`](/reference/resource-properties/columns) - [`quote`](/reference/resource-properties/quote) diff --git a/website/docs/reference/test-configs.md b/website/docs/reference/data-test-configs.md similarity index 87% rename from website/docs/reference/test-configs.md rename to website/docs/reference/data-test-configs.md index 960e8d5471a..5f922d08c6b 100644 --- a/website/docs/reference/test-configs.md +++ b/website/docs/reference/data-test-configs.md @@ -1,8 +1,8 @@ --- -title: Test configurations -description: "Read this guide to learn about using test configurations in dbt." +title: Data test configurations +description: "Read this guide to learn about using data test configurations in dbt." meta: - resource_type: Tests + resource_type: Data tests --- import ConfigResource from '/snippets/_config-description-resource.md'; import ConfigGeneral from '/snippets/_config-description-general.md'; @@ -10,20 +10,20 @@ import ConfigGeneral from '/snippets/_config-description-general.md'; ## Related documentation -* [Tests](/docs/build/tests) +* [Data tests](/docs/build/data-tests) -Tests can be configured in a few different ways: -1. Properties within `.yml` definition (generic tests only, see [test properties](/reference/resource-properties/tests) for full syntax) +Data tests can be configured in a few different ways: +1. Properties within `.yml` definition (generic tests only, see [test properties](/reference/resource-properties/data-tests) for full syntax) 2. A `config()` block within the test's SQL definition 3. In `dbt_project.yml` -Test configs are applied hierarchically, in the order of specificity outlined above. 
In the case of a singular test, the `config()` block within the SQL definition takes precedence over configs in the project file. In the case of a specific instance of a generic test, the test's `.yml` properties would take precedence over any values set in its generic SQL definition's `config()`, which in turn would take precedence over values set in `dbt_project.yml`. +Data test configs are applied hierarchically, in the order of specificity outlined above. In the case of a singular test, the `config()` block within the SQL definition takes precedence over configs in the project file. In the case of a specific instance of a generic test, the test's `.yml` properties would take precedence over any values set in its generic SQL definition's `config()`, which in turn would take precedence over values set in `dbt_project.yml`. ## Available configurations Click the link on each configuration option to read more about what it can do. -### Test-specific configurations +### Data test-specific configurations @@ -204,7 +204,7 @@ version: 2 [alias](/reference/resource-configs/alias): ``` -This configuration mechanism is supported for specific instances of generic tests only. To configure a specific singular test, you should use the `config()` macro in its SQL definition. +This configuration mechanism is supported for specific instances of generic data tests only. To configure a specific singular test, you should use the `config()` macro in its SQL definition. @@ -216,7 +216,7 @@ This configuration mechanism is supported for specific instances of generic test #### Add a tag to one test -If a specific instance of a generic test: +If a specific instance of a generic data test: @@ -232,7 +232,7 @@ models: -If a singular test: +If a singular data test: @@ -244,7 +244,7 @@ select ... -#### Set the default severity for all instances of a generic test +#### Set the default severity for all instances of a generic data test @@ -260,7 +260,7 @@ select ... 
-#### Disable all tests from a package +#### Disable all data tests from a package diff --git a/website/docs/reference/dbt_project.yml.md b/website/docs/reference/dbt_project.yml.md index 34af0f696c7..7b5d54c3e03 100644 --- a/website/docs/reference/dbt_project.yml.md +++ b/website/docs/reference/dbt_project.yml.md @@ -81,7 +81,7 @@ sources: [](source-configs) tests: - [](/reference/test-configs) + [](/reference/data-test-configs) vars: [](/docs/build/project-variables) @@ -153,7 +153,7 @@ sources: [](source-configs) tests: - [](/reference/test-configs) + [](/reference/data-test-configs) vars: [](/docs/build/project-variables) @@ -222,7 +222,7 @@ sources: [](source-configs) tests: - [](/reference/test-configs) + [](/reference/data-test-configs) vars: [](/docs/build/project-variables) diff --git a/website/docs/reference/model-properties.md b/website/docs/reference/model-properties.md index 63adc1f0d63..65f9307b5b3 100644 --- a/website/docs/reference/model-properties.md +++ b/website/docs/reference/model-properties.md @@ -23,9 +23,9 @@ models: [](/reference/model-configs): [constraints](/reference/resource-properties/constraints): - - [tests](/reference/resource-properties/tests): + [tests](/reference/resource-properties/data-tests): - - - ... # declare additional tests + - ... # declare additional data tests [columns](/reference/resource-properties/columns): - name: # required [description](/reference/resource-properties/description): @@ -33,9 +33,9 @@ models: [quote](/reference/resource-properties/quote): true | false [constraints](/reference/resource-properties/constraints): - - [tests](/reference/resource-properties/tests): + [tests](/reference/resource-properties/data-tests): - - - ... # declare additional tests + - ... # declare additional data tests [tags](/reference/resource-configs/tags): [] - name: ... 
# declare properties of additional columns @@ -51,9 +51,9 @@ models: - [config](/reference/resource-properties/config): [](/reference/model-configs): - [tests](/reference/resource-properties/tests): + [tests](/reference/resource-properties/data-tests): - - - ... # declare additional tests + - ... # declare additional data tests columns: # include/exclude columns from the top-level model properties - [include](/reference/resource-properties/include-exclude): @@ -63,9 +63,9 @@ models: [quote](/reference/resource-properties/quote): true | false [constraints](/reference/resource-properties/constraints): - - [tests](/reference/resource-properties/tests): + [tests](/reference/resource-properties/data-tests): - - - ... # declare additional tests + - ... # declare additional data tests [tags](/reference/resource-configs/tags): [] - v: ... # declare additional versions diff --git a/website/docs/reference/node-selection/methods.md b/website/docs/reference/node-selection/methods.md index e29612e3401..2ffe0ea599e 100644 --- a/website/docs/reference/node-selection/methods.md +++ b/website/docs/reference/node-selection/methods.md @@ -173,7 +173,7 @@ dbt test --select "test_type:singular" # run all singular tests The `test_name` method is used to select tests based on the name of the generic test that defines it. For more information about how generic tests are defined, read about -[tests](/docs/build/tests). +[tests](/docs/build/data-tests). ```bash diff --git a/website/docs/reference/project-configs/test-paths.md b/website/docs/reference/project-configs/test-paths.md index e3d0e0b76fa..59f17db05eb 100644 --- a/website/docs/reference/project-configs/test-paths.md +++ b/website/docs/reference/project-configs/test-paths.md @@ -13,7 +13,7 @@ test-paths: [directorypath] ## Definition -Optionally specify a custom list of directories where [singular tests](/docs/build/tests) are located. 
+Optionally specify a custom list of directories where [singular tests](/docs/build/data-tests) are located. ## Default diff --git a/website/docs/reference/resource-configs/alias.md b/website/docs/reference/resource-configs/alias.md index 6b7588ecaf7..e1d3ae41f8b 100644 --- a/website/docs/reference/resource-configs/alias.md +++ b/website/docs/reference/resource-configs/alias.md @@ -112,7 +112,7 @@ When using `--store-failures`, this would return the name `analytics.finance.ord ## Definition -Optionally specify a custom alias for a [model](/docs/build/models), [tests](/docs/build/tests), [snapshots](/docs/build/snapshots), or [seed](/docs/build/seeds). +Optionally specify a custom alias for a [model](/docs/build/models), [data test](/docs/build/data-tests), [snapshot](/docs/build/snapshots), or [seed](/docs/build/seeds). When dbt creates a relation (/) in a database, it creates it as: `{{ database }}.{{ schema }}.{{ identifier }}`, e.g. `analytics.finance.payments` diff --git a/website/docs/reference/resource-configs/database.md b/website/docs/reference/resource-configs/database.md index 7d91358ff01..19c9eca272d 100644 --- a/website/docs/reference/resource-configs/database.md +++ b/website/docs/reference/resource-configs/database.md @@ -70,7 +70,7 @@ This would result in the generated relation being located in the `reporting` dat ## Definition -Optionally specify a custom database for a [model](/docs/build/sql-models), [seed](/docs/build/seeds), or [tests](/docs/build/tests). (To specify a database for a [snapshot](/docs/build/snapshots), use the [`target_database` config](/reference/resource-configs/target_database)). +Optionally specify a custom database for a [model](/docs/build/sql-models), [seed](/docs/build/seeds), or [data test](/docs/build/data-tests). (To specify a database for a [snapshot](/docs/build/snapshots), use the [`target_database` config](/reference/resource-configs/target_database)). 
When dbt creates a relation (/) in a database, it creates it as: `{{ database }}.{{ schema }}.{{ identifier }}`, e.g. `analytics.finance.payments` diff --git a/website/docs/reference/resource-configs/databricks-configs.md b/website/docs/reference/resource-configs/databricks-configs.md index fb3c9e7f5c3..677cad57ce6 100644 --- a/website/docs/reference/resource-configs/databricks-configs.md +++ b/website/docs/reference/resource-configs/databricks-configs.md @@ -38,9 +38,9 @@ When materializing a model as `table`, you may include several optional configs ## Incremental models dbt-databricks plugin leans heavily on the [`incremental_strategy` config](/docs/build/incremental-models#about-incremental_strategy). This config tells the incremental materialization how to build models in runs beyond their first. It can be set to one of four values: - - **`append`** (default): Insert new records without updating or overwriting any existing data. + - **`append`**: Insert new records without updating or overwriting any existing data. - **`insert_overwrite`**: If `partition_by` is specified, overwrite partitions in the with new data. If no `partition_by` is specified, overwrite the entire table with new data. - - **`merge`** (Delta and Hudi file format only): Match records based on a `unique_key`, updating old records, and inserting new ones. (If no `unique_key` is specified, all new data is inserted, similar to `append`.) + - **`merge`** (default; Delta and Hudi file format only): Match records based on a `unique_key`, updating old records, and inserting new ones. (If no `unique_key` is specified, all new data is inserted, similar to `append`.) - **`replace_where`** (Delta file format only): Match records based on `incremental_predicates`, replacing all records that match the predicates from the existing table with records matching the predicates from the new data. (If no `incremental_predicates` are specified, all new data is inserted, similar to `append`.) 
Each of these strategies has its pros and cons, which we'll discuss below. As with any model config, `incremental_strategy` may be specified in `dbt_project.yml` or within a model file's `config()` block. @@ -49,8 +49,6 @@ Each of these strategies has its pros and cons, which we'll discuss below. As wi Following the `append` strategy, dbt will perform an `insert into` statement with all new data. The appeal of this strategy is that it is straightforward and functional across all platforms, file types, connection methods, and Apache Spark versions. However, this strategy _cannot_ update, overwrite, or delete existing data, so it is likely to insert duplicate records for many data sources. -Specifying `append` as the incremental strategy is optional, since it's the default strategy used when none is specified. - -You can't add YAML `meta` configs for [generic tests](/docs/build/tests#generic-tests). However, you can add `meta` properties to [singular tests](/docs/build/tests#singular-tests) using `config()` at the top of the test file. +You can't add YAML `meta` configs for [generic tests](/docs/build/data-tests#generic-data-tests). However, you can add `meta` properties to [singular tests](/docs/build/data-tests#singular-data-tests) using `config()` at the top of the test file. 
diff --git a/website/docs/reference/resource-configs/postgres-configs.md b/website/docs/reference/resource-configs/postgres-configs.md index 8465a5cbb31..fcc0d91a47c 100644 --- a/website/docs/reference/resource-configs/postgres-configs.md +++ b/website/docs/reference/resource-configs/postgres-configs.md @@ -10,16 +10,16 @@ In dbt-postgres, the following incremental materialization strategies are suppor -- `append` (default) -- `delete+insert` +- `append` (default when `unique_key` is not defined) +- `delete+insert` (default when `unique_key` is defined) -- `append` (default) +- `append` (default when `unique_key` is not defined) - `merge` -- `delete+insert` +- `delete+insert` (default when `unique_key` is defined) diff --git a/website/docs/reference/resource-configs/redshift-configs.md b/website/docs/reference/resource-configs/redshift-configs.md index b559c0451b0..85b2af0c552 100644 --- a/website/docs/reference/resource-configs/redshift-configs.md +++ b/website/docs/reference/resource-configs/redshift-configs.md @@ -16,16 +16,16 @@ In dbt-redshift, the following incremental materialization strategies are suppor -- `append` (default) -- `delete+insert` - +- `append` (default when `unique_key` is not defined) +- `delete+insert` (default when `unique_key` is defined) + -- `append` (default) +- `append` (default when `unique_key` is not defined) - `merge` -- `delete+insert` +- `delete+insert` (default when `unique_key` is defined) diff --git a/website/docs/reference/resource-configs/store_failures_as.md b/website/docs/reference/resource-configs/store_failures_as.md index a9149360089..dd61030afb8 100644 --- a/website/docs/reference/resource-configs/store_failures_as.md +++ b/website/docs/reference/resource-configs/store_failures_as.md @@ -17,7 +17,7 @@ You can configure it in all the same places as `store_failures`, including singu #### Singular test -[Singular test](https://docs.getdbt.com/docs/build/tests#singular-tests) in `tests/singular/check_something.sql` file 
+[Singular test](https://docs.getdbt.com/docs/build/data-tests#singular-data-tests) in `tests/singular/check_something.sql` file ```sql {{ config(store_failures_as="table") }} @@ -29,7 +29,7 @@ where 1=0 #### Generic test -[Generic tests](https://docs.getdbt.com/docs/build/tests#generic-tests) in `models/_models.yml` file +[Generic tests](https://docs.getdbt.com/docs/build/data-tests#generic-data-tests) in `models/_models.yml` file ```yaml models: @@ -70,7 +70,7 @@ As with most other configurations, `store_failures_as` is "clobbered" when appli Additional resources: -- [Test configurations](/reference/test-configs#related-documentation) -- [Test-specific configurations](/reference/test-configs#test-specific-configurations) +- [Data test configurations](/reference/data-test-configs#related-documentation) +- [Data test-specific configurations](/reference/data-test-configs#data-test-specific-configurations) - [Configuring directories of models in dbt_project.yml](/reference/model-configs#configuring-directories-of-models-in-dbt_projectyml) - [Config inheritance](/reference/configs-and-properties#config-inheritance) \ No newline at end of file diff --git a/website/docs/reference/resource-properties/columns.md b/website/docs/reference/resource-properties/columns.md index ff8aa8734c6..74727977feb 100644 --- a/website/docs/reference/resource-properties/columns.md +++ b/website/docs/reference/resource-properties/columns.md @@ -28,7 +28,7 @@ models: data_type: [description](/reference/resource-properties/description): [quote](/reference/resource-properties/quote): true | false - [tests](/reference/resource-properties/tests): ... + [tests](/reference/resource-properties/data-tests): ... [tags](/reference/resource-configs/tags): ... [meta](/reference/resource-configs/meta): ...
- name: @@ -55,7 +55,7 @@ sources: [description](/reference/resource-properties/description): data_type: [quote](/reference/resource-properties/quote): true | false - [tests](/reference/resource-properties/tests): ... + [tests](/reference/resource-properties/data-tests): ... [tags](/reference/resource-configs/tags): ... [meta](/reference/resource-configs/meta): ... - name: @@ -81,7 +81,7 @@ seeds: [description](/reference/resource-properties/description): data_type: [quote](/reference/resource-properties/quote): true | false - [tests](/reference/resource-properties/tests): ... + [tests](/reference/resource-properties/data-tests): ... [tags](/reference/resource-configs/tags): ... [meta](/reference/resource-configs/meta): ... - name: @@ -106,7 +106,7 @@ snapshots: [description](/reference/resource-properties/description): data_type: [quote](/reference/resource-properties/quote): true | false - [tests](/reference/resource-properties/tests): ... + [tests](/reference/resource-properties/data-tests): ... [tags](/reference/resource-configs/tags): ... [meta](/reference/resource-configs/meta): ... - name: diff --git a/website/docs/reference/resource-properties/config.md b/website/docs/reference/resource-properties/config.md index 55d2f64d9ff..89d189d8a78 100644 --- a/website/docs/reference/resource-properties/config.md +++ b/website/docs/reference/resource-properties/config.md @@ -98,7 +98,7 @@ version: 2 - [](#test_name): : config: - [](/reference/test-configs): + [](/reference/data-test-configs): ... 
``` diff --git a/website/docs/reference/resource-properties/tests.md b/website/docs/reference/resource-properties/data-tests.md similarity index 83% rename from website/docs/reference/resource-properties/tests.md rename to website/docs/reference/resource-properties/data-tests.md index 0fe86ccc57d..ce557ebeb4f 100644 --- a/website/docs/reference/resource-properties/tests.md +++ b/website/docs/reference/resource-properties/data-tests.md @@ -1,8 +1,8 @@ --- -title: "About tests property" -sidebar_label: "tests" +title: "About data tests property" +sidebar_label: "Data tests" resource_types: all -datatype: test +datatype: data-test keywords: [test, tests, custom tests, custom test name, test name] --- @@ -30,7 +30,7 @@ models: - [](#test_name): : [config](/reference/resource-properties/config): - [](/reference/test-configs): + [](/reference/data-test-configs): [columns](/reference/resource-properties/columns): - name: @@ -39,7 +39,7 @@ models: - [](#test_name): : [config](/reference/resource-properties/config): - [](/reference/test-configs): + [](/reference/data-test-configs): ``` @@ -62,7 +62,7 @@ sources: - [](#test_name): : [config](/reference/resource-properties/config): - [](/reference/test-configs): + [](/reference/data-test-configs): columns: - name: @@ -71,7 +71,7 @@ sources: - [](#test_name): : [config](/reference/resource-properties/config): - [](/reference/test-configs): + [](/reference/data-test-configs): ``` @@ -93,7 +93,7 @@ seeds: - [](#test_name): : [config](/reference/resource-properties/config): - [](/reference/test-configs): + [](/reference/data-test-configs): columns: - name: @@ -102,7 +102,7 @@ seeds: - [](#test_name): : [config](/reference/resource-properties/config): - [](/reference/test-configs): + [](/reference/data-test-configs): ``` @@ -124,7 +124,7 @@ snapshots: - [](#test_name): : [config](/reference/resource-properties/config): - [](/reference/test-configs): + [](/reference/data-test-configs): columns: - name: @@ -133,7 +133,7 @@ 
snapshots: - [](#test_name): : [config](/reference/resource-properties/config): - [](/reference/test-configs): + [](/reference/data-test-configs): ``` @@ -152,17 +152,17 @@ This feature is not implemented for analyses. ## Related documentation -* [Testing guide](/docs/build/tests) +* [Data testing guide](/docs/build/data-tests) ## Description -The `tests` property defines assertions about a column, , or . The property contains a list of [generic tests](/docs/build/tests#generic-tests), referenced by name, which can include the four built-in generic tests available in dbt. For example, you can add tests that ensure a column contains no duplicates and zero null values. Any arguments or [configurations](/reference/test-configs) passed to those tests should be nested below the test name. +The data `tests` property defines assertions about a column, , or . The property contains a list of [generic tests](/docs/build/data-tests#generic-data-tests), referenced by name, which can include the four built-in generic tests available in dbt. For example, you can add tests that ensure a column contains no duplicates and zero null values. Any arguments or [configurations](/reference/data-test-configs) passed to those tests should be nested below the test name. Once these tests are defined, you can validate their correctness by running `dbt test`. -## Out-of-the-box tests +## Out-of-the-box data tests -There are four generic tests that are available out of the box, for everyone using dbt. +There are four generic data tests that are available out of the box, for everyone using dbt. ### `not_null` @@ -262,7 +262,7 @@ The `to` argument accepts a [Relation](/reference/dbt-classes#relation) – this ## Additional examples ### Test an expression -Some tests require multiple columns, so it doesn't make sense to nest them under the `columns:` key. 
In this case, you can apply the test to the model (or source, seed, or snapshot) instead: +Some data tests require multiple columns, so it doesn't make sense to nest them under the `columns:` key. In this case, you can apply the data test to the model (or source, seed, or snapshot) instead: @@ -300,7 +300,7 @@ models: Check out the guide on writing a [custom generic test](/best-practices/writing-custom-generic-tests) for more information. -### Custom test name +### Custom data test name By default, dbt will synthesize a name for your generic test by concatenating: - test name (`not_null`, `unique`, etc) @@ -434,11 +434,11 @@ $ dbt test 12:48:04 Done. PASS=2 WARN=0 ERROR=0 SKIP=0 TOTAL=2 ``` -**If using [`store_failures`](/reference/resource-configs/store_failures):** dbt uses each test's name as the name of the table in which to store any failing records. If you have defined a custom name for one test, that custom name will also be used for its table of failures. You may optionally configure an [`alias`](/reference/resource-configs/alias) for the test, to separately control both the name of the test (for metadata) and the name of its database table (for storing failures). +**If using [`store_failures`](/reference/resource-configs/store_failures):** dbt uses each data test's name as the name of the table in which to store any failing records. If you have defined a custom name for one test, that custom name will also be used for its table of failures. You may optionally configure an [`alias`](/reference/resource-configs/alias) for the test, to separately control both the name of the test (for metadata) and the name of its database table (for storing failures). ### Alternative format for defining tests -When defining a generic test with several arguments and configurations, the YAML can look and feel unwieldy. If you find it easier, you can define the same test properties as top-level keys of a single dictionary, by providing the test name as `test_name` instead. 
It's totally up to you. +When defining a generic data test with several arguments and configurations, the YAML can look and feel unwieldy. If you find it easier, you can define the same test properties as top-level keys of a single dictionary, by providing the test name as `test_name` instead. It's totally up to you. This example is identical to the one above: diff --git a/website/docs/reference/seed-properties.md b/website/docs/reference/seed-properties.md index 85e7be21ae1..9201df65f4c 100644 --- a/website/docs/reference/seed-properties.md +++ b/website/docs/reference/seed-properties.md @@ -18,7 +18,7 @@ seeds: show: true | false [config](/reference/resource-properties/config): [](/reference/seed-configs): - [tests](/reference/resource-properties/tests): + [tests](/reference/resource-properties/data-tests): - - ... # declare additional tests columns: @@ -27,7 +27,7 @@ seeds: [meta](/reference/resource-configs/meta): {} [quote](/reference/resource-properties/quote): true | false [tags](/reference/resource-configs/tags): [] - [tests](/reference/resource-properties/tests): + [tests](/reference/resource-properties/data-tests): - - ... # declare additional tests diff --git a/website/docs/reference/snapshot-properties.md b/website/docs/reference/snapshot-properties.md index 301747e9325..8f01fd8e988 100644 --- a/website/docs/reference/snapshot-properties.md +++ b/website/docs/reference/snapshot-properties.md @@ -22,7 +22,7 @@ snapshots: show: true | false [config](/reference/resource-properties/config): [](/reference/snapshot-configs): - [tests](/reference/resource-properties/tests): + [tests](/reference/resource-properties/data-tests): - - ... columns: @@ -31,7 +31,7 @@ snapshots: [meta](/reference/resource-configs/meta): {} [quote](/reference/resource-properties/quote): true | false [tags](/reference/resource-configs/tags): [] - [tests](/reference/resource-properties/tests): + [tests](/reference/resource-properties/data-tests): - - ... # declare additional tests - ... 
# declare properties of additional columns diff --git a/website/docs/reference/source-properties.md b/website/docs/reference/source-properties.md index d107881967e..aa95a19327c 100644 --- a/website/docs/reference/source-properties.md +++ b/website/docs/reference/source-properties.md @@ -57,7 +57,7 @@ sources: [meta](/reference/resource-configs/meta): {} [identifier](/reference/resource-properties/identifier): [loaded_at_field](/reference/resource-properties/freshness#loaded_at_field): - [tests](/reference/resource-properties/tests): + [tests](/reference/resource-properties/data-tests): - - ... # declare additional tests [tags](/reference/resource-configs/tags): [] @@ -80,7 +80,7 @@ sources: [description](/reference/resource-properties/description): [meta](/reference/resource-configs/meta): {} [quote](/reference/resource-properties/quote): true | false - [tests](/reference/resource-properties/tests): + [tests](/reference/resource-properties/data-tests): - - ... # declare additional tests [tags](/reference/resource-configs/tags): [] diff --git a/website/docs/terms/data-wrangling.md b/website/docs/terms/data-wrangling.md index b164855ff9b..46a14a25949 100644 --- a/website/docs/terms/data-wrangling.md +++ b/website/docs/terms/data-wrangling.md @@ -150,9 +150,9 @@ For nested data types such as JSON, you’ll want to check out the JSON parsing ### Validating -dbt offers [generic tests](/docs/build/tests#more-generic-tests) in every dbt project that allows you to validate accepted, unique, and null values. They also allow you to validate the relationships between tables and that the primary key is unique. +dbt offers [generic data tests](/docs/build/data-tests#more-generic-data-tests) in every dbt project that allow you to validate accepted, unique, and null values. They also allow you to validate the relationships between tables and that the primary key is unique.
-If you can’t find what you need with the generic tests, you can download an additional dbt testing package called [dbt_expectations](https://hub.getdbt.com/calogica/dbt_expectations/0.1.2/) that dives even deeper into how you can test the values in your columns. This package has useful tests like `expect_column_values_to_be_in_type_list`, `expect_column_values_to_be_between`, and `expect_column_value_lengths_to_equal`. +If you can’t find what you need with the generic tests, you can download an additional dbt testing package called [dbt_expectations](https://hub.getdbt.com/calogica/dbt_expectations/0.1.2/) that dives even deeper into how you can test the values in your columns. This package has useful data tests like `expect_column_values_to_be_in_type_list`, `expect_column_values_to_be_between`, and `expect_column_value_lengths_to_equal`. ## Conclusion diff --git a/website/docs/terms/primary-key.md b/website/docs/terms/primary-key.md index 4acd1e8c46d..fde3ff44ac7 100644 --- a/website/docs/terms/primary-key.md +++ b/website/docs/terms/primary-key.md @@ -108,7 +108,7 @@ In general for Redshift, it’s still good practice to define your primary keys ### Google BigQuery -BigQuery is pretty unique here in that it doesn’t support or enforce primary keys. If your team is on BigQuery, you’ll need to have some [pretty solid testing](/docs/build/tests) in place to ensure your primary key fields are unique and non-null. +BigQuery is pretty unique here in that it doesn’t support or enforce primary keys. If your team is on BigQuery, you’ll need to have some [pretty solid data testing](/docs/build/data-tests) in place to ensure your primary key fields are unique and non-null. ### Databricks @@ -141,7 +141,7 @@ If you don't have a field in your table that would act as a natural primary key, If your data warehouse doesn’t provide out-of-the box support and enforcement for primary keys, it’s important to clearly label and put your own constraints on primary key fields. 
This could look like: * **Creating a consistent naming convention for your primary keys**: You may see an `id` field or fields prefixed with `pk_` (ex. `pk_order_id`) to identify primary keys. You may also see the primary key be named as the obvious table grain (ex. In the jaffle shop’s `orders` table, the primary key is called `order_id`). -* **Adding automated [tests](/docs/build/tests) to your data models**: Use a data tool, such as dbt, to create not null and unique tests for your primary key fields. +* **Adding automated [data tests](/docs/build/data-tests) to your data models**: Use a data tool, such as dbt, to create not null and unique tests for your primary key fields. ## Testing primary keys diff --git a/website/docs/terms/surrogate-key.md b/website/docs/terms/surrogate-key.md index e57a0b74a7f..1c4d7f21d57 100644 --- a/website/docs/terms/surrogate-key.md +++ b/website/docs/terms/surrogate-key.md @@ -177,7 +177,7 @@ After executing this, the table would now have the `unique_id` field now uniquel Amazing, you just made a surrogate key! You can just move on to the next data model, right? No!! It’s critically important to test your surrogate keys for uniqueness and non-null values to ensure that the correct fields were chosen to create the surrogate key. -In order to test for null and unique values you can utilize code-based tests like [dbt tests](/docs/build/tests), that can check fields for nullness and uniqueness. You can additionally utilize simple SQL queries or unit tests to check if surrogate key count and non-nullness is correct. +In order to test for null and unique values, you can utilize code-based data tests like [dbt tests](/docs/build/data-tests) that can check fields for nullness and uniqueness. You can additionally utilize simple SQL queries or unit tests to check if surrogate key count and non-nullness is correct.
## A note on hashing algorithms diff --git a/website/sidebars.js b/website/sidebars.js index ba00ba582b2..8d7be07d491 100644 --- a/website/sidebars.js +++ b/website/sidebars.js @@ -277,7 +277,7 @@ const sidebarSettings = { }, "docs/build/snapshots", "docs/build/seeds", - "docs/build/tests", + "docs/build/data-tests", "docs/build/jinja-macros", "docs/build/sources", "docs/build/exposures", @@ -744,7 +744,7 @@ const sidebarSettings = { "reference/resource-properties/latest_version", "reference/resource-properties/include-exclude", "reference/resource-properties/quote", - "reference/resource-properties/tests", + "reference/resource-properties/data-tests", "reference/resource-properties/versions", ], }, @@ -810,7 +810,7 @@ const sidebarSettings = { type: "category", label: "For tests", items: [ - "reference/test-configs", + "reference/data-test-configs", "reference/resource-configs/fail_calc", "reference/resource-configs/limit", "reference/resource-configs/severity", diff --git a/website/snippets/_run-result.md b/website/snippets/_run-result.md index 77a35676e86..28de3a97cb6 100644 --- a/website/snippets/_run-result.md +++ b/website/snippets/_run-result.md @@ -1,2 +1,2 @@ -- `adapter_response`: Dictionary of metadata returned from the database, which varies by adapter. For example, success `code`, number of `rows_affected`, total `bytes_processed`, and so on. Not applicable for [tests](/docs/build/tests). +- `adapter_response`: Dictionary of metadata returned from the database, which varies by adapter. For example, success `code`, number of `rows_affected`, total `bytes_processed`, and so on. Not applicable for [tests](/docs/build/data-tests). * `rows_affected` returns the number of rows modified by the last statement executed. In cases where the query's row count can't be determined or isn't applicable (such as when creating a view), a [standard value](https://peps.python.org/pep-0249/#rowcount) of `-1` is returned for `rowcount`. 
diff --git a/website/snippets/tutorial-add-tests-to-models.md b/website/snippets/tutorial-add-tests-to-models.md index 491fc72ba85..f743c2bf947 100644 --- a/website/snippets/tutorial-add-tests-to-models.md +++ b/website/snippets/tutorial-add-tests-to-models.md @@ -1,4 +1,4 @@ -Adding [tests](/docs/build/tests) to a project helps validate that your models are working correctly. +Adding [tests](/docs/build/data-tests) to a project helps validate that your models are working correctly. To add tests to your project: diff --git a/website/vercel.json b/website/vercel.json index 3377b49278d..5cdc2656948 100644 --- a/website/vercel.json +++ b/website/vercel.json @@ -2,6 +2,21 @@ "cleanUrls": true, "trailingSlash": false, "redirects": [ + { + "source": "/reference/test-configs", + "destination": "/reference/data-test-configs", + "permanent": true + }, + { + "source": "/reference/resource-properties/tests", + "destination": "/reference/resource-properties/data-tests", + "permanent": true + }, + { + "source": "/docs/build/tests", + "destination": "/docs/build/data-tests", + "permanent": true + }, { "source": "/docs/cloud/dbt-cloud-ide", "destination": "/docs/cloud/dbt-cloud-ide/develop-in-the-cloud",