
Commit

Merge branch 'current' into dbeatty/tabular-incremental-strategies
matthewshaver authored Dec 14, 2023
2 parents 29b9c8a + 3b5db5c commit aa9d8d3
Showing 57 changed files with 173 additions and 158 deletions.
2 changes: 1 addition & 1 deletion website/blog/2021-11-22-dbt-labs-pr-template.md
@@ -70,7 +70,7 @@ Checking for things like modularity and 1:1 relationships between sources and st

#### Validation of models:

-This section should show something to confirm that your model is doing what you intended it to do. This could be a [dbt test](/docs/build/tests) like uniqueness or not null, or could be an ad-hoc query that you wrote to validate your data. Here is a screenshot from a test run on a local development branch:
+This section should show something to confirm that your model is doing what you intended it to do. This could be a [dbt test](/docs/build/data-tests) like uniqueness or not null, or could be an ad-hoc query that you wrote to validate your data. Here is a screenshot from a test run on a local development branch:

![test validation](/img/blog/pr-template-test-validation.png "dbt test validation")

2 changes: 1 addition & 1 deletion website/blog/2021-11-22-primary-keys.md
@@ -51,7 +51,7 @@ In the days before testing your data was commonplace, you often found out that y

## How to test primary keys with dbt

-Today, you can add two simple [dbt tests](/docs/build/tests) onto your primary keys and feel secure that you are going to catch the vast majority of problems in your data.
+Today, you can add two simple [dbt tests](/docs/build/data-tests) onto your primary keys and feel secure that you are going to catch the vast majority of problems in your data.

Not surprisingly, these two tests correspond to the two most common errors found on your primary keys, and are usually the first tests that teams testing data with dbt implement:

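For readers skimming this diff, the two primary key tests the post goes on to list are dbt's built-in `unique` and `not_null` tests. A minimal sketch of how they are typically declared — the model and column names below are hypothetical, not taken from the post:

```yaml
# Hypothetical schema.yml entry showing the two standard primary key tests
models:
  - name: orders            # hypothetical model name
    columns:
      - name: order_id      # hypothetical primary key column
        tests:
          - unique          # catches duplicate key values
          - not_null        # catches missing key values
```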
2 changes: 1 addition & 1 deletion website/blog/2021-11-29-dbt-airflow-spiritual-alignment.md
@@ -90,7 +90,7 @@ So instead of getting bogged down in defining roles, let’s focus on hard skill
The common skills needed for implementing any flavor of dbt (Core or Cloud) are:

* SQL: ‘nuff said
-* YAML: required to generate config files for [writing tests on data models](/docs/build/tests)
+* YAML: required to generate config files for [writing tests on data models](/docs/build/data-tests)
* [Jinja](/guides/using-jinja): allows you to write DRY code (using [macros](/docs/build/jinja-macros), for loops, if statements, etc)

YAML + Jinja can be learned pretty quickly, but SQL is the non-negotiable you’ll need to get started.
@@ -87,7 +87,7 @@ The most important thing we’re introducing when your project is an infant is t

* Introduce modularity with [{{ ref() }}](/reference/dbt-jinja-functions/ref) and [{{ source() }}](/reference/dbt-jinja-functions/source)

-* [Document](/docs/collaborate/documentation) and [test](/docs/build/tests) your first models
+* [Document](/docs/collaborate/documentation) and [test](/docs/build/data-tests) your first models

![image alt text](/img/blog/building-a-mature-dbt-project-from-scratch/image_3.png)

2 changes: 1 addition & 1 deletion website/blog/2022-04-19-complex-deduplication.md
@@ -146,7 +146,7 @@ select * from filter_real_diffs

> *What happens in this step? You check your data because you are thorough!*
-Good thing dbt has already built this for you. Add a [unique test](/docs/build/tests#generic-tests) to your YAML model block for your `grain_id` in this de-duped staging model, and give it a dbt test!
+Good thing dbt has already built this for you. Add a [unique test](/docs/build/data-tests#generic-data-tests) to your YAML model block for your `grain_id` in this de-duped staging model, and give it a dbt test!

```yaml
models:
2 changes: 1 addition & 1 deletion website/blog/2022-09-28-analyst-to-ae.md
@@ -111,7 +111,7 @@ The analyst caught the issue because they have the appropriate context to valida

An analyst is able to identify which areas do *not* need to be 100% accurate, which means they can also identify which areas *do* need to be 100% accurate.

-> dbt makes it very quick to add [data quality tests](/docs/build/tests). In fact, it’s so quick, that it’ll take an analyst longer to write up what tests they want than it would take for an analyst to completely finish coding them.
+> dbt makes it very quick to add [data quality tests](/docs/build/data-tests). In fact, it’s so quick, that it’ll take an analyst longer to write up what tests they want than it would take for an analyst to completely finish coding them.
When data quality issues are identified by the business, we often see that analysts are the first ones to be asked:

@@ -133,9 +133,9 @@ This model tries to parse the raw string value into a Python datetime. When not

#### Testing the result

-During the build process, dbt will check if any of the values are null. This is using the built-in [`not_null`](https://docs.getdbt.com/docs/building-a-dbt-project/tests#generic-tests) test, which will generate and execute SQL in the data platform.
+During the build process, dbt will check if any of the values are null. This is using the built-in [`not_null`](https://docs.getdbt.com/docs/building-a-dbt-project/tests#generic-data-tests) test, which will generate and execute SQL in the data platform.

-Our initial recommendation for testing Python models is to use [generic](https://docs.getdbt.com/docs/building-a-dbt-project/tests#generic-tests) and [singular](https://docs.getdbt.com/docs/building-a-dbt-project/tests#singular-tests) tests.
+Our initial recommendation for testing Python models is to use [generic](https://docs.getdbt.com/docs/building-a-dbt-project/tests#generic-data-tests) and [singular](https://docs.getdbt.com/docs/building-a-dbt-project/tests#singular-data-tests) tests.

```yaml
version: 2
2 changes: 1 addition & 1 deletion website/blog/2023-01-24-aggregating-test-failures.md
@@ -30,7 +30,7 @@ _It should be noted that this framework is for dbt v1.0+ on BigQuery. Small adap

When we talk about high quality data tests, we aren’t just referencing high quality code, but rather the informational quality of our testing framework and their corresponding error messages. Originally, we theorized that any test that cannot be acted upon is a test that should not be implemented. Later, we realized there is a time and place for tests that should receive attention at a critical mass of failures. All we needed was a higher specificity system: tests should have an explicit severity ranking associated with them, equipped to filter out the noise of common, but low concern, failures. Each test should also mesh into established [RACI](https://project-management.com/understanding-responsibility-assignment-matrix-raci-matrix/) guidelines that state which groups tackle what failures, and what constitutes a critical mass.

-To ensure that tests are always acted upon, we implement tests differently depending on the user groups that must act when a test fails. This led us to have two main classes of tests — Data Integrity Tests (called [Generic Tests](https://docs.getdbt.com/docs/build/tests) in dbt docs) and Context Driven Tests (called [Singular Tests](https://docs.getdbt.com/docs/build/tests#singular-tests) in dbt docs), with varying levels of severity across both test classes.
+To ensure that tests are always acted upon, we implement tests differently depending on the user groups that must act when a test fails. This led us to have two main classes of tests — Data Integrity Tests (called [Generic Tests](https://docs.getdbt.com/docs/build/tests) in dbt docs) and Context Driven Tests (called [Singular Tests](https://docs.getdbt.com/docs/build/tests#singular-data-tests) in dbt docs), with varying levels of severity across both test classes.

Data Integrity tests (Generic Tests)  are simple — they’re tests akin to a uniqueness check or not null constraint. These tests are usually actionable by the data platform team rather than subject matter experts. We define Data Integrity tests in our YAML files, similar to how they are [outlined by dbt’s documentation on generic tests](https://docs.getdbt.com/docs/build/tests). They look something like this —

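The post's own YAML example is cut off in this view. As a hedged sketch only (not the post's actual code), a generic test with an explicit severity configuration — the kind of severity ranking described above — might look like the following, with hypothetical model, column, and threshold values:

```yaml
# Hypothetical generic (data integrity) test with severity thresholds
models:
  - name: dim_customers          # hypothetical model
    columns:
      - name: customer_id        # hypothetical column
        tests:
          - not_null:
              config:
                severity: error
                warn_if: ">0"      # low-concern failures surface as warnings
                error_if: ">100"   # a critical mass of failures fails the build
```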
2 changes: 1 addition & 1 deletion website/blog/2023-07-03-data-vault-2-0-with-dbt-cloud.md
@@ -143,7 +143,7 @@ To help you get started, [we have created a template GitHub project](https://git

### Entity Relation Diagrams (ERDs) and dbt

-Data lineage is dbt's strength, but sometimes it's not enough to help you to understand the relationships between Data Vault components like a classic ERD would. There are a few open source packages to visualize the entities in your Data Vault built with dbt. I recommend checking out the [dbterd](https://dbterd.datnguyen.de/1.2/index.html) which turns your [dbt relationship data quality checks](https://docs.getdbt.com/docs/build/tests#generic-tests) into an ERD.
+Data lineage is dbt's strength, but sometimes it's not enough to help you to understand the relationships between Data Vault components like a classic ERD would. There are a few open source packages to visualize the entities in your Data Vault built with dbt. I recommend checking out the [dbterd](https://dbterd.datnguyen.de/1.2/index.html) which turns your [dbt relationship data quality checks](https://docs.getdbt.com/docs/build/tests#generic-data-tests) into an ERD.

## Summary

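For context, the "relationship data quality checks" mentioned above are dbt's built-in `relationships` test, which tools like dbterd can read as foreign-key edges when drawing an ERD. A minimal, hypothetical sketch (the hub and link names are illustrative, not from the template project):

```yaml
# Hypothetical relationships test between a Data Vault link and hub
models:
  - name: link_customer_order       # hypothetical link model
    columns:
      - name: hk_customer           # hypothetical hash key column
        tests:
          - relationships:
              to: ref('hub_customer')   # hypothetical hub model
              field: hk_customer
```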
12 changes: 6 additions & 6 deletions website/docs/best-practices/custom-generic-tests.md
@@ -1,15 +1,15 @@
---
title: "Writing custom generic tests"
title: "Writing custom generic data tests"
id: "writing-custom-generic-tests"
-description: Learn how to define your own custom generic tests.
-displayText: Writing custom generic tests
-hoverSnippet: Learn how to define your own custom generic tests.
+description: Learn how to define your own custom generic data tests.
+displayText: Writing custom generic data tests
+hoverSnippet: Learn how to write your own custom generic data tests.
---

-dbt ships with [Not Null](/reference/resource-properties/tests#not-null), [Unique](/reference/resource-properties/tests#unique), [Relationships](/reference/resource-properties/tests#relationships), and [Accepted Values](/reference/resource-properties/tests#accepted-values) generic tests. (These used to be called "schema tests," and you'll still see that name in some places.) Under the hood, these generic tests are defined as `test` blocks (like macros) in a globally accessible dbt project. You can find the source code for these tests in the [global project](https://github.com/dbt-labs/dbt-core/tree/main/core/dbt/include/global_project/macros/generic_test_sql).
+dbt ships with [Not Null](/reference/resource-properties/data-tests#not-null), [Unique](/reference/resource-properties/data-tests#unique), [Relationships](/reference/resource-properties/data-tests#relationships), and [Accepted Values](/reference/resource-properties/data-tests#accepted-values) generic data tests. (These used to be called "schema tests," and you'll still see that name in some places.) Under the hood, these generic data tests are defined as `test` blocks (like macros) in a globally accessible dbt project. You can find the source code for these tests in the [global project](https://github.com/dbt-labs/dbt-core/tree/main/core/dbt/include/global_project/macros/generic_test_sql).

:::info
-There are tons of generic tests defined in open source packages, such as [dbt-utils](https://hub.getdbt.com/dbt-labs/dbt_utils/latest/) and [dbt-expectations](https://hub.getdbt.com/calogica/dbt_expectations/latest/) — the test you're looking for might already be here!
+There are tons of generic data tests defined in open source packages, such as [dbt-utils](https://hub.getdbt.com/dbt-labs/dbt_utils/latest/) and [dbt-expectations](https://hub.getdbt.com/calogica/dbt_expectations/latest/) — the test you're looking for might already be here!
:::

### Generic tests with standard arguments
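The section cut off above ("Generic tests with standard arguments") covers defining a custom generic data test as a `test` block. As a hedged sketch of how such a test is then applied from a model's YAML file — the test name `is_even` and the model below are hypothetical placeholders:

```yaml
# Hypothetical usage of a custom generic test defined elsewhere as a `test` block
models:
  - name: payments          # hypothetical model
    columns:
      - name: amount
        tests:
          - is_even         # custom generic test receiving the standard model/column arguments
```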

