Merge branch 'current' into content/model-config/on-configuration-change
mikealfare authored Dec 18, 2023
2 parents 95f2f6d + f52494a commit 80f2b60
Showing 102 changed files with 517 additions and 1,674 deletions.
3 changes: 3 additions & 0 deletions contributing/content-style-guide.md
@@ -479,6 +479,9 @@ Some common Latin abbreviations and other words to use instead:
| i.e. | that is | Use incremental models when your dbt runs are becoming too slow (that is, don't start with incremental models) |
| e.g. | <ul><li>for example</li><li>like</li></ul> | <ul><li>Join both the dedicated #adapter-ecosystem channel in dbt Slack and the channel for your adapter's data store (for example, #db-sqlserver and #db-athena)</li><li>Using Jinja in SQL provides a way to use control structures (like `if` statements and `for` loops) in your queries </li></ul> |
| etc. | <ul><li>and more</li><li>and so forth</li></ul> | <ul><li>A continuous integration environment running pull requests in GitHub, GitLab, and more</li><li>While reasonable defaults are provided for many such operations (like `create_schema`, `drop_schema`, `create_table`, and so forth), you might need to override one or more macros when building a new adapter</li></ul> |
+ | N.B. | note | Note: State-based selection is a powerful, complex feature. |
+
+ https://www.thoughtco.com/n-b-latin-abbreviations-in-english-3972787

### Prepositions

2 changes: 1 addition & 1 deletion website/blog/2021-11-22-dbt-labs-pr-template.md
@@ -70,7 +70,7 @@ Checking for things like modularity and 1:1 relationships between sources and st

#### Validation of models:

- This section should show something to confirm that your model is doing what you intended it to do. This could be a [dbt test](/docs/build/tests) like uniqueness or not null, or could be an ad-hoc query that you wrote to validate your data. Here is a screenshot from a test run on a local development branch:
+ This section should show something to confirm that your model is doing what you intended it to do. This could be a [dbt test](/docs/build/data-tests) like uniqueness or not null, or could be an ad-hoc query that you wrote to validate your data. Here is a screenshot from a test run on a local development branch:

![test validation](/img/blog/pr-template-test-validation.png "dbt test validation")

2 changes: 1 addition & 1 deletion website/blog/2021-11-22-primary-keys.md
@@ -51,7 +51,7 @@ In the days before testing your data was commonplace, you often found out that y

## How to test primary keys with dbt

- Today, you can add two simple [dbt tests](/docs/build/tests) onto your primary keys and feel secure that you are going to catch the vast majority of problems in your data.
+ Today, you can add two simple [dbt tests](/docs/build/data-tests) onto your primary keys and feel secure that you are going to catch the vast majority of problems in your data.

Not surprisingly, these two tests correspond to the two most common errors found on your primary keys, and are usually the first tests that teams testing data with dbt implement:

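Those two tests are `unique` and `not_null`. As a minimal YAML sketch (the model and column names here are hypothetical):

```yaml
models:
  - name: orders            # hypothetical model name
    columns:
      - name: order_id      # hypothetical primary key column
        tests:
          - unique          # no two rows may share a key
          - not_null        # every row must have a key
```
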
2 changes: 1 addition & 1 deletion website/blog/2021-11-29-dbt-airflow-spiritual-alignment.md
@@ -90,7 +90,7 @@ So instead of getting bogged down in defining roles, let’s focus on hard skill
The common skills needed for implementing any flavor of dbt (Core or Cloud) are:

* SQL: ‘nuff said
- * YAML: required to generate config files for [writing tests on data models](/docs/build/tests)
+ * YAML: required to generate config files for [writing tests on data models](/docs/build/data-tests)
* [Jinja](/guides/using-jinja): allows you to write DRY code (using [macros](/docs/build/jinja-macros), for loops, if statements, etc)

YAML + Jinja can be learned pretty quickly, but SQL is the non-negotiable you’ll need to get started.
@@ -87,7 +87,7 @@ The most important thing we’re introducing when your project is an infant is t

* Introduce modularity with [{{ ref() }}](/reference/dbt-jinja-functions/ref) and [{{ source() }}](/reference/dbt-jinja-functions/source)

- * [Document](/docs/collaborate/documentation) and [test](/docs/build/tests) your first models
+ * [Document](/docs/collaborate/documentation) and [test](/docs/build/data-tests) your first models
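
As a sketch of what that `ref`/`source` modularity looks like in practice (the file, source, and model names here are hypothetical):

```sql
-- models/staging/stg_orders.sql (hypothetical file)
-- read from a declared source rather than a hard-coded table
select * from {{ source('jaffle_shop', 'orders') }}

-- models/marts/orders.sql (hypothetical file)
-- build on the staging model via ref, so dbt tracks the dependency
select * from {{ ref('stg_orders') }}
```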

![image alt text](/img/blog/building-a-mature-dbt-project-from-scratch/image_3.png)

2 changes: 1 addition & 1 deletion website/blog/2022-04-19-complex-deduplication.md
@@ -146,7 +146,7 @@ select * from filter_real_diffs

> *What happens in this step? You check your data because you are thorough!*
- Good thing dbt has already built this for you. Add a [unique test](/docs/build/tests#generic-tests) to your YAML model block for your `grain_id` in this de-duped staging model, and give it a dbt test!
+ Good thing dbt has already built this for you. Add a [unique test](/docs/build/data-tests#generic-data-tests) to your YAML model block for your `grain_id` in this de-duped staging model, and give it a dbt test!

```yaml
models:
```
2 changes: 1 addition & 1 deletion website/blog/2022-09-28-analyst-to-ae.md
@@ -111,7 +111,7 @@ The analyst caught the issue because they have the appropriate context to valida

An analyst is able to identify which areas do *not* need to be 100% accurate, which means they can also identify which areas *do* need to be 100% accurate.

- > dbt makes it very quick to add [data quality tests](/docs/build/tests). In fact, it’s so quick, that it’ll take an analyst longer to write up what tests they want than it would take for an analyst to completely finish coding them.
+ > dbt makes it very quick to add [data quality tests](/docs/build/data-tests). In fact, it’s so quick, that it’ll take an analyst longer to write up what tests they want than it would take for an analyst to completely finish coding them.
When data quality issues are identified by the business, we often see that analysts are the first ones to be asked:

@@ -133,9 +133,9 @@ This model tries to parse the raw string value into a Python datetime. When not

#### Testing the result

- During the build process, dbt will check if any of the values are null. This is using the built-in [`not_null`](https://docs.getdbt.com/docs/building-a-dbt-project/tests#generic-tests) test, which will generate and execute SQL in the data platform.
+ During the build process, dbt will check if any of the values are null. This is using the built-in [`not_null`](https://docs.getdbt.com/docs/building-a-dbt-project/tests#generic-data-tests) test, which will generate and execute SQL in the data platform.

- Our initial recommendation for testing Python models is to use [generic](https://docs.getdbt.com/docs/building-a-dbt-project/tests#generic-tests) and [singular](https://docs.getdbt.com/docs/building-a-dbt-project/tests#singular-tests) tests.
+ Our initial recommendation for testing Python models is to use [generic](https://docs.getdbt.com/docs/building-a-dbt-project/tests#generic-data-tests) and [singular](https://docs.getdbt.com/docs/building-a-dbt-project/tests#singular-data-tests) tests.

```yaml
version: 2
```
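
A singular test is complementary: it is just a SQL file that returns failing rows, and dbt passes the test when the query returns nothing. A minimal sketch (the file, model, and column names here are hypothetical):

```sql
-- tests/assert_all_dates_parsed.sql (hypothetical singular test)
-- any row returned by this query counts as a failure
select *
from {{ ref('my_python_model') }}
where parsed_date is null
```
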
2 changes: 1 addition & 1 deletion website/blog/2023-01-24-aggregating-test-failures.md
@@ -30,7 +30,7 @@ _It should be noted that this framework is for dbt v1.0+ on BigQuery. Small adap

When we talk about high quality data tests, we aren’t just referencing high quality code, but rather the informational quality of our testing framework and their corresponding error messages. Originally, we theorized that any test that cannot be acted upon is a test that should not be implemented. Later, we realized there is a time and place for tests that should receive attention at a critical mass of failures. All we needed was a higher specificity system: tests should have an explicit severity ranking associated with them, equipped to filter out the noise of common, but low concern, failures. Each test should also mesh into established [RACI](https://project-management.com/understanding-responsibility-assignment-matrix-raci-matrix/) guidelines that state which groups tackle what failures, and what constitutes a critical mass.

- To ensure that tests are always acted upon, we implement tests differently depending on the user groups that must act when a test fails. This led us to have two main classes of tests — Data Integrity Tests (called [Generic Tests](https://docs.getdbt.com/docs/build/tests) in dbt docs) and Context Driven Tests (called [Singular Tests](https://docs.getdbt.com/docs/build/tests#singular-tests) in dbt docs), with varying levels of severity across both test classes.
+ To ensure that tests are always acted upon, we implement tests differently depending on the user groups that must act when a test fails. This led us to have two main classes of tests — Data Integrity Tests (called [Generic Tests](https://docs.getdbt.com/docs/build/tests) in dbt docs) and Context Driven Tests (called [Singular Tests](https://docs.getdbt.com/docs/build/tests#singular-data-tests) in dbt docs), with varying levels of severity across both test classes.

Data Integrity tests (Generic Tests) are simple — they’re tests akin to a uniqueness check or not null constraint. These tests are usually actionable by the data platform team rather than subject matter experts. We define Data Integrity tests in our YAML files, similar to how they are [outlined by dbt’s documentation on generic tests](https://docs.getdbt.com/docs/build/tests). They look something like this —
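
For instance, a minimal sketch with hypothetical model and column names, carrying the explicit severity ranking described above:

```yaml
models:
  - name: dim_customers           # hypothetical model name
    columns:
      - name: customer_id         # hypothetical column name
        tests:
          - unique:
              config:
                severity: error   # act on every failure
          - not_null:
              config:
                severity: warn    # act at a critical mass of failures
```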

@@ -16,6 +16,8 @@ This article covers an approach to handling time-varying ragged hierarchies in a

To help visualize this data, we're going to pretend we are a company that manufactures and rents out eBikes in a ride share application. When we build a bike, we keep track of the serial numbers of the components that make up the bike. Any time something breaks and needs to be replaced, we track the old parts that were removed and the new parts that were installed. We also precisely track the mileage accumulated on each of our bikes. Our primary analytical goal is to be able to report on the expected lifetime of each component, so we can prioritize improving that component and reduce costly maintenance.

+ <!--truncate-->

## Data model

Obviously, a real bike could have a hundred or more separate components. To keep things simple for this article, let's just consider the bike, the frame, a wheel, the wheel rim, tire, and tube. Our component hierarchy looks like:
2 changes: 1 addition & 1 deletion website/blog/2023-07-03-data-vault-2-0-with-dbt-cloud.md
@@ -143,7 +143,7 @@ To help you get started, [we have created a template GitHub project](https://git

### Entity Relation Diagrams (ERDs) and dbt

- Data lineage is dbt's strength, but sometimes it's not enough to help you to understand the relationships between Data Vault components like a classic ERD would. There are a few open source packages to visualize the entities in your Data Vault built with dbt. I recommend checking out the [dbterd](https://dbterd.datnguyen.de/1.2/index.html) which turns your [dbt relationship data quality checks](https://docs.getdbt.com/docs/build/tests#generic-tests) into an ERD.
+ Data lineage is dbt's strength, but sometimes it's not enough to help you to understand the relationships between Data Vault components like a classic ERD would. There are a few open source packages to visualize the entities in your Data Vault built with dbt. I recommend checking out the [dbterd](https://dbterd.datnguyen.de/1.2/index.html) which turns your [dbt relationship data quality checks](https://docs.getdbt.com/docs/build/tests#generic-data-tests) into an ERD.
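
A relationship check of that kind is declared in YAML; a minimal sketch with hypothetical model and column names (dbterd can use the `to`/`field` pair to draw an edge between the two entities):

```yaml
models:
  - name: link_customer_order         # hypothetical Data Vault link
    columns:
      - name: hub_customer_key
        tests:
          - relationships:
              to: ref('hub_customer')   # parent entity
              field: hub_customer_key   # matching column on the parent
```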

## Summary

8 changes: 6 additions & 2 deletions website/blog/2023-10-31-to-defer-or-to-clone.md
@@ -87,7 +87,7 @@ Using the cheat sheet above, let’s explore a few common scenarios and explore
1. Make a copy of our production dataset available in our downstream BI tool
2. To safely iterate on this copy without breaking production datasets

- Therefore, we should use **clone** in this scenario
+ Therefore, we should use **clone** in this scenario.
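
On the command line, that might look like the following sketch, assuming the production run's artifacts were saved to a `prod-run-artifacts/` directory:

```bash
# copy production relations into the current target schema,
# locating them via the manifest from the production run
dbt clone --state prod-run-artifacts
```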

2. **[Slim CI](https://discourse.getdbt.com/t/how-we-sped-up-our-ci-runs-by-10x-using-slim-ci/2603)**

@@ -96,7 +96,11 @@ Using the cheat sheet above, let’s explore a few common scenarios and explore
2. Only run and test models in the CI staging environment that have changed from the production environment
3. Reference models from different environments – prod for unchanged models, and staging for modified models

- Therefore, we should use **defer** in this scenario
+ Therefore, we should use **defer** in this scenario.
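
A typical Slim CI invocation is sketched below, again assuming production artifacts in `prod-run-artifacts/`:

```bash
# run and test only models changed relative to production (plus descendants),
# resolving refs to unchanged models against the production environment
dbt build --select state:modified+ --defer --state prod-run-artifacts
```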

+ :::tip Use `dbt clone` in CI jobs to test incremental models
+ Learn how to [use `dbt clone` in CI jobs](/best-practices/clone-incremental-models) to efficiently test modified incremental models, simulating post-merge behavior while avoiding full-refresh costs.
+ :::

3. **[Blue/Green Deployments](https://discourse.getdbt.com/t/performing-a-blue-green-deploy-of-your-dbt-project-on-snowflake/1349)**

@@ -1,5 +1,5 @@
---
- title: "How we built consistent product launch metrics with the dbt Semantic Layer."
+ title: "How we built consistent product launch metrics with the dbt Semantic Layer"
description: "We built an end-to-end data pipeline for measuring the launch of the dbt Semantic Layer using the dbt Semantic Layer."
slug: product-analytics-pipeline-with-dbt-semantic-layer

12 changes: 6 additions & 6 deletions website/docs/best-practices/custom-generic-tests.md
@@ -1,15 +1,15 @@
---
- title: "Writing custom generic tests"
+ title: "Writing custom generic data tests"
id: "writing-custom-generic-tests"
- description: Learn how to define your own custom generic tests.
- displayText: Writing custom generic tests
- hoverSnippet: Learn how to define your own custom generic tests.
+ description: Learn how to define your own custom generic data tests.
+ displayText: Writing custom generic data tests
+ hoverSnippet: Learn how to write your own custom generic data tests.
---

- dbt ships with [Not Null](/reference/resource-properties/tests#not-null), [Unique](/reference/resource-properties/tests#unique), [Relationships](/reference/resource-properties/tests#relationships), and [Accepted Values](/reference/resource-properties/tests#accepted-values) generic tests. (These used to be called "schema tests," and you'll still see that name in some places.) Under the hood, these generic tests are defined as `test` blocks (like macros) in a globally accessible dbt project. You can find the source code for these tests in the [global project](https://github.com/dbt-labs/dbt-core/tree/main/core/dbt/include/global_project/macros/generic_test_sql).
+ dbt ships with [Not Null](/reference/resource-properties/data-tests#not-null), [Unique](/reference/resource-properties/data-tests#unique), [Relationships](/reference/resource-properties/data-tests#relationships), and [Accepted Values](/reference/resource-properties/data-tests#accepted-values) generic data tests. (These used to be called "schema tests," and you'll still see that name in some places.) Under the hood, these generic data tests are defined as `test` blocks (like macros) in a globally accessible dbt project. You can find the source code for these tests in the [global project](https://github.com/dbt-labs/dbt-core/tree/main/core/dbt/include/global_project/macros/generic_test_sql).

:::info
- There are tons of generic tests defined in open source packages, such as [dbt-utils](https://hub.getdbt.com/dbt-labs/dbt_utils/latest/) and [dbt-expectations](https://hub.getdbt.com/calogica/dbt_expectations/latest/) — the test you're looking for might already be here!
+ There are tons of generic data tests defined in open source packages, such as [dbt-utils](https://hub.getdbt.com/dbt-labs/dbt_utils/latest/) and [dbt-expectations](https://hub.getdbt.com/calogica/dbt_expectations/latest/) — the test you're looking for might already be here!
:::
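
As a sketch of the `test` block shape, here is a hypothetical `is_even` test; the first argument is always the model being tested, and `column_name` is supplied from the YAML where the test is applied:

```sql
-- tests/generic/is_even.sql (hypothetical custom generic test)
{% test is_even(model, column_name) %}

select {{ column_name }}
from {{ model }}
-- failing rows: values that are not even
where ({{ column_name }} % 2) != 0

{% endtest %}
```

Once defined, it can be attached to a column in YAML just like the built-in generic tests.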

### Generic tests with standard arguments