diff --git a/website/docs/docs/build/packages.md b/website/docs/docs/build/packages.md
index 7803e6851ec..63e6ee17a31 100644
--- a/website/docs/docs/build/packages.md
+++ b/website/docs/docs/build/packages.md
@@ -161,7 +161,7 @@ Where `name: 'dbt_utils'` specifies the subfolder of `dbt_packages` that's creat
### Native private packages
-dbt Cloud supports private packages from [supported](#prerequisites) Git repos leveraging an exisiting [configuration](/docs/cloud/git/git-configuration-in-dbt-cloud) in your environment. Previously, you had to configure a [token](#git-token-method) to retrieve packages from your private repos.
+dbt Cloud supports private packages from [supported](#prerequisites) Git repos leveraging an existing [configuration](/docs/cloud/git/git-configuration-in-dbt-cloud) in your environment. Previously, you had to configure a [token](#git-token-method) to retrieve packages from your private repos.
#### Prerequisites
diff --git a/website/docs/docs/cloud-integrations/about-snowflake-native-app.md b/website/docs/docs/cloud-integrations/about-snowflake-native-app.md
index 86ee6a7d630..9eb1179897e 100644
--- a/website/docs/docs/cloud-integrations/about-snowflake-native-app.md
+++ b/website/docs/docs/cloud-integrations/about-snowflake-native-app.md
@@ -46,7 +46,7 @@ App users are able to access all information that's available to the API service
## Procurement
The dbt Snowflake Native App is available on the [Snowflake Marketplace](https://app.snowflake.com/marketplace/listing/GZTYZSRT2R3). Purchasing it includes access to the Native App and a dbt Cloud account that's on the Enterprise plan. Existing dbt Cloud Enterprise customers can also access it. If interested, contact your Enterprise account manager.
-If you're interested, please [contact us](matilto:sales_snowflake_marketplace@dbtlabs.com) for more information.
+If you're interested, please [contact us](mailto:sales_snowflake_marketplace@dbtlabs.com) for more information.
## Support
If you have any questions about the dbt Snowflake Native App, you may [contact our Support team](mailto:dbt-snowflake-marketplace@dbtlabs.com) for help. Please provide information about your installation of the Native App, including your dbt Cloud account ID and Snowflake account identifier.
diff --git a/website/docs/docs/cloud/git/setup-azure.md b/website/docs/docs/cloud/git/setup-azure.md
index 273660ba3dd..c6213b49453 100644
--- a/website/docs/docs/cloud/git/setup-azure.md
+++ b/website/docs/docs/cloud/git/setup-azure.md
@@ -155,7 +155,7 @@ The service user's permissions will also power which repositories a team can sel
While it's common to enforce multi-factor authentication (MFA) for normal user accounts, service user authentication must not require an extra factor. If you enable a second factor for the service user, it can interrupt production runs and cause failures to clone the repository. For the OAuth access token to work, the best practice is to remove any additional burden of proof of identity for service users.
-As a result, MFA must be explicity disabled in the Office 365 or Microsoft Entra ID administration panel for the service user. Just having it "un-connected" will not be sufficient, as dbt Cloud will be prompted to set up MFA instead of allowing the credentials to be used as intended.
+As a result, MFA must be explicitly disabled in the Office 365 or Microsoft Entra ID administration panel for the service user. Just having it "un-connected" will not be sufficient, as dbt Cloud will be prompted to set up MFA instead of allowing the credentials to be used as intended.
**To disable MFA for a single user using the Office 365 Administration console:**
diff --git a/website/docs/docs/cloud/manage-access/external-oauth.md b/website/docs/docs/cloud/manage-access/external-oauth.md
index 380d0a3d1cc..c25b44d1513 100644
--- a/website/docs/docs/cloud/manage-access/external-oauth.md
+++ b/website/docs/docs/cloud/manage-access/external-oauth.md
@@ -144,7 +144,7 @@ Adjust the other settings as needed to meet your organization's configurations i
1. Navigate back to the dbt Cloud **Account settings** —> **Integrations** page you were on at the beginning. It’s time to start filling out all of the fields.
1. `Integration name`: Give the integration a descriptive name that includes identifying information about the Okta environment so future users won’t have to guess where it belongs.
2. `Client ID` and `Client secrets`: Retrieve these from the Okta application page.
-
+
3. Authorize URL and Token URL: Found in the metadata URI.
diff --git a/website/docs/docs/cloud/manage-access/invite-users.md b/website/docs/docs/cloud/manage-access/invite-users.md
index 0922b4dc991..b9a12bae7c6 100644
--- a/website/docs/docs/cloud/manage-access/invite-users.md
+++ b/website/docs/docs/cloud/manage-access/invite-users.md
@@ -66,7 +66,7 @@ Once the user completes this process, their email and user information will popu
* Is there a limit to the number of users I can invite? _Your ability to invite users is limited to the number of licenses you have available._
* Why are users clicking the invitation link and getting an `Invalid Invitation Code` error? _We have seen scenarios where embedded secure link technology (such as enterprise Outlook's [Safe Links](https://learn.microsoft.com/en-us/microsoft-365/security/office-365-security/safe-links-about?view=o365-worldwide) feature) can result in errors when clicking on the email link. Be sure to include the `getdbt.com` URL in the allowlists for these services._
-* Can I have a mixure of users with SSO and username/password authentication? _Once SSO is enabled, you will no longer be able to add local users. If you have contractors or similar contingent workers, we recommend you add them to your SSO service._
+* Can I have a mixture of users with SSO and username/password authentication? _Once SSO is enabled, you will no longer be able to add local users. If you have contractors or similar contingent workers, we recommend you add them to your SSO service._
* What happens if I need to resend the invitation? _From the Users page, click on the invite record, and you will be presented with the option to resend the invitation._
* What can I do if I entered an email address incorrectly? _From the Users page, click on the invite record, and you will be presented with the option to revoke it. Once revoked, generate a new invitation to the correct email address._
diff --git a/website/docs/docs/cloud/manage-access/mfa.md b/website/docs/docs/cloud/manage-access/mfa.md
index bcddc04f072..644fcdb32c2 100644
--- a/website/docs/docs/cloud/manage-access/mfa.md
+++ b/website/docs/docs/cloud/manage-access/mfa.md
@@ -58,7 +58,7 @@ Choose the next steps based on your preferred enrollment selection:
2. Follow the instructions in the modal window and click **Use security key**.
-
+
3. Scan the QR code, or insert your USB key and touch it to activate, to begin the process. Follow the on-screen prompts.
diff --git a/website/docs/docs/collaborate/data-tile.md b/website/docs/docs/collaborate/data-tile.md
index 0edd9d7c44e..077a4f5a740 100644
--- a/website/docs/docs/collaborate/data-tile.md
+++ b/website/docs/docs/collaborate/data-tile.md
@@ -63,7 +63,7 @@ Follow these steps to set up your data health tile:
6. Navigate back to dbt Explorer and select an exposure.
7. Below the **Data health** section, expand the toggle for instructions on how to embed the exposure tile (if you're an account admin with develop permissions).
8. In the expanded toggle, you'll see a text field where you can paste your **Metadata Only token**.
-
+
9. Once you’ve pasted your token, you can select either **URL** or **iFrame** depending on which you need to add to your dashboard.
diff --git a/website/docs/docs/collaborate/explore-multiple-projects.md b/website/docs/docs/collaborate/explore-multiple-projects.md
index b15e133a49e..3a0cce8a9e6 100644
--- a/website/docs/docs/collaborate/explore-multiple-projects.md
+++ b/website/docs/docs/collaborate/explore-multiple-projects.md
@@ -27,7 +27,7 @@ When viewing a downstream (child) project that imports and refs public models fr
- Clicking on a model opens a side panel containing general information about the model, such as the specific dbt Cloud project that produces that model, description, package, and more.
- Double-clicking on a model from another project opens the resource-level lineage graph of the parent project, if you have the permissions to do so.
-
+
## Explore the project-level lineage graph
diff --git a/website/docs/docs/collaborate/govern/model-versions.md b/website/docs/docs/collaborate/govern/model-versions.md
index 0bd16a03b3a..35bb7e047c8 100644
--- a/website/docs/docs/collaborate/govern/model-versions.md
+++ b/website/docs/docs/collaborate/govern/model-versions.md
@@ -14,7 +14,7 @@ This functionality is new in v1.5 — if you have thoughts, participate in [the
-import VersionsCallout from '/snippets/_version-callout.md';
+import VersionsCallout from '/snippets/_model-version-callout.md';
diff --git a/website/docs/docs/core/connect-data-platform/azuresynapse-setup.md b/website/docs/docs/core/connect-data-platform/azuresynapse-setup.md
index 0a0347df9ea..0c22209d75c 100644
--- a/website/docs/docs/core/connect-data-platform/azuresynapse-setup.md
+++ b/website/docs/docs/core/connect-data-platform/azuresynapse-setup.md
@@ -55,7 +55,7 @@ Microsoft made several changes related to connection encryption. Read more about
### Authentication methods
This adapter is based on the adapter for Microsoft SQL Server.
-Therefor, the same authentication methods are supported.
+Therefore, the same authentication methods are supported.
The configuration is the same except for one major difference:
instead of specifying `type: sqlserver`, you specify `type: synapse`.
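The `type: synapse` swap lands in `profiles.yml`. A minimal sketch of what such a profile could look like — server, database, and authentication values are placeholders, and the exact option list should be checked against the Synapse setup reference for your adapter version:

```yaml
your_profile_name:
  target: dev
  outputs:
    dev:
      type: synapse            # instead of `type: sqlserver`
      driver: 'ODBC Driver 18 for SQL Server'
      server: your-workspace.sql.azuresynapse.net
      port: 1433
      database: your_db
      schema: analytics
      authentication: CLI      # or another supported SQL Server auth method
```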
diff --git a/website/docs/docs/core/connect-data-platform/ibmdb2-setup.md b/website/docs/docs/core/connect-data-platform/ibmdb2-setup.md
index 692342466b0..c9c91d3ef5b 100644
--- a/website/docs/docs/core/connect-data-platform/ibmdb2-setup.md
+++ b/website/docs/docs/core/connect-data-platform/ibmdb2-setup.md
@@ -65,7 +65,7 @@ your_profile_name:
| type | The specific adapter to use | Required | `ibmdb2` |
| schema | Specify the schema (database) to build models into | Required | `analytics` |
| database | Specify the database you want to connect to | Required | `testdb` |
-| host | Hostname or IP-adress | Required | `localhost` |
+| host | Hostname or IP address | Required | `localhost` |
| port | The port to use | Optional | `50000` |
| protocol | Protocol to use | Optional | `TCPIP` |
| username | The username to use to connect to the server | Required | `my-username` |
diff --git a/website/docs/docs/core/connect-data-platform/layer-setup.md b/website/docs/docs/core/connect-data-platform/layer-setup.md
index 051094297a2..9514d6bb9e6 100644
--- a/website/docs/docs/core/connect-data-platform/layer-setup.md
+++ b/website/docs/docs/core/connect-data-platform/layer-setup.md
@@ -83,7 +83,7 @@ _Parameters:_
| Syntax | Description |
| --------- |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `MODEL_TYPE` | Type of the model your want to train. There are two options:
- `classifier`: A model to predict classes/labels or categories such as spam detection
- `regressor`: A model to predict continious outcomes such as CLV prediction. |
+| `MODEL_TYPE` | Type of the model you want to train. There are two options:
- `classifier`: A model to predict classes/labels or categories such as spam detection
- `regressor`: A model to predict continuous outcomes such as CLV prediction. |
| `FEATURES` | Input column names as a list to train your AutoML model. |
| `TARGET` | Target column that you want to predict. |
diff --git a/website/docs/docs/core/connect-data-platform/postgres-setup.md b/website/docs/docs/core/connect-data-platform/postgres-setup.md
index b6f34a00e0b..ef6b42d6236 100644
--- a/website/docs/docs/core/connect-data-platform/postgres-setup.md
+++ b/website/docs/docs/core/connect-data-platform/postgres-setup.md
@@ -68,7 +68,7 @@ The `role` config controls the Postgres role that dbt assumes when opening new c
#### sslmode
-The `sslmode` config controls how dbt connectes to Postgres databases using SSL. See [the Postgres docs](https://www.postgresql.org/docs/9.1/libpq-ssl.html) on `sslmode` for usage information. When unset, dbt will connect to databases using the Postgres default, `prefer`, as the `sslmode`.
+The `sslmode` config controls how dbt connects to Postgres databases using SSL. See [the Postgres docs](https://www.postgresql.org/docs/9.1/libpq-ssl.html) on `sslmode` for usage information. When unset, dbt will connect to databases using the Postgres default, `prefer`, as the `sslmode`.
#### sslcert
@@ -99,7 +99,7 @@ If `dbt-postgres` encounters an operational error or timeout when opening a new
`psycopg2-binary` is installed by default when installing `dbt-postgres`.
Installing `psycopg2-binary` uses a pre-built version of `psycopg2` which may not be optimized for your particular machine.
This is ideal for development and testing workflows where performance is less of a concern and speed and ease of install is more important.
-However, production environments will benefit from a version of `psycopg2` which is built from source for your particular operating system and archtecture. In this scenario, speed and ease of install is less important as the on-going usage is the focus.
+However, production environments will benefit from a version of `psycopg2` built from source for your particular operating system and architecture. In this scenario, speed and ease of install matter less because ongoing usage is the focus.
diff --git a/website/docs/docs/core/connect-data-platform/spark-setup.md b/website/docs/docs/core/connect-data-platform/spark-setup.md
index 611642e91b7..97bba29e66e 100644
--- a/website/docs/docs/core/connect-data-platform/spark-setup.md
+++ b/website/docs/docs/core/connect-data-platform/spark-setup.md
@@ -25,7 +25,7 @@ import SetUpPages from '/snippets/_setup-pages-intro.md';
-If connecting to Databricks via ODBC driver, it requires `pyodbc`. Depending on your system, you can install it seperately or via pip. See the [`pyodbc` wiki](https://github.com/mkleehammer/pyodbc/wiki/Install) for OS-specific installation details.
+Connecting to Databricks via the ODBC driver requires `pyodbc`. Depending on your system, you can install it separately or via pip. See the [`pyodbc` wiki](https://github.com/mkleehammer/pyodbc/wiki/Install) for OS-specific installation details.
If connecting to a Spark cluster via the generic thrift or http methods, it requires `PyHive`.
diff --git a/website/docs/docs/core/connect-data-platform/upsolver-setup.md b/website/docs/docs/core/connect-data-platform/upsolver-setup.md
index 8e4203e0b0c..164d46ee8af 100644
--- a/website/docs/docs/core/connect-data-platform/upsolver-setup.md
+++ b/website/docs/docs/core/connect-data-platform/upsolver-setup.md
@@ -10,7 +10,7 @@ meta:
min_core_version: 'v1.5.0'
cloud_support: Not Supported
min_supported_version: 'n/a'
- slack_channel_name: 'Upsolver Comunity'
+ slack_channel_name: 'Upsolver Community'
slack_channel_link: 'https://join.slack.com/t/upsolvercommunity/shared_invite/zt-1zo1dbyys-hj28WfaZvMh4Z4Id3OkkhA'
platform_name: 'Upsolver'
config_page: '/reference/resource-configs/upsolver-configs'
diff --git a/website/docs/docs/core/pip-install.md b/website/docs/docs/core/pip-install.md
index 6d94d92a64b..fa16ca13536 100644
--- a/website/docs/docs/core/pip-install.md
+++ b/website/docs/docs/core/pip-install.md
@@ -29,9 +29,9 @@ dbt-env\Scripts\activate # activate the environment for Windows
#### Create an alias
-To activate your dbt environment with every new shell window or session, you can create an alias for the source command in your $HOME/.bashrc, $HOME/.zshrc, or whichever config file your shell draws from.
+To activate your dbt environment with every new shell window or session, you can create an alias for the source command in your `$HOME/.bashrc`, `$HOME/.zshrc`, or whichever config file your shell draws from.
-For example, add the following to your rc file, replacing <PATH_TO_VIRTUAL_ENV_CONFIG> with the path to your virtual environment configuration.
+For example, add the following to your rc file, replacing `<PATH_TO_VIRTUAL_ENV_CONFIG>` with the path to your virtual environment configuration.
```shell
alias env_dbt='source <PATH_TO_VIRTUAL_ENV_CONFIG>/bin/activate'
diff --git a/website/docs/docs/dbt-cloud-apis/authentication.md b/website/docs/docs/dbt-cloud-apis/authentication.md
index 43a08d84fd7..e817512c1fc 100644
--- a/website/docs/docs/dbt-cloud-apis/authentication.md
+++ b/website/docs/docs/dbt-cloud-apis/authentication.md
@@ -31,7 +31,7 @@ pagination_prev: null
You should use service tokens broadly for any production workflow where you need a service account. You should use PATs only for developmental workflows _or_ dbt Cloud client workflows that require user context. The following examples show you when to use a personal access token (PAT) or a service token:
-* **Connecting a partner integration to dbt Cloud** — Some examples include the [dbt Semantic Layer Google Sheets integration](/docs/cloud-integrations/avail-sl-integrations), Hightouch, Datafold, a custom app you’ve created, etc. These types of integrations should use a service token instead of a PAT because service tokens give you visibility, and you can scope them to only what the integration needs and ensure the least privilege. We highly recommend switching to a service token if you’re using a personal acess token for these integrations today.
+* **Connecting a partner integration to dbt Cloud** — Some examples include the [dbt Semantic Layer Google Sheets integration](/docs/cloud-integrations/avail-sl-integrations), Hightouch, Datafold, a custom app you’ve created, etc. These types of integrations should use a service token instead of a PAT because service tokens give you visibility, and you can scope them to only what the integration needs and ensure the least privilege. We highly recommend switching to a service token if you’re using a personal access token for these integrations today.
* **Production Terraform** — Use a service token since this is a production workflow and is acting as a service account and not a user account.
* **Cloud CLI** — Use a PAT since the dbt Cloud CLI works within the context of a user (the user is making the requests and has to operate within the context of their user account).
* **Testing a custom script and staging Terraform or Postman** — We recommend using a PAT as this is a developmental workflow and is scoped to the user making the changes. When you push this script or Terraform into production, use a service token instead.
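Whichever token type you choose for the workflows above, clients send it the same way. A minimal sketch of building request headers for a dbt Cloud API call, assuming the `Token` authorization scheme described in the API docs (the token value is a placeholder):

```python
def dbt_cloud_headers(token: str) -> dict:
    """Build auth headers for a dbt Cloud API request.

    Works the same whether you pass a service token or a PAT;
    which one to use is the governance decision described above.
    """
    return {
        "Authorization": f"Token {token}",
        "Accept": "application/json",
    }

# Example with a placeholder service token
headers = dbt_cloud_headers("svc_abc123")
print(headers["Authorization"])  # Token svc_abc123
```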
diff --git a/website/docs/docs/dbt-versions/2022-release-notes.md b/website/docs/docs/dbt-versions/2022-release-notes.md
index b46c259a6d8..f180f664372 100644
--- a/website/docs/docs/dbt-versions/2022-release-notes.md
+++ b/website/docs/docs/dbt-versions/2022-release-notes.md
@@ -51,7 +51,7 @@ packages:
-## Novemver 2022
+## November 2022
### The dbt Cloud + Databricks experience is getting even better
@@ -241,4 +241,4 @@ We started the new year with a gift! Multi-tenant Team and Enterprise accounts c
#### Performance improvements and enhancements
-* We added client-side naming validation for file or folder creation.
\ No newline at end of file
+* We added client-side naming validation for file or folder creation.
diff --git a/website/docs/docs/dbt-versions/2023-release-notes.md b/website/docs/docs/dbt-versions/2023-release-notes.md
index ec635a051dc..4dd10c36b5c 100644
--- a/website/docs/docs/dbt-versions/2023-release-notes.md
+++ b/website/docs/docs/dbt-versions/2023-release-notes.md
@@ -35,7 +35,7 @@ Archived release notes for dbt Cloud from 2023
To learn more, refer to [Extended attributes](/docs/dbt-cloud-environments#extended-attributes).
- The **Extended Atrributes** text box is available from your environment's settings page:
+ The **Extended Attributes** text box is available from your environment's settings page:
@@ -183,7 +183,7 @@ Archived release notes for dbt Cloud from 2023
Previously in dbt Cloud, you could only rerun an errored job from start but now you can also rerun it from its point of failure.
- You can view which job failed to complete successully, which command failed in the run step, and choose how to rerun it. To learn more, refer to [Retry jobs](/docs/deploy/retry-jobs).
+ You can view which job failed to complete successfully, which command failed in the run step, and choose how to rerun it. To learn more, refer to [Retry jobs](/docs/deploy/retry-jobs).
@@ -812,7 +812,7 @@ Archived release notes for dbt Cloud from 2023
--
+-
The dbt Cloud Scheduler now prevents queue clog by canceling unnecessary runs of over-scheduled jobs.
diff --git a/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md b/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md
index 9a4712af528..2a4a9d96528 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md
@@ -92,7 +92,7 @@ You can read more about each of these behavior changes in the following links:
- (Introduced, disabled by default) [`skip_nodes_if_on_run_start_fails` project config flag](/reference/global-configs/behavior-changes#behavior-change-flags). If the flag is set and **any** `on-run-start` hook fails, mark all selected nodes as skipped.
- `on-run-start/end` hooks are **always** run, regardless of whether they passed or failed last time.
- (Introduced, disabled by default) [[Redshift] `restrict_direct_pg_catalog_access`](/reference/global-configs/behavior-changes#redshift-restrict_direct_pg_catalog_access). If the flag is set the adapter will use the Redshift API (through the Python client) if available, or query Redshift's `information_schema` tables instead of using `pg_` tables.
-- (Introduced, disabled by default) [`require_nested_cumulative_type_params`](/reference/global-configs/behavior-changes#cumulative-metrics). If the flag is set to `True`, users will receive an error instead of a warning if they're not proprly formatting cumulative metrics using the new [`cumulative_type_params`](/docs/build/cumulative#parameters) nesting.
+- (Introduced, disabled by default) [`require_nested_cumulative_type_params`](/reference/global-configs/behavior-changes#cumulative-metrics). If the flag is set to `True`, users will receive an error instead of a warning if they're not properly formatting cumulative metrics using the new [`cumulative_type_params`](/docs/build/cumulative#parameters) nesting.
- (Introduced, disabled by default) [`require_batched_execution_for_custom_microbatch_strategy`](/reference/global-configs/behavior-changes#custom-microbatch-strategy). Set to `True` if you use a custom microbatch macro to enable batched execution. If you don't have a custom microbatch macro, you don't need to set this flag as dbt will handle microbatching automatically for any model using the microbatch strategy.
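Behavior change flags like the ones above are set under `flags` in `dbt_project.yml`. An illustrative sketch — the values shown are the opt-in settings, not the defaults:

```yaml
# dbt_project.yml
flags:
  skip_nodes_if_on_run_start_fails: true
  require_nested_cumulative_type_params: true
  require_batched_execution_for_custom_microbatch_strategy: true
```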
## Adapter specific features and functionalities
diff --git a/website/docs/docs/dbt-versions/core-upgrade/10-upgrading-to-v1.5.md b/website/docs/docs/dbt-versions/core-upgrade/10-upgrading-to-v1.5.md
index 6139cdcfc6f..11c78bd4bfa 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/10-upgrading-to-v1.5.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/10-upgrading-to-v1.5.md
@@ -110,7 +110,7 @@ The built-in [collect_freshness](https://github.com/dbt-labs/dbt-core/blob/1.5.l
{{ return(load_result('collect_freshness')) }}
```
-Finally: The [built-in `generate_alias_name` macro](https://github.com/dbt-labs/dbt-core/blob/1.5.latest/core/dbt/include/global_project/macros/get_custom_name/get_custom_alias.sql) now includes logic to handle versioned models. If your project has reimplemented the `generate_alias_name` macro with custom logic, and you want to start using [model versions](/docs/collaborate/govern/model-versions), you will need to update the logic in your macro. Note that, while this is **not** a prerequisite for upgrading to v1.5—only for using the new feature—we recommmend that you do this during your upgrade, whether you're planning to use model versions tomorrow or far in the future.
+Finally: The [built-in `generate_alias_name` macro](https://github.com/dbt-labs/dbt-core/blob/1.5.latest/core/dbt/include/global_project/macros/get_custom_name/get_custom_alias.sql) now includes logic to handle versioned models. If your project has reimplemented the `generate_alias_name` macro with custom logic, and you want to start using [model versions](/docs/collaborate/govern/model-versions), you will need to update the logic in your macro. Note that, while this is **not** a prerequisite for upgrading to v1.5—only for using the new feature—we recommend that you do this during your upgrade, whether you're planning to use model versions tomorrow or far in the future.
Likewise, if your project has reimplemented the `ref` macro with custom logic, you will need to update the logic in your macro as described [here](https://docs.getdbt.com/reference/dbt-jinja-functions/builtins).
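For reference, the version-aware logic the built-in macro gained looks roughly like the following. This is a paraphrase of the linked v1.5 source, not a drop-in override — compare it against the actual macro before adapting your custom logic:

```sql
{% macro generate_alias_name(custom_alias_name=none, node=none) -%}
    {%- if custom_alias_name -%}
        {{ custom_alias_name | trim }}
    {%- elif node.version -%}
        {#- versioned models get a `_v<version>` suffix by default -#}
        {{ node.name ~ '_v' ~ (node.version | replace('.', '_')) }}
    {%- else -%}
        {{ node.name }}
    {%- endif -%}
{%- endmacro %}
```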
diff --git a/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-16-0.md b/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-16-0.md
index d6fc6f9f49a..d610cdb4455 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-16-0.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-16-0.md
@@ -80,7 +80,7 @@ The `snowflake__list_schemas` macro should now return an Agate dataframe with a
column named `"name"`. If you are overriding the `snowflake__list_schemas` macro in your
project, you can find more information about this change in [this pull request](https://github.com/dbt-labs/dbt-core/pull/2171).
-### Snowflake databases wih 10,000 schemas
+### Snowflake databases with 10,000 schemas
dbt no longer supports running against Snowflake databases containing more than
10,000 schemas. This is due to limitations of the `show schemas in database` query
that dbt now uses to find schemas in a Snowflake database. If your dbt project
diff --git a/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-17-0.md b/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-17-0.md
index 6a19bdcf808..00d6a70bd05 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-17-0.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/11-Older versions/upgrading-to-0-17-0.md
@@ -237,7 +237,7 @@ modules, please be mindful of the following changes to dbt's Python
dependencies:
Core:
-- Pinned `Jinja2` depdendency to `2.11.2`
+- Pinned `Jinja2` dependency to `2.11.2`
- Pinned `hologram` to `0.0.7`
- Require Python >= `3.6.3`
diff --git a/website/docs/docs/dbt-versions/release-notes/98-dbt-cloud-changelog-2021.md b/website/docs/docs/dbt-versions/release-notes/98-dbt-cloud-changelog-2021.md
index 996229807a1..f4ea44c6b95 100644
--- a/website/docs/docs/dbt-versions/release-notes/98-dbt-cloud-changelog-2021.md
+++ b/website/docs/docs/dbt-versions/release-notes/98-dbt-cloud-changelog-2021.md
@@ -326,7 +326,7 @@ Rolling out a few long-term bets to ensure that our beloved dbt Cloud does not f
- Fix NoSuchKey error
- Guarantee unique notification settings per account, user, and type
- Fix for account notification settings
-- Dont show deleted projects on notifications page
+- Don't show deleted projects on notifications page
- Fix unicode error while decoding last_chunk
- Show more relevant errors to customers
- Groups are now editable by non-sudo requests
diff --git a/website/docs/docs/dbt-versions/release-notes/99-dbt-cloud-changelog-2019-2020.md b/website/docs/docs/dbt-versions/release-notes/99-dbt-cloud-changelog-2019-2020.md
index a6b68cf9d51..32a33d95301 100644
--- a/website/docs/docs/dbt-versions/release-notes/99-dbt-cloud-changelog-2019-2020.md
+++ b/website/docs/docs/dbt-versions/release-notes/99-dbt-cloud-changelog-2019-2020.md
@@ -464,7 +464,7 @@ This release adds a new version of dbt (0.16.1), fixes a number of IDE bugs, and
- Fixed issue preventing temporary PR schemas from being dropped when PR is closed.
- Fix issues with IDE tabs not updating query compile and run results.
- Fix issues with query runtime timer in IDE for compile and run query functions.
-- Fixed what settings are displayed on the account settings page to allign with the user's permissions.
+- Fixed what settings are displayed on the account settings page to align with the user's permissions.
- Fixed bug with checking user's permissions in frontend when user belonged to more than one project.
- Fixed bug with access control around environments and file system/git interactions that occurred when using IDE.
- Fixed a bug with Environments too generously matching repository.
diff --git a/website/docs/docs/deploy/merge-jobs.md b/website/docs/docs/deploy/merge-jobs.md
index a187e3992f8..e148498ed01 100644
--- a/website/docs/docs/deploy/merge-jobs.md
+++ b/website/docs/docs/deploy/merge-jobs.md
@@ -20,7 +20,7 @@ By using CD in dbt Cloud, you can take advantage of deferral to build only the e
1. On your deployment environment page, click **Create job** > **Merge job**.
1. Options in the **Job settings** section:
- **Job name** — Specify the name for the merge job.
- - **Description** — Provide a descripion about the job.
+ - **Description** — Provide a description of the job.
- **Environment** — By default, it’s set to the environment you created the job from.
1. In the **Git trigger** section, the **Run on merge** option is enabled by default. Every time a PR merges (to a base
branch configured in the environment) in your Git repo, this job will get triggered to run.
diff --git a/website/docs/docs/deploy/webhooks.md b/website/docs/docs/deploy/webhooks.md
index 52ce2a1fe56..4ff9c350344 100644
--- a/website/docs/docs/deploy/webhooks.md
+++ b/website/docs/docs/deploy/webhooks.md
@@ -217,7 +217,7 @@ GET https://{your access URL}/api/v3/accounts/{account_id}/webhooks/subscription
{
"id": "wsu_12345abcde",
"account_identifier": "act_12345abcde",
- "name": "Notication Webhook",
+ "name": "Notification Webhook",
"description": "Webhook used to trigger notifications in Slack",
"job_ids": [],
"event_types": [
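Subscriptions like the one above deliver signed payloads. A minimal sketch of verifying a delivery, assuming (per the webhook docs) that the authorization header carries an HMAC-SHA256 hex digest of the raw request body computed with the subscription's secret key — confirm the header name and scheme for your account:

```python
import hashlib
import hmac

def is_valid_webhook(raw_body: bytes, auth_header: str, secret: str) -> bool:
    """Return True if the delivery's signature matches our own HMAC of the body."""
    expected = hmac.new(secret.encode("utf-8"), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, auth_header)

# Placeholder values for illustration
body = b'{"eventType": "job.run.completed"}'
secret = "wss_sample_secret"
signature = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(is_valid_webhook(body, signature, secret))  # True
```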
diff --git a/website/docs/reference/project-configs/version.md b/website/docs/reference/project-configs/version.md
index 890ad8542a7..54df6bfcb31 100644
--- a/website/docs/reference/project-configs/version.md
+++ b/website/docs/reference/project-configs/version.md
@@ -4,7 +4,7 @@ required: True
keyword: project version, project versioning, dbt project versioning
---
-import VersionsCallout from '/snippets/_version-callout.md';
+import VersionsCallout from '/snippets/_model-version-callout.md';
diff --git a/website/docs/reference/resource-configs/alias.md b/website/docs/reference/resource-configs/alias.md
index 9b367f7b48a..16a8a392e06 100644
--- a/website/docs/reference/resource-configs/alias.md
+++ b/website/docs/reference/resource-configs/alias.md
@@ -101,7 +101,7 @@ seeds:
-Configure a snapshots's alias in your `dbt_project.yml` file or config block.
+Configure a snapshot's alias in your `dbt_project.yml` file, `snapshots/snapshot_name.yml` file, or config block.
The following examples demonstrate how to `alias` a snapshot named `your_snapshot` to `the_best_snapshot`.
@@ -117,18 +117,17 @@ snapshots:
```
-In the `snapshots/properties.yml` file:
+In the `snapshots/snapshot_name.yml` file:
-
+
```yml
version: 2
snapshots:
- - name: your_snapshot
+ - name: your_snapshot_name
config:
alias: the_best_snapshot
```
In `snapshots/your_snapshot.sql` file:
diff --git a/website/docs/reference/resource-configs/contract.md b/website/docs/reference/resource-configs/contract.md
index 18266ec672f..bd1fceb4e9b 100644
--- a/website/docs/reference/resource-configs/contract.md
+++ b/website/docs/reference/resource-configs/contract.md
@@ -6,8 +6,6 @@ default_value: {enforced: false}
id: "contract"
---
-Supported in dbt v1.5 and higher.
-
When the `contract` configuration is enforced, dbt will ensure that your model's returned dataset exactly matches the attributes you have defined in yaml:
- `name` and `data_type` for every column
- Additional [`constraints`](/reference/resource-properties/constraints), as supported for this materialization and data platform
diff --git a/website/docs/reference/resource-configs/database.md b/website/docs/reference/resource-configs/database.md
index 6c57e7e2c69..16742b3f597 100644
--- a/website/docs/reference/resource-configs/database.md
+++ b/website/docs/reference/resource-configs/database.md
@@ -22,6 +22,7 @@ models:
```
+
This would result in the generated relation being located in the `reporting` database, so the full relation name would be `reporting.finance.sales_metrics` instead of the default target database.
@@ -55,7 +56,7 @@ Available for dbt Cloud release tracks or dbt Core v1.9+. Select v1.9 or newer f
-Specify a custom database for a snapshot in your `dbt_project.yml` or config file.
+Specify a custom database for a snapshot in your `dbt_project.yml` file, `snapshots/snapshot_name.yml` file, or config block.
For example, if you have a snapshot that you want to load into a database other than the target database, you can configure it like this:
@@ -69,6 +70,20 @@ snapshots:
```
+Or in a `snapshot_name.yml` file:
+
+
+
+```yaml
+version: 2
+
+snapshots:
+  - name: snapshot_name
+    [config](/reference/resource-properties/config):
+      database: snapshots
+```
+
+
This results in the generated relation being located in the `snapshots` database so the full relation name would be `snapshots.finance.your_snapshot` instead of the default target database.
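+For reference, the same setting can also live in the snapshot's config block (a sketch only: `your_snapshot` is a placeholder name, and the other required snapshot configs such as `unique_key` and `strategy` are omitted for brevity):
+
+```sql
+{% snapshot your_snapshot %}
+
+    {{ config(
+        database='snapshots'
+    ) }}
+
+    select ...
+
+{% endsnapshot %}
+```
+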
diff --git a/website/docs/reference/resource-configs/dbt_valid_to_current.md b/website/docs/reference/resource-configs/dbt_valid_to_current.md
index 2a6cf3abe6d..9cf2ca0860e 100644
--- a/website/docs/reference/resource-configs/dbt_valid_to_current.md
+++ b/website/docs/reference/resource-configs/dbt_valid_to_current.md
@@ -6,7 +6,7 @@ default_value: {NULL}
id: "dbt_valid_to_current"
---
-Available from dbt v1.9 or with [the dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks) dbt Cloud.
+
diff --git a/website/docs/reference/resource-configs/enabled.md b/website/docs/reference/resource-configs/enabled.md
index b74d7250907..faee6654b22 100644
--- a/website/docs/reference/resource-configs/enabled.md
+++ b/website/docs/reference/resource-configs/enabled.md
@@ -78,9 +78,28 @@ snapshots:
+
+
+
+
+```yaml
+version: 2
+
+snapshots:
+  - name: snapshot_name
+    [config](/reference/resource-properties/config):
+      enabled: true | false
+```
+
+
+
+
+
```sql
+-- Configuring a snapshot in a SQL file is a legacy method and not recommended. Use the YAML file instead.
+
{% snapshot [snapshot_name](snapshot_name) %}
{{ config(
@@ -90,11 +109,10 @@ snapshots:
select ...
{% endsnapshot %}
-
```
-
+
diff --git a/website/docs/reference/resource-configs/group.md b/website/docs/reference/resource-configs/group.md
index cd0ad2683f5..5ea701b3b63 100644
--- a/website/docs/reference/resource-configs/group.md
+++ b/website/docs/reference/resource-configs/group.md
@@ -96,6 +96,21 @@ snapshots:
+
+
+
+```yaml
+version: 2
+
+snapshots:
+  - name: snapshot_name
+    [config](/reference/resource-properties/config):
+      group: GROUP_NAME
+```
+
+
+
+
```sql
diff --git a/website/docs/reference/resource-configs/persist_docs.md b/website/docs/reference/resource-configs/persist_docs.md
index d4a90027771..68a23274b4b 100644
--- a/website/docs/reference/resource-configs/persist_docs.md
+++ b/website/docs/reference/resource-configs/persist_docs.md
@@ -84,6 +84,23 @@ snapshots:
+
+
+
+```yaml
+version: 2
+
+snapshots:
+  - name: snapshot_name
+    [config](/reference/resource-properties/config):
+      persist_docs:
+        relation: true
+        columns: true
+```
+
+
+
+
```sql
diff --git a/website/docs/reference/resource-configs/schema.md b/website/docs/reference/resource-configs/schema.md
index 6f56215de61..1b5a2d83c45 100644
--- a/website/docs/reference/resource-configs/schema.md
+++ b/website/docs/reference/resource-configs/schema.md
@@ -22,13 +22,14 @@ models:
```
+
This would result in the generated relations for these models being located in the `marketing` schema, so the full relation names would be `analytics.target_schema_marketing.model_name`. This is because the schema of the relation is `{{ target.schema }}_{{ schema }}`. The [definition](#definition) section explains this in more detail.
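+The `{{ target.schema }}_{{ schema }}` behavior comes from dbt's built-in `generate_schema_name` macro, which you can override in your own project. A sketch of the default logic, shown for illustration:
+
+```sql
+-- macros/generate_schema_name.sql (default behavior)
+{% macro generate_schema_name(custom_schema_name, node) -%}
+    {%- set default_schema = target.schema -%}
+    {%- if custom_schema_name is none -%}
+        {{ default_schema }}
+    {%- else -%}
+        {{ default_schema }}_{{ custom_schema_name | trim }}
+    {%- endif -%}
+{%- endmacro %}
+```
+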
-Configure a custom schema in your `dbt_project.yml` file.
+Configure a [custom schema](/docs/build/custom-schemas#understanding-custom-schemas) in your `dbt_project.yml` file.
For example, if you have a seed that should be placed in a separate schema called `mappings`, you can configure it like this:
@@ -50,16 +51,18 @@ This would result in the generated relation being located in the `mappings` sche
-Available in dbt Core v1.9+. Select v1.9 or newer from the version dropdown to view the configs. Try it now in the [dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks).
+Available in dbt Core v1.9 and higher. Select v1.9 or newer from the version dropdown to view the configs. Try it now in the [dbt Cloud "Latest" release track](/docs/dbt-versions/cloud-release-tracks).
-Specify a custom schema for a snapshot in your `dbt_project.yml` or config file.
+Specify a [custom schema](/docs/build/custom-schemas#understanding-custom-schemas) for a snapshot in your `dbt_project.yml` or YAML file.
For example, if you have a snapshot that you want to load into a schema other than the target schema, you can configure it like this:
+In a `dbt_project.yml` file:
+
```yml
@@ -70,6 +73,21 @@ snapshots:
```
+In a `snapshots/snapshot_name.yml` file:
+
+
+
+```yaml
+version: 2
+
+snapshots:
+  - name: snapshot_name
+    [config](/reference/resource-properties/config):
+      schema: snapshots
+```
+
+
+
This results in the generated relation being located in the `snapshots` schema so the full relation name would be `analytics.snapshots.your_snapshot` instead of the default target schema.
@@ -78,20 +96,25 @@ This results in the generated relation being located in the `snapshots` schema s
+Specify a [custom schema](/docs/build/custom-schemas#understanding-custom-schemas) for a [saved query](/docs/build/saved-queries#parameters) in your `dbt_project.yml` or YAML file.
+
```yml
saved-queries:
+schema: metrics
```
+
+This would result in the saved query being stored in the `metrics` schema.
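+To scope the config more narrowly, it can be nested under your project name, following the pattern used for other resource types (a sketch, with `my_project` as a placeholder project name):
+
+```yml
+saved-queries:
+  my_project:
+    +schema: metrics
+```
+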
+
+
-Customize the schema for storing test results in your `dbt_project.yml` file.
+Specify a [custom schema](/docs/build/custom-schemas#understanding-custom-schemas) for storing test results in your `dbt_project.yml` file.
For example, to save test results in a specific schema, you can configure it like this:
-
```yml
diff --git a/website/docs/reference/resource-properties/versions.md b/website/docs/reference/resource-properties/versions.md
index 748aa477a4f..d2cb4a1f116 100644
--- a/website/docs/reference/resource-properties/versions.md
+++ b/website/docs/reference/resource-properties/versions.md
@@ -5,7 +5,7 @@ required: no
keyword: governance, model version, model versioning, dbt model versioning
---
-import VersionsCallout from '/snippets/_version-callout.md';
+import VersionsCallout from '/snippets/_model-version-callout.md';
diff --git a/website/snippets/_version-callout.md b/website/snippets/_model-version-callout.md
similarity index 100%
rename from website/snippets/_version-callout.md
rename to website/snippets/_model-version-callout.md
diff --git a/website/src/components/versionCallout/index.js b/website/src/components/versionCallout/index.js
new file mode 100644
index 00000000000..598182c851f
--- /dev/null
+++ b/website/src/components/versionCallout/index.js
@@ -0,0 +1,23 @@
+import React from 'react';
+import Admonition from '@theme/Admonition';
+
+const VersionCallout = ({ version }) => {
+ if (!version) {
+ return null;
+ }
+
+  return (
+    <Admonition type="tip" title="Version">
+      Available from dbt v{version} or with the{' '}
+      <a href="/docs/dbt-versions/cloud-release-tracks">dbt Cloud "Latest" release track</a>.
+    </Admonition>
+  );
+};
+
+export default VersionCallout;
diff --git a/website/src/theme/MDXComponents/index.js b/website/src/theme/MDXComponents/index.js
index 422d6c99fab..c0a15e6c5b6 100644
--- a/website/src/theme/MDXComponents/index.js
+++ b/website/src/theme/MDXComponents/index.js
@@ -45,6 +45,7 @@ import Lifecycle from '@site/src/components/lifeCycle';
import DetailsToggle from '@site/src/components/detailsToggle';
import Expandable from '@site/src/components/expandable';
import ConfettiTrigger from '@site/src/components/confetti/';
+import VersionCallout from '@site/src/components/versionCallout';
const MDXComponents = {
Head,
@@ -97,5 +98,6 @@ const MDXComponents = {
Expandable: Expandable,
ConfettiTrigger: ConfettiTrigger,
SortableTable: SortableTable,
+ VersionCallout: VersionCallout,
};
export default MDXComponents;