diff --git a/website/docs/docs/use-dbt-semantic-layer/tableau.md b/website/docs/docs/use-dbt-semantic-layer/tableau.md
index 9bc32ec3622..c93643354aa 100644
--- a/website/docs/docs/use-dbt-semantic-layer/tableau.md
+++ b/website/docs/docs/use-dbt-semantic-layer/tableau.md
@@ -16,7 +16,8 @@ This integration provides a live connection to the dbt Semantic Layer through Ta
## Prerequisites
-1. You must have [Tableau Desktop](https://www.tableau.com/en-gb/products/desktop) installed
+1. You must have [Tableau Desktop](https://www.tableau.com/en-gb/products/desktop) installed with version 2021.1 or greater
+ - Note that Tableau Online does not currently support custom connectors natively.
2. Log in to Tableau Desktop using either your license or the login details you use for Tableau Server or Tableau Online.
3. You need your dbt Cloud host, [Environment ID](/docs/use-dbt-semantic-layer/setup-sl#set-up-dbt-semantic-layer) and [service token](/docs/dbt-cloud-apis/service-tokens) to log in. This account should be set up with the dbt Semantic Layer.
4. You must have a dbt Cloud Team or Enterprise [account](https://www.getdbt.com/pricing) and multi-tenant [deployment](/docs/cloud/about-cloud/regions-ip-addresses). (Single-Tenant coming soon)
@@ -24,7 +25,7 @@ This integration provides a live connection to the dbt Semantic Layer through Ta
## Installing
-1. Download our [connector file](https://github.com/dbt-labs/semantic-layer-tableau-connector/releases/download/v1.0.0/dbt_semantic_layer.taco) locally and add it to your default folder:
+1. Download the [connector file](https://github.com/dbt-labs/semantic-layer-tableau-connector/releases/download/v1.0.2/dbt_semantic_layer.taco) from GitHub and add it to your default folder:
- Windows: `C:\Users\\[Windows User]\Documents\My Tableau Repository\Connectors`
- Mac: `/Users/[user]/Documents/My Tableau Repository/Connectors`
- Linux: `/opt/tableau/connectors`
@@ -53,6 +54,7 @@ Visit the [Tableau documentation](https://help.tableau.com/current/pro/desktop/e
- Since this is treated as a table, the dbt Semantic Layer can't dynamically change what is available. This means we display _all_ available metrics and dimensions even if a particular metric and dimension combination isn't available.
- Certain Table calculations like "Totals" and "Percent Of" may not be accurate when using metrics aggregated in a non-additive way (such as count distinct)
+- In any of our Semantic Layer interfaces (not only Tableau), you must include a [time dimension](/docs/build/cumulative#limitations) when working with any cumulative metric that has a time window or granularity.
## Unsupported functionality
@@ -67,3 +69,4 @@ The following Tableau features aren't supported at this time, however, the dbt S
- All functions in Analysis --> Create Calculated Field
- Filtering on a Date Part time dimension for a Cumulative metric type
- Changing your date dimension to use "Week Number"
+
diff --git a/website/docs/faqs/Accounts/slack.md b/website/docs/faqs/Accounts/slack.md
deleted file mode 100644
index 4faa60fb09a..00000000000
--- a/website/docs/faqs/Accounts/slack.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-title: How do I set up Slack notifications?
-description: "Instructions on how to set up slack notifications"
-sidebar_label: 'How to set up Slack'
-id: slack
----
-
-
diff --git a/website/docs/faqs/Project/docs-for-multiple-projects.md b/website/docs/faqs/Project/docs-for-multiple-projects.md
deleted file mode 100644
index b7aa1452b39..00000000000
--- a/website/docs/faqs/Project/docs-for-multiple-projects.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Can I render docs for multiple projects?
-description: "Using packages to render docs for multiple projects"
-sidebar_label: 'Render docs for multiple projects'
-id: docs-for-multiple-projects
-
----
-
-Yes! To do this, you'll need to create a "super project" that lists each project as a dependent [package](/docs/build/packages) in a `packages.yml` file. Then run `dbt deps` to install the projects as packages, prior to running `dbt docs generate`.
-
-If you are going down the route of multiple projects, be sure to check out our advice [1](https://discourse.getdbt.com/t/should-i-have-an-organisation-wide-project-a-monorepo-or-should-each-work-flow-have-their-own/666) [2](https://discourse.getdbt.com/t/how-to-configure-your-dbt-repository-one-or-many/2121) on the topic.
diff --git a/website/docs/faqs/Project/which-schema.md b/website/docs/faqs/Project/which-schema.md
index f0634ac8c85..2c21cba3c6a 100644
--- a/website/docs/faqs/Project/which-schema.md
+++ b/website/docs/faqs/Project/which-schema.md
@@ -7,7 +7,7 @@ id: which-schema
---
By default, dbt builds models in your target schema. To change your target schema:
* If you're developing in **dbt Cloud**, these are set for each user when you first use a development environment.
-* If you're developing with the **dbt CLI**, this is the `schema:` parameter in your `profiles.yml` file.
+* If you're developing with **dbt Core**, this is the `schema:` parameter in your `profiles.yml` file.
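+
+For reference, a minimal `profiles.yml` sketch (the profile name and connection details are hypothetical):
+
+```yaml
+jaffle_shop:
+  target: dev
+  outputs:
+    dev:
+      type: postgres
+      host: localhost
+      port: 5432
+      user: alice
+      password: "<password>"
+      dbname: analytics
+      schema: dbt_alice   # dbt builds models into this schema by default
+      threads: 4
+```
+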
If you wish to split your models across multiple schemas, check out the docs on [using custom schemas](/docs/build/custom-schemas).
diff --git a/website/docs/faqs/Runs/checking-logs.md b/website/docs/faqs/Runs/checking-logs.md
index dbfdb6806a1..ff5e6f5cf04 100644
--- a/website/docs/faqs/Runs/checking-logs.md
+++ b/website/docs/faqs/Runs/checking-logs.md
@@ -10,7 +10,7 @@ To check out the SQL that dbt is running, you can look in:
* dbt Cloud:
* Within the run output, click on a model name, and then select "Details"
-* dbt CLI:
+* dbt Core:
* The `target/compiled/` directory for compiled `select` statements
* The `target/run/` directory for compiled `create` statements
* The `logs/dbt.log` file for verbose logging.
diff --git a/website/docs/faqs/Runs/failed-tests.md b/website/docs/faqs/Runs/failed-tests.md
index bfee565ef61..d19023d035d 100644
--- a/website/docs/faqs/Runs/failed-tests.md
+++ b/website/docs/faqs/Runs/failed-tests.md
@@ -10,7 +10,7 @@ To debug a failing test, find the SQL that dbt ran by:
* dbt Cloud:
* Within the test output, click on the failed test, and then select "Details"
-* dbt CLI:
+* dbt Core:
* Open the file path returned as part of the error message.
* Navigate to the `target/compiled/schema_tests` directory for all compiled test queries
diff --git a/website/docs/faqs/Warehouse/database-privileges.md b/website/docs/faqs/Warehouse/database-privileges.md
index 73e0549f130..3761b81fe67 100644
--- a/website/docs/faqs/Warehouse/database-privileges.md
+++ b/website/docs/faqs/Warehouse/database-privileges.md
@@ -12,8 +12,8 @@ schema¹
* read system
views to generate documentation (i.e. views in
`information_schema`)
-On Postgres, Redshift, and Snowflake, use a series of `grants` to ensure that
-your user has the correct privileges.
+On Postgres, Redshift, Databricks, and Snowflake, use a series of `grants` to ensure that
+your user has the correct privileges. Check out [example permissions](/reference/database-permissions/about-database-permissions) for these warehouses.
On BigQuery, use the "BigQuery User" role to assign these privileges.
diff --git a/website/docs/guides/advanced/using-jinja.md b/website/docs/guides/advanced/using-jinja.md
index 40cfd2af298..1cbe88dc9ca 100644
--- a/website/docs/guides/advanced/using-jinja.md
+++ b/website/docs/guides/advanced/using-jinja.md
@@ -9,7 +9,7 @@ If you'd like to work through this query, add [this CSV](https://github.com/dbt-
While working through the steps of this model, we recommend that you have your compiled SQL open as well, to check what your Jinja compiles to. To do this:
* **Using dbt Cloud:** Click the compile button to see the compiled SQL in the right hand pane
-* **Using the dbt CLI:** Run `dbt compile` from the command line. Then open the compiled SQL file in the `target/compiled/{project name}/` directory. Use a split screen in your code editor to keep both files open at once.
+* **Using dbt Core:** Run `dbt compile` from the command line. Then open the compiled SQL file in the `target/compiled/{project name}/` directory. Use a split screen in your code editor to keep both files open at once.
## Write the SQL without Jinja
Consider a data model in which an `order` can have many `payments`. Each `payment` may have a `payment_method` of `bank_transfer`, `credit_card` or `gift_card`, and therefore each `order` can have multiple `payment_methods`
diff --git a/website/docs/guides/best-practices/debugging-errors.md b/website/docs/guides/best-practices/debugging-errors.md
index 39670820ddd..fe600ec4f67 100644
--- a/website/docs/guides/best-practices/debugging-errors.md
+++ b/website/docs/guides/best-practices/debugging-errors.md
@@ -17,7 +17,7 @@ Learning how to debug is a skill, and one that will make you great at your role!
- The `target/run` directory contains the SQL dbt executes to build your models.
- The `logs/dbt.log` file contains all the queries that dbt runs, and additional logging. Recent errors will be at the bottom of the file.
- **dbt Cloud users**: Use the above, or the `Details` tab in the command output.
- - **dbt CLI users**: Note that your code editor _may_ be hiding these files from the tree
([VSCode help](https://stackoverflow.com/questions/42891463/how-can-i-show-ignored-files-in-visual-studio-code)).
+ - **dbt Core users**: Note that your code editor _may_ be hiding these files from the tree
([VSCode help](https://stackoverflow.com/questions/42891463/how-can-i-show-ignored-files-in-visual-studio-code)).
5. If you are really stuck, try [asking for help](/community/resources/getting-help). Before doing so, take the time to write your question well so that others can diagnose the problem quickly.
@@ -184,7 +184,7 @@ hello: world # this is not allowed
## Compilation Errors
-_Note: if you're using the dbt Cloud IDE to work on your dbt project, this error often shows as a red bar in your command prompt as you work on your dbt project. For dbt CLI users, these won't get picked up until you run `dbt run` or `dbt compile`._
+_Note: if you're using the dbt Cloud IDE to work on your dbt project, this error often shows as a red bar in your command prompt as you work on your dbt project. For dbt Core users, these won't get picked up until you run `dbt run` or `dbt compile`._
### Invalid `ref` function
@@ -228,7 +228,7 @@ To fix this:
- Use the error message to find your mistake
To prevent this:
-- _(dbt CLI users only)_ Use snippets to auto-complete pieces of Jinja ([atom-dbt package](https://github.com/dbt-labs/atom-dbt), [vscode-dbt extestion](https://marketplace.visualstudio.com/items?itemName=bastienboutonnet.vscode-dbt))
+- _(dbt Core users only)_ Use snippets to auto-complete pieces of Jinja ([atom-dbt package](https://github.com/dbt-labs/atom-dbt), [vscode-dbt extension](https://marketplace.visualstudio.com/items?itemName=bastienboutonnet.vscode-dbt))
@@ -280,7 +280,7 @@ To fix this:
- Find the mistake and fix it
To prevent this:
-- (dbt CLI users) Turn on indentation guides in your code editor to help you inspect your files
+- (dbt Core users) Turn on indentation guides in your code editor to help you inspect your files
- Use a YAML validator ([example](http://www.yamllint.com/)) to debug any issues
@@ -341,10 +341,10 @@ Database Error in model customers (models/customers.sql)
90% of the time, there's a mistake in the SQL of your model. To fix this:
1. Open the offending file:
- **dbt Cloud:** Open the model (in this case `models/customers.sql` as per the error message)
- - **dbt CLI:** Open the model as above. Also open the compiled SQL (in this case `target/run/jaffle_shop/models/customers.sql` as per the error message) — it can be useful to show these side-by-side in your code editor.
+ - **dbt Core:** Open the model as above. Also open the compiled SQL (in this case `target/run/jaffle_shop/models/customers.sql` as per the error message) — it can be useful to show these side-by-side in your code editor.
2. Try to re-execute the SQL to isolate the error:
- **dbt Cloud:** Use the `Preview` button from the model file
- - **dbt CLI:** Copy and paste the compiled query into a query runner (e.g. the Snowflake UI, or a desktop app like DataGrip / TablePlus) and execute it
+ - **dbt Core:** Copy and paste the compiled query into a query runner (e.g. the Snowflake UI, or a desktop app like DataGrip / TablePlus) and execute it
3. Fix the mistake.
4. Rerun the failed model.
@@ -356,7 +356,7 @@ In some cases, these errors might occur as a result of queries that dbt runs "be
In these cases, you should check out the logs — this contains _all_ the queries dbt has run.
- **dbt Cloud**: Use the `Details` in the command output to see logs, or check the `logs/dbt.log` file
-- **dbt CLI**: Open the `logs/dbt.log` file.
+- **dbt Core**: Open the `logs/dbt.log` file.
:::tip Isolating errors in the logs
If you're hitting a strange `Database Error`, it can be a good idea to clean out your logs by opening the file, and deleting the contents. Then, re-execute `dbt run` for _just_ the problematic model. The logs will _just_ have the output you're looking for.
@@ -379,6 +379,6 @@ Using the `Preview` button is useful when developing models and you want to visu
We’ve all been there. dbt uses the last-saved version of a file when you execute a command. In most code editors, and in the dbt Cloud IDE, a dot next to a filename indicates that a file has unsaved changes. Make sure you hit `cmd + s` (or equivalent) before running any dbt commands — over time it becomes muscle memory.
### Editing compiled files
-_(More likely for dbt CLI users)_
+_(More likely for dbt Core users)_
If you just opened a SQL file in the `target/` directory to help debug an issue, it's not uncommon to accidentally edit that file! To avoid this, try changing your code editor settings to grey out any files in the `target/` directory — the visual cue will help avoid the issue.
diff --git a/website/docs/guides/best-practices/how-we-mesh/mesh-2-structures.md b/website/docs/guides/best-practices/how-we-mesh/mesh-2-structures.md
index 937515954af..9ab633c50ad 100644
--- a/website/docs/guides/best-practices/how-we-mesh/mesh-2-structures.md
+++ b/website/docs/guides/best-practices/how-we-mesh/mesh-2-structures.md
@@ -18,6 +18,10 @@ At a high level, you’ll need to decide:
- Where to draw the lines between your dbt Projects -- i.e. how do you determine where to split your DAG and which models go in which project?
- How to manage your code -- do you want multiple dbt Projects living in the same repository (mono-repo) or do you want to have multiple repos with one repo per project?
+### Cycle detection
+
+Like resource dependencies, project dependencies are acyclic, meaning they only move in one direction. This prevents `ref` cycles (or loops), which lead to issues with your data workflows. For example, if project B depends on project A, a new model in project A could not import and use a public model from project B. Refer to [Project dependencies](/docs/collaborate/govern/project-dependencies#how-to-use-ref) for more information.
+
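+For example, a downstream project references an upstream project's public models with a two-argument `ref`. A short sketch, assuming project B depends on a project named `project_a` that exposes a public `customers` model:
+
+```sql
+-- in a model in project B
+select * from {{ ref('project_a', 'customers') }}
+```
+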
## Define your project interfaces by splitting your DAG
The first (and perhaps most difficult!) decision when migrating to a multi-project architecture is deciding where to draw the line in your DAG to define the interfaces between your projects. Let's explore some language for discussing the design of these patterns.
diff --git a/website/docs/guides/best-practices/how-we-mesh/mesh-3-implementation.md b/website/docs/guides/best-practices/how-we-mesh/mesh-3-implementation.md
index cfbbc7a1f28..65ed5d7935b 100644
--- a/website/docs/guides/best-practices/how-we-mesh/mesh-3-implementation.md
+++ b/website/docs/guides/best-practices/how-we-mesh/mesh-3-implementation.md
@@ -26,7 +26,7 @@ Once you have a sense of some initial groupings, you can first implement **group
groups:
- name: marketing
owner:
- - name: Ben Jaffleck
+ name: Ben Jaffleck
email: ben.jaffleck@jaffleshop.com
```
diff --git a/website/docs/guides/best-practices/materializations/materializations-guide-4-incremental-models.md b/website/docs/guides/best-practices/materializations/materializations-guide-4-incremental-models.md
index 603cbc8cda1..cd4264bafd3 100644
--- a/website/docs/guides/best-practices/materializations/materializations-guide-4-incremental-models.md
+++ b/website/docs/guides/best-practices/materializations/materializations-guide-4-incremental-models.md
@@ -29,7 +29,7 @@ We did our last `dbt build` job on `2022-01-31`, so any new orders since that ru
- 🏔️ build the table from the **beginning of time again — a _table materialization_**
- Simple and solid, if we can afford to do it (in terms of time, compute, and money — which are all directly correlated in a cloud warehouse). It’s the easiest and most accurate option.
- 🤏 find a way to run **just new and updated rows since our previous run — _an_ _incremental materialization_**
- - If we _can’t_ realistically afford to run the whole table — due to complex transformations or big source data, it takes too long — then we want to build incrementally. We want to just transform and add the row with id 567 below, _not_ the previous two with ids 123 and 456 that are already in the table.
+ - If we _can’t_ realistically afford to run the whole table — due to complex transformations or big source data, it takes too long — then we want to build incrementally. We want to just transform and add the row with id 567 below, _not_ the previous two with ids 123 and 234 that are already in the table.
| order_id | order_status | customer_id | order_item_id | ordered_at | updated_at |
| -------- | ------------ | ----------- | ------------- | ---------- | ---------- |
diff --git a/website/docs/guides/best-practices/materializations/materializations-guide-6-examining-builds.md b/website/docs/guides/best-practices/materializations/materializations-guide-6-examining-builds.md
index 07811b42594..909618ef8a5 100644
--- a/website/docs/guides/best-practices/materializations/materializations-guide-6-examining-builds.md
+++ b/website/docs/guides/best-practices/materializations/materializations-guide-6-examining-builds.md
@@ -12,7 +12,7 @@ hoverSnippet: Read this guide to understand how to examine your builds in dbt.
- ⌚ dbt keeps track of how **long each model took to build**, when it started, when it finished, its completion status (error, warn, or success), its materialization type, and _much_ more.
- 🖼️ This information is stored in a couple files which dbt calls **artifacts**.
- 📊 Artifacts contain a ton of information in JSON format, so aren’t easy to read, but **dbt Cloud** packages the most useful bits of information into a tidy **visualization** for you.
-- ☁️ If you’re not using Cloud, we can still use the output of the **dbt CLI to understand our runs**.
+- ☁️ If you’re not using Cloud, we can still use the output of the **dbt Core CLI to understand our runs**.
### Model Timing
@@ -23,9 +23,9 @@ That’s where dbt Cloud’s Model Timing visualization comes in extremely handy
- 🧵 This view lets us see our **models mapped out in threads** (up to 64 threads, we’re currently running with 4, so we get 4 tracks) over time. You can think of **each thread as a lane on a highway**.
- ⌛ We can see above that `customer_status_histories` is **taking by far the most time**, so we may want to go ahead and **make that incremental**.
-If you aren’t using dbt Cloud, that’s okay! We don’t get a fancy visualization out of the box, but we can use the output from the dbt CLI to check our model times, and it’s a great opportunity to become familiar with that output.
+If you aren’t using dbt Cloud, that’s okay! We don’t get a fancy visualization out of the box, but we can use the output from the dbt Core CLI to check our model times, and it’s a great opportunity to become familiar with that output.
-### dbt CLI output
+### dbt Core CLI output
If you’ve ever run dbt, whether `build`, `test`, `run` or something else, you’ve seen some output like below. Let’s take a closer look at how to read this.
diff --git a/website/docs/guides/dbt-ecosystem/dbt-python-snowpark/6-foundational-structure.md b/website/docs/guides/dbt-ecosystem/dbt-python-snowpark/6-foundational-structure.md
index e387b208dd1..8a938e10c34 100644
--- a/website/docs/guides/dbt-ecosystem/dbt-python-snowpark/6-foundational-structure.md
+++ b/website/docs/guides/dbt-ecosystem/dbt-python-snowpark/6-foundational-structure.md
@@ -71,7 +71,7 @@ In this step, we’ll need to create a development branch and set up project lev
- `materialized` — Tells dbt how to materialize models when compiling the code before it pushes it down to Snowflake. All models in the `marts` folder will be built as tables.
- `tags` — Applies tags at a directory level to all models. All models in the `aggregates` folder will be tagged as `bi` (abbreviation for business intelligence).
- `docs` — Specifies the `node_color` either by the plain color name or a hex value.
-5. [Materializations](/docs/build/materializations) are strategies for persisting dbt models in a warehouse, with `tables` and `views` being the most commonly utilized types. By default, all dbt models are materialized as views and other materialization types can be configured in the `dbt_project.yml` file or in a model itself. It’s very important to note *Python models can only be materialized as tables or incremental models.* Since all our Python models exist under `marts`, the following portion of our `dbt_project.yml` ensures no errors will occur when we run our Python models. Starting with [dbt version 1.4](/guides/migration/versions/upgrading-to-v1.4#updates-to-python-models), Python files will automatically get materialized as tables even if not explicitly specified.
+5. [Materializations](/docs/build/materializations) are strategies for persisting dbt models in a warehouse, with `tables` and `views` being the most commonly utilized types. By default, all dbt models are materialized as views and other materialization types can be configured in the `dbt_project.yml` file or in a model itself. It’s very important to note *Python models can only be materialized as tables or incremental models.* Since all our Python models exist under `marts`, the following portion of our `dbt_project.yml` ensures no errors will occur when we run our Python models. Starting with [dbt version 1.4](/docs/dbt-versions/core-upgrade/upgrading-to-v1.4#updates-to-python-models), Python files will automatically get materialized as tables even if not explicitly specified.
```yaml
marts:
diff --git a/website/docs/guides/migration/tools/migrating-from-spark-to-databricks.md b/website/docs/guides/migration/tools/migrating-from-spark-to-databricks.md
index f5549c58416..cd0577c2d96 100644
--- a/website/docs/guides/migration/tools/migrating-from-spark-to-databricks.md
+++ b/website/docs/guides/migration/tools/migrating-from-spark-to-databricks.md
@@ -35,7 +35,7 @@ In both dbt Core and dbt Cloud, you can migrate your projects to the Databricks-
### Prerequisites
-- Your project must be compatible with dbt 1.0 or greater. Refer to [Upgrading to v1.0](/guides/migration/versions/upgrading-to-v1.0) for details. For the latest version of dbt, refer to [Upgrading to v1.3](/guides/migration/versions/upgrading-to-v1.3).
+- Your project must be compatible with dbt 1.0 or greater. Refer to [Upgrading to v1.0](/docs/dbt-versions/core-upgrade/upgrading-to-v1.0) for details. For the latest version of dbt, refer to [Upgrading to v1.3](/docs/dbt-versions/core-upgrade/upgrading-to-v1.3).
- For dbt Cloud, you need administrative (admin) privileges to migrate dbt projects.
diff --git a/website/docs/guides/orchestration/webhooks/zapier-ms-teams.md b/website/docs/guides/orchestration/webhooks/zapier-ms-teams.md
index bb3f03ef0c0..148e16b2469 100644
--- a/website/docs/guides/orchestration/webhooks/zapier-ms-teams.md
+++ b/website/docs/guides/orchestration/webhooks/zapier-ms-teams.md
@@ -5,7 +5,7 @@ slug: zapier-ms-teams
description: Use Zapier and the dbt Cloud API to post to Microsoft Teams
---
-This guide will show you how to set up an integration between dbt Cloud jobs and Microsoft Teams using [dbt Cloud Webhooks](/docs/deploy/webhooks) and Zapier, similar to the [native Slack integration](/faqs/accounts/slack).
+This guide will show you how to set up an integration between dbt Cloud jobs and Microsoft Teams using [dbt Cloud Webhooks](/docs/deploy/webhooks) and Zapier, similar to the [native Slack integration](/docs/deploy/job-notifications#slack-notifications).
When a dbt Cloud job finishes running, the integration will:
diff --git a/website/docs/guides/orchestration/webhooks/zapier-slack.md b/website/docs/guides/orchestration/webhooks/zapier-slack.md
index c9046ee9943..6ce89eadd12 100644
--- a/website/docs/guides/orchestration/webhooks/zapier-slack.md
+++ b/website/docs/guides/orchestration/webhooks/zapier-slack.md
@@ -5,7 +5,7 @@ slug: zapier-slack
description: Use Zapier and the dbt Cloud API to post error context to Slack
---
-This guide will show you how to set up an integration between dbt Cloud jobs and Slack using [dbt Cloud webhooks](/docs/deploy/webhooks) and Zapier. It builds on the native [native Slack integration](/faqs/accounts/slack) by attaching error message details of models and tests in a thread.
+This guide will show you how to set up an integration between dbt Cloud jobs and Slack using [dbt Cloud webhooks](/docs/deploy/webhooks) and Zapier. It builds on the [native Slack integration](/docs/deploy/job-notifications#slack-notifications) by attaching error message details of models and tests in a thread.
Note: Because there is not a webhook for Run Cancelled, you may want to keep the standard Slack integration installed to receive those notifications. You could also use the [alternative integration](#alternate-approach) that augments the native integration without replacing it.
diff --git a/website/docs/quickstarts/manual-install-qs.md b/website/docs/quickstarts/manual-install-qs.md
index 2444cf29d7e..fc43d38115b 100644
--- a/website/docs/quickstarts/manual-install-qs.md
+++ b/website/docs/quickstarts/manual-install-qs.md
@@ -196,7 +196,7 @@ $ git checkout -b add-customers-model
4. From the command line, enter `dbt run`.
-
+
When you return to the BigQuery console, you can `select` from this model.
diff --git a/website/docs/reference/artifacts/other-artifacts.md b/website/docs/reference/artifacts/other-artifacts.md
index d776bc8a099..205bdfc1a14 100644
--- a/website/docs/reference/artifacts/other-artifacts.md
+++ b/website/docs/reference/artifacts/other-artifacts.md
@@ -39,4 +39,8 @@ This file is useful for investigating performance issues in dbt Core's graph alg
It is more anonymized and compact than [`manifest.json`](/reference/artifacts/manifest-json) and [`graph.gpickle`](#graph.gpickle).
-It contains only the `name` and `type` of each node along with IDs of its child nodes (`succ`). It includes that information at two separate points in time: immediately after the graph is linked together (`linked`), and after test edges have been added (`with_test_edges`).
+It includes that information at two separate points in time:
+1. `linked` — immediately after the graph is linked together, and
+2. `with_test_edges` — after test edges have been added.
+
+At each of those points in time, the file records the `name` and `type` of each node, and `succ` contains the keys of its child nodes.
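+
+As a rough sketch of the shape (the node IDs, names, and `succ` values are invented for illustration):
+
+```json
+{
+  "linked": {
+    "0": {"name": "model.my_project.stg_orders", "type": "model", "succ": [1]},
+    "1": {"name": "model.my_project.orders", "type": "model"}
+  },
+  "with_test_edges": {
+    "0": {"name": "model.my_project.stg_orders", "type": "model", "succ": [1]},
+    "1": {"name": "model.my_project.orders", "type": "model"}
+  }
+}
+```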
diff --git a/website/docs/reference/commands/clone.md b/website/docs/reference/commands/clone.md
index ea3e570447d..6bdc2c02e07 100644
--- a/website/docs/reference/commands/clone.md
+++ b/website/docs/reference/commands/clone.md
@@ -16,6 +16,7 @@ The `clone` command is useful for:
- handling incremental models in dbt Cloud CI jobs (on data warehouses that support zero-copy cloning tables)
- testing code changes on downstream dependencies in your BI tool
+
```bash
# clone all of my models from specified state to my target schema(s)
dbt clone --state path/to/artifacts
@@ -37,3 +38,19 @@ Unlike deferral, `dbt clone` requires some compute and creation of additional ob
For example, by creating actual data warehouse objects, `dbt clone` allows you to test out your code changes on downstream dependencies _outside of dbt_ (such as a BI tool).
As another example, you could `clone` your modified incremental models as the first step of your dbt Cloud CI job to prevent costly `full-refresh` builds for warehouses that support zero-copy cloning.
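+
+For example, a CI step along these lines could clone only the modified incremental models (and their descendants) from production state; the artifact path and selector are illustrative:
+
+```bash
+dbt clone --select "state:modified+,config.materialized:incremental" --state path/to/prod/artifacts
+```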
+
+## Cloning in dbt Cloud
+
+You can clone nodes between states in dbt Cloud using the `dbt clone` command. This is available in the [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud) and the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation) and relies on the [`--defer`](/reference/node-selection/defer) feature. For more details on defer in dbt Cloud, read [Using defer in dbt Cloud](/docs/cloud/about-cloud-develop-defer).
+
+- **Using dbt Cloud CLI** — The `dbt clone` command in the dbt Cloud CLI automatically includes the `--defer` flag. This means you can use the `dbt clone` command without any additional setup.
+
+- **Using dbt Cloud IDE** — To use the `dbt clone` command in the dbt Cloud IDE, follow these steps before running the `dbt clone` command:
+
+ - Set up your **Production environment** and have a successful job run.
+ - Enable **Defer to production** by toggling the switch in the lower-right corner of the command bar.
+
+ - Run the `dbt clone` command from the command bar.
+
+
+Check out [this Developer blog post](https://docs.getdbt.com/blog/to-defer-or-to-clone) for more details and best practices on when to use `dbt clone` vs. deferral.
diff --git a/website/docs/reference/database-permissions/about-database-permissions.md b/website/docs/reference/database-permissions/about-database-permissions.md
new file mode 100644
index 00000000000..76fff517646
--- /dev/null
+++ b/website/docs/reference/database-permissions/about-database-permissions.md
@@ -0,0 +1,36 @@
+---
+title: "Database permissions"
+id: about-database-permissions
+description: "Database permissions are access rights and privileges granted to users or roles within a database management system."
+sidebar_label: "About database permissions"
+pagination_next: "reference/database-permissions/databricks-permissions"
+pagination_prev: null
+---
+
+Database permissions are access rights and privileges granted to users or roles within a database or data platform. They help you specify what actions users or roles can perform on various database objects, like tables, views, schemas, or even the entire database.
+
+
+### Why are they useful
+
+- Database permissions are essential for security and data access control.
+- They ensure that only authorized users can perform specific actions.
+- They help maintain data integrity, prevent unauthorized changes, and limit exposure to sensitive data.
+- Permissions also support compliance with data privacy regulations and auditing.
+
+### How to use them
+
+- Users and administrators can grant and manage permissions at various levels (such as table, schema, and so on) using SQL statements or through the database system's interface.
+- Assign permissions to individual users or roles (groups of users) based on their responsibilities.
+ - Typical permissions include "SELECT" (read), "INSERT" (add data), "UPDATE" (modify data), "DELETE" (remove data), and administrative rights like "CREATE" and "DROP."
+- Users should be assigned permissions that ensure they have the necessary access to perform their tasks without overextending privileges.
+
+Note that each data platform provider might have different approaches and names for privileges. Refer to their documentation for more details.
+
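+For instance, a typical read-only setup might look like the following sketch (the `analytics` schema and `reporter` role are hypothetical, and exact syntax varies by platform):
+
+```sql
+-- let the role see and query everything in one schema
+grant usage on schema analytics to reporter;
+grant select on all tables in schema analytics to reporter;
+```
+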
+### Examples
+
+Refer to the following database permission pages for more info on examples and how to set up database permissions:
+
+- [Databricks](/reference/database-permissions/databricks-permissions)
+- [Postgres](/reference/database-permissions/postgres-permissions)
+- [Redshift](/reference/database-permissions/redshift-permissions)
+- [Snowflake](/reference/database-permissions/snowflake-permissions)
diff --git a/website/docs/reference/database-permissions/databricks-permissions.md b/website/docs/reference/database-permissions/databricks-permissions.md
new file mode 100644
index 00000000000..12e24652ae3
--- /dev/null
+++ b/website/docs/reference/database-permissions/databricks-permissions.md
@@ -0,0 +1,20 @@
+---
+title: "Databricks permissions"
+---
+
+In Databricks, permissions are used to control who can perform certain actions on different database objects. Use SQL statements to manage permissions in a Databricks database.
+
+## Example Databricks permissions
+
+The following example provides you with the SQL statements you can use to manage permissions.
+
+**Note** that you can grant permissions on `securable_objects` to `principals` (a principal can be a user, service principal, or group). For example: `grant privilege_type on securable_object to principal`.
+
+```sql
+grant all privileges on schema schema_name to principal;
+grant create table on schema schema_name to principal;
+grant create view on schema schema_name to principal;
+```
+
+Check out the [official documentation](https://docs.databricks.com/en/data-governance/unity-catalog/manage-privileges/privileges.html#privilege-types-by-securable-object-in-unity-catalog) for more information.
diff --git a/website/docs/reference/database-permissions/postgres-permissions.md b/website/docs/reference/database-permissions/postgres-permissions.md
new file mode 100644
index 00000000000..da56e9b45f2
--- /dev/null
+++ b/website/docs/reference/database-permissions/postgres-permissions.md
@@ -0,0 +1,25 @@
+---
+title: "Postgres Permissions"
+---
+
+
+In Postgres, permissions are used to control who can perform certain actions on different database objects. Use SQL statements to manage permissions in a Postgres database.
+
+## Example Postgres permissions
+
+The following example provides you with the SQL statements you can use to manage permissions. These grants cover what dbt needs to run smoothly: creating schemas, reading existing data, and accessing the information schema.
+
+**Note** that `database_name`, `schema_name`, and `user_name` are placeholders and you can replace them as needed for your organization's naming convention.
+
+```sql
+grant connect on database database_name to user_name;
+grant create on database database_name to user_name;     -- lets the user create schemas
+grant usage, create on schema schema_name to user_name;  -- lets the user use the schema and create tables and views in it
+grant select on all tables in schema schema_name to user_name;  -- views are included with tables
+alter default privileges in schema schema_name grant select on tables to user_name;  -- covers tables and views created later
+```
+
+Check out the [official documentation](https://www.postgresql.org/docs/current/sql-grant.html) for more information.
diff --git a/website/docs/reference/database-permissions/redshift-permissions.md b/website/docs/reference/database-permissions/redshift-permissions.md
new file mode 100644
index 00000000000..5f0949a3528
--- /dev/null
+++ b/website/docs/reference/database-permissions/redshift-permissions.md
@@ -0,0 +1,25 @@
+---
+title: "Redshift permissions"
+---
+
+In Redshift, permissions are used to control who can perform certain actions on different database objects. Use SQL statements to manage permissions in a Redshift database.
+
+## Example Redshift permissions
+
+The following example provides you with the SQL statements you can use to manage permissions.
+
+**Note** that `database_name`, `schema_name`, and `user_name` are placeholders and you can replace them as needed for your organization's naming convention.
+
+
+```sql
+grant create on database database_name to user_name;     -- lets the user create schemas
+grant usage on schema schema_name to user_name;
+grant create on schema schema_name to user_name;         -- lets the user create tables and views
+grant select on all tables in schema schema_name to user_name;  -- views are included with tables
+alter default privileges in schema schema_name grant select on tables to user_name;  -- covers tables and views created later
+```
+
+Check out the [official documentation](https://docs.aws.amazon.com/redshift/latest/dg/r_GRANT.html) for more information.
diff --git a/website/docs/reference/database-permissions/snowflake-permissions.md b/website/docs/reference/database-permissions/snowflake-permissions.md
new file mode 100644
index 00000000000..3f474242834
--- /dev/null
+++ b/website/docs/reference/database-permissions/snowflake-permissions.md
@@ -0,0 +1,154 @@
+---
+title: "Snowflake permissions"
+---
+
+In Snowflake, permissions are used to control who can perform certain actions on different database objects. Use SQL statements to manage permissions in a Snowflake database.
+
+## Set up Snowflake account
+
+This section explains how to set up permissions and roles within Snowflake. You perform these actions using SQL commands, setting up your data warehouses and access control entirely within Snowflake.
+
+1. Set up databases
+```sql
+use role sysadmin;
+create database raw;
+create database analytics;
+```
+2. Set up warehouses
+```sql
+create warehouse loading
+ warehouse_size = xsmall
+ auto_suspend = 3600
+ auto_resume = false
+ initially_suspended = true;
+
+create warehouse transforming
+ warehouse_size = xsmall
+ auto_suspend = 60
+ auto_resume = true
+ initially_suspended = true;
+
+create warehouse reporting
+ warehouse_size = xsmall
+ auto_suspend = 60
+ auto_resume = true
+ initially_suspended = true;
+```
+
+3. Set up roles and warehouse permissions
+```sql
+use role securityadmin;
+
+create role loader;
+grant all on warehouse loading to role loader;
+
+create role transformer;
+grant all on warehouse transforming to role transformer;
+
+create role reporter;
+grant all on warehouse reporting to role reporter;
+```
+
+4. Create users, assigning them to their roles
+
+Every person and application gets a separate user and is assigned to the correct role.
+
+```sql
+create user stitch_user -- or fivetran_user
+ password = '_generate_this_'
+ default_warehouse = loading
+ default_role = loader;
+
+create user claire -- or amy, jeremy, etc.
+ password = '_generate_this_'
+ default_warehouse = transforming
+ default_role = transformer
+ must_change_password = true;
+
+create user dbt_cloud_user
+ password = '_generate_this_'
+ default_warehouse = transforming
+ default_role = transformer;
+
+create user looker_user -- or mode_user etc.
+ password = '_generate_this_'
+ default_warehouse = reporting
+ default_role = reporter;
+
+-- then grant these roles to each user
+grant role loader to user stitch_user; -- or fivetran_user
+grant role transformer to user dbt_cloud_user;
+grant role transformer to user claire; -- or amy, jeremy
+grant role reporter to user looker_user; -- or mode_user, periscope_user
+```
+
+5. Let loader load data
+Give the role unilateral permission to operate on the raw database:
+```sql
+use role sysadmin;
+grant all on database raw to role loader;
+```
+
+6. Let transformer transform data
+The transformer role needs to be able to read raw data.
+
+If you do this before you have any data loaded, you can run:
+```sql
+grant usage on database raw to role transformer;
+grant usage on future schemas in database raw to role transformer;
+grant select on future tables in database raw to role transformer;
+grant select on future views in database raw to role transformer;
+```
+If you already have data loaded in the raw database, make sure you also run the following to update the permissions:
+```sql
+grant usage on all schemas in database raw to role transformer;
+grant select on all tables in database raw to role transformer;
+grant select on all views in database raw to role transformer;
+```
+The transformer role also needs to be able to create objects in the analytics database:
+```sql
+grant all on database analytics to role transformer;
+```
+7. Let reporter read the transformed data
+A previous version of this article recommended this be implemented through hooks in dbt, but this way lets you get away with a one-off statement.
+```sql
+grant usage on database analytics to role reporter;
+grant usage on future schemas in database analytics to role reporter;
+grant select on future tables in database analytics to role reporter;
+grant select on future views in database analytics to role reporter;
+```
+Again, if you already have data in your analytics database, make sure you run:
+```sql
+grant usage on all schemas in database analytics to role reporter;
+grant select on all tables in database analytics to role reporter;
+grant select on all views in database analytics to role reporter;
+```
+8. Maintain
+When new users are added, make sure you add them to the right role! Everything else should be inherited automatically thanks to those `future` grants.
+
+For more discussion and legacy information, refer to [this Discourse article](https://discourse.getdbt.com/t/setting-up-snowflake-the-exact-grant-statements-we-run/439).
+
+## Example Snowflake permissions
+
+The following example provides you with the SQL statements you can use to manage permissions.
+
+**Note** that `warehouse_name`, `database_name`, and `role_name` are placeholders and you can replace them as needed for your organization's naming convention.
+
+```sql
+grant all on warehouse warehouse_name to role role_name;
+grant usage on database database_name to role role_name;
+grant create schema on database database_name to role role_name;
+grant usage on schema database.an_existing_schema to role role_name;
+grant create table on schema database.an_existing_schema to role role_name;
+grant create view on schema database.an_existing_schema to role role_name;
+grant usage on future schemas in database database_name to role role_name;
+grant monitor on future schemas in database database_name to role role_name;
+grant select on future tables in database database_name to role role_name;
+grant select on future views in database database_name to role role_name;
+grant usage on all schemas in database database_name to role role_name;
+grant monitor on all schemas in database database_name to role role_name;
+grant select on all tables in database database_name to role role_name;
+grant select on all views in database database_name to role role_name;
+```
+
diff --git a/website/docs/reference/dbt-commands.md b/website/docs/reference/dbt-commands.md
index 1448d9849d3..d5f0bfcd2ad 100644
--- a/website/docs/reference/dbt-commands.md
+++ b/website/docs/reference/dbt-commands.md
@@ -11,7 +11,7 @@ The following sections outline the commands supported by dbt and their relevant
### Available commands
-
+
All commands in the table are compatible with the dbt Cloud IDE, dbt Cloud CLI, or dbt Core.
@@ -22,12 +22,13 @@ You can run dbt commands in your specific tool by prefixing them with `dbt`. Fo
| [build](/reference/commands/build) | Build and test all selected resources (models, seeds, snapshots, tests) | All | All [supported versions](/docs/dbt-versions/core) |
| cancel | Cancels the most recent invocation.| dbt Cloud CLI | Requires [dbt v1.6 or higher](/docs/dbt-versions/core) |
| [clean](/reference/commands/clean) | Deletes artifacts present in the dbt project | All | All [supported versions](/docs/dbt-versions/core) |
-| [clone](/reference/commands/clone) | Clone selected models from the specified state | dbt Cloud CLI <br /> dbt Core | Requires [dbt v1.6 or higher](/docs/dbt-versions/core) |
+| [clone](/reference/commands/clone) | Clone selected models from the specified state | All | Requires [dbt v1.6 or higher](/docs/dbt-versions/core) |
| [compile](/reference/commands/compile) | Compiles (but does not run) the models in a project | All | All [supported versions](/docs/dbt-versions/core) |
-| [debug](/reference/commands/debug) | Debugs dbt connections and projects | dbt Core | All [supported versions](/docs/dbt-versions/core) |
+| [debug](/reference/commands/debug) | Debugs dbt connections and projects | dbt Cloud IDE <br /> dbt Core | All [supported versions](/docs/dbt-versions/core) |
| [deps](/reference/commands/deps) | Downloads dependencies for a project | All | All [supported versions](/docs/dbt-versions/core) |
| [docs](/reference/commands/cmd-docs) | Generates documentation for a project | All | All [supported versions](/docs/dbt-versions/core) |
| help | Displays help information for any command | dbt Core <br /> dbt Cloud CLI | All [supported versions](/docs/dbt-versions/core) |
+| [init](/reference/commands/init) | Initializes a new dbt project | dbt Core | All [supported versions](/docs/dbt-versions/core) |
| [list](/reference/commands/list) | Lists resources defined in a dbt project | All | All [supported versions](/docs/dbt-versions/core) |
| [parse](/reference/commands/parse) | Parses a project and writes detailed timing info | All | All [supported versions](/docs/dbt-versions/core) |
| reattach | Reattaches to the most recent invocation to retrieve logs and artifacts. | dbt Cloud CLI | Requires [dbt v1.6 or higher](/docs/dbt-versions/core) |
@@ -39,11 +40,11 @@ You can run dbt commands in your specific tool by prefixing them with `dbt`. Fo
| [snapshot](/reference/commands/snapshot) | Executes "snapshot" jobs defined in a project | All | All [supported versions](/docs/dbt-versions/core) |
| [source](/reference/commands/source) | Provides tools for working with source data (including validating that sources are "fresh") | All | All [supported versions](/docs/dbt-versions/core) |
| [test](/reference/commands/test) | Executes tests defined in a project | All | All [supported versions](/docs/dbt-versions/core) |
-| [init](/reference/commands/init) | Initializes a new dbt project | dbt Core | All [supported versions](/docs/dbt-versions/core) |
+
-
+
Select the tabs that are relevant to your development workflow. For example, if you develop in the dbt Cloud IDE, select **dbt Cloud**.
diff --git a/website/docs/reference/dbt-jinja-functions/target.md b/website/docs/reference/dbt-jinja-functions/target.md
index 7d6627c5a4b..e7d08db592f 100644
--- a/website/docs/reference/dbt-jinja-functions/target.md
+++ b/website/docs/reference/dbt-jinja-functions/target.md
@@ -7,7 +7,7 @@ description: "Contains information about your connection to the warehouse."
`target` contains information about your connection to the warehouse.
-* **dbt CLI:** These values are based on the target defined in your [`profiles.yml` file](/docs/core/connect-data-platform/profiles.yml)
+* **dbt Core:** These values are based on the target defined in your [`profiles.yml` file](/docs/core/connect-data-platform/profiles.yml)
* **dbt Cloud Scheduler:**
* `target.name` is defined per job as described [here](/docs/build/custom-target-names).
* For all other attributes, the values are defined by the deployment connection. To check these values, click **Deploy** from the upper left and select **Environments**. Then, select the relevant deployment environment, and click **Settings**.
diff --git a/website/docs/reference/dbt_project.yml.md b/website/docs/reference/dbt_project.yml.md
index 571e930d7da..9bd85d0d5dd 100644
--- a/website/docs/reference/dbt_project.yml.md
+++ b/website/docs/reference/dbt_project.yml.md
@@ -11,7 +11,7 @@ By default, dbt will look for `dbt_project.yml` in your current working director
By default, dbt will look for `dbt_project.yml` in your current working directory and its parents, but you can set a different directory using the `--project-dir` flag or the `DBT_PROJECT_DIR` environment variable.
-Starting from dbt v1.5 and higher, you can specify your dbt Cloud project ID in the `dbt_project.yml` file using the `dbt-cloud` config, which doesn't require validation or storage in the project config class. To find your project ID, check your dbt Cloud project URL, such as `https://cloud.getdbt.com/11/projects/123456`, where the project ID is `123456`.
+Starting from dbt v1.5 and higher, you can specify your dbt Cloud project ID in the `dbt_project.yml` file using `project-id` under the `dbt-cloud` config. To find your project ID, check your dbt Cloud project URL, such as `https://cloud.getdbt.com/11/projects/123456`, where the project ID is `123456`.
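+
+For example, a minimal sketch (with a made-up project ID):
+
+```yaml
+# dbt_project.yml
+dbt-cloud:
+  project-id: 123456
+```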
@@ -54,8 +54,8 @@ dbt uses YAML in a few different places. If you're new to YAML, it would be wort
[require-dbt-version](/reference/project-configs/require-dbt-version): version-range | [version-range]
[dbt-cloud](/docs/cloud/cloud-cli-installation):
- project-id: project_id #Required
- defer-env-id: 5678 #Optional
+ [project-id](/docs/cloud/configure-cloud-cli#configure-the-dbt-cloud-cli): project_id # Required
+ [defer-env-id](/docs/cloud/about-cloud-develop-defer#defer-in-dbt-cloud-cli): environment_id # Optional
[quoting](/reference/project-configs/quoting):
database: true | false
diff --git a/website/docs/reference/global-configs/about-global-configs.md b/website/docs/reference/global-configs/about-global-configs.md
index 42819cdac8f..9d1691812b5 100644
--- a/website/docs/reference/global-configs/about-global-configs.md
+++ b/website/docs/reference/global-configs/about-global-configs.md
@@ -8,4 +8,11 @@ Global configs enable you to fine-tune _how_ dbt runs projects on your machine
Global configs control things like the visual output of logs, the manner in which dbt parses your project, and what to do when dbt finds a version mismatch or a failing model. These configs are "global" because they are available for all dbt commands, and because they can be set for all projects running on the same machine or in the same environment.
-Starting in v1.0, you can set global configs in three places. When all three are set, command line flags take precedence, then environment variables, and last yaml configs (usually `profiles.yml`).
\ No newline at end of file
+### Global config precedence
+
+Starting in v1.0, you can set global configs in three places. dbt will evaluate the configs in the following order:
+1. [user config](https://docs.getdbt.com/reference/global-configs/yaml-configurations)
+1. [environment variable](https://docs.getdbt.com/reference/global-configs/environment-variable-configs)
+1. [CLI flag](https://docs.getdbt.com/reference/global-configs/command-line-flags)
+
+Each config is prioritized over the previous one. For example, if all three are provided, then the CLI flag takes precedence.
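+
+As a quick sketch, using the `fail_fast` config, which can be set at all three levels:
+
+```bash
+# user config in profiles.yml:
+#   config:
+#     fail_fast: false
+# an environment variable overrides the yaml value:
+export DBT_FAIL_FAST=false
+# and a CLI flag takes precedence over both:
+dbt run --fail-fast
+```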
diff --git a/website/docs/reference/programmatic-invocations.md b/website/docs/reference/programmatic-invocations.md
index 6afcd65c1bc..dfd5bae09e6 100644
--- a/website/docs/reference/programmatic-invocations.md
+++ b/website/docs/reference/programmatic-invocations.md
@@ -2,7 +2,7 @@
title: "Programmatic invocations"
---
-In v1.5, dbt-core added support for programmatic invocations. The intent is to expose the existing dbt CLI via a Python entry point, such that top-level commands are callable from within a Python script or application.
+In v1.5, dbt-core added support for programmatic invocations. The intent is to expose the existing dbt Core CLI via a Python entry point, such that top-level commands are callable from within a Python script or application.
The entry point is a `dbtRunner` class, which allows you to `invoke` the same commands as on the CLI.
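+
+For example, a minimal sketch (`my_model` is a placeholder):
+
+```python
+from dbt.cli.main import dbtRunner, dbtRunnerResult
+
+# initialize the runner and invoke a command, just as on the CLI
+dbt = dbtRunner()
+res: dbtRunnerResult = dbt.invoke(["run", "--select", "my_model"])
+
+# inspect the result of each executed node
+for r in res.result:
+    print(f"{r.node.name}: {r.status}")
+```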
diff --git a/website/docs/reference/references-overview.md b/website/docs/reference/references-overview.md
index 85a374c5aa3..91a228b6c3e 100644
--- a/website/docs/reference/references-overview.md
+++ b/website/docs/reference/references-overview.md
@@ -51,9 +51,27 @@ Learn how to add more configurations to your dbt project or adapter, use propert
icon="computer"/>
+
+
+
+
+
+
diff --git a/website/docs/reference/resource-configs/resource-path.md b/website/docs/reference/resource-configs/resource-path.md
index 258b83dcd57..20406f26f2a 100644
--- a/website/docs/reference/resource-configs/resource-path.md
+++ b/website/docs/reference/resource-configs/resource-path.md
@@ -1,11 +1,28 @@
-The `