diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md
new file mode 100644
index 00000000000..b546f258f6c
--- /dev/null
+++ b/website/blog/2023-12-20-partner-integration-guide.md
@@ -0,0 +1,99 @@
+---
+title: "How to integrate with dbt"
+description: "This guide will cover the ways to integrate with dbt Cloud"
+slug: integrating-with-dbtcloud
+
+authors: [amy_chen]
+
+tags: [dbt Cloud, Integrations, APIs]
+hide_table_of_contents: false
+
+date: 2023-12-20
+is_featured: false
+---
+## Overview
+
+Over the course of my three years running the Partner Engineering team at dbt Labs, the most common question I've been asked is, "How do we integrate with dbt?" Because those conversations often start out at the same place, I decided to create this guide so I’m no longer the blocker to fundamental information. This also allows us to skip the intro and get to the fun conversations so much faster, like what a joint solution for our customers would look like.
+
+This guide doesn't include how to integrate with dbt Core. If you’re interested in creating a dbt adapter, please check out the [adapter development guide](https://docs.getdbt.com/guides/dbt-ecosystem/adapter-development/1-what-are-adapters) instead.
+
+Instead, we're going to focus on integrating with dbt Cloud. Integrating with dbt Cloud is a key requirement to become a dbt Labs technology partner, opening the door to a variety of collaborative commercial opportunities.
+
+Here I'll cover how to get started, potential use cases you may want to solve for, and the integration points to do so.
+
+## New to dbt Cloud?
+
+If you're new to dbt and dbt Cloud, we recommend you and your software developers try our [Getting Started Quickstarts](https://docs.getdbt.com/guides) after reading [What is dbt](https://docs.getdbt.com/docs/introduction). The documentation will help you familiarize yourself with how our users interact with dbt. By going through this, you will also create a sample dbt project to test your integration.
+
+If you require a partner dbt Cloud account to test on, we can upgrade an existing account or a trial account. This account may only be used for development, training, and demonstration purposes. Please contact your partner manager if you're interested and provide the account ID (found in the URL). Our partner account includes all of the enterprise-level functionality and can be provided with a signed partnerships agreement.
+
+## Integration points
+
+- [Discovery API (formerly referred to as Metadata API)](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-api)
+ - **Overview** — This GraphQL API allows you to query the metadata that dbt Cloud generates every time you run a dbt project. We have two schemas available (environment level and job level). We recommend that you integrate with the environment-level schema by default because it contains the latest state and historical run results of all the jobs run on the dbt Cloud project. The job-level schema only provides the metadata of one job, giving you a small snapshot of part of the project. For a sense of what a query looks like, see the example after this list.
+- [Administrative (Admin) API](https://docs.getdbt.com/docs/dbt-cloud-apis/admin-cloud-api)
+ - **Overview** — This REST API allows you to orchestrate dbt Cloud job runs and helps you administer a dbt Cloud account. For metadata retrieval, we recommend integrating with the Discovery API instead.
+- [Webhooks](https://docs.getdbt.com/docs/deploy/webhooks)
+ - **Overview** — Outbound webhooks can send notifications about your dbt Cloud jobs to other systems. These webhooks allow you to get the latest information about your dbt jobs in real time.
+- [Semantic Layers/Metrics](https://docs.getdbt.com/docs/dbt-cloud-apis/sl-api-overview)
+ - **Overview** — Our Semantic Layer is made up of two parts: metrics definitions and the ability to interactively query the dbt metrics. For more details, here is a [basic overview](https://docs.getdbt.com/docs/use-dbt-semantic-layer/dbt-sl) and [our best practices](https://docs.getdbt.com/guides/dbt-ecosystem/sl-partner-integration-guide).
+ - Metrics definitions can be pulled from the Discovery API (linked above) or the Semantic Layer Driver/GraphQL API. The key difference is that the Discovery API isn't able to pull the semantic graph, which provides the list of available dimensions that one can query per metric. That is only available with the SL Driver/APIs. The trade-off is that the SL Driver/APIs don't have access to the lineage of the entire dbt project (that is, how dbt metrics depend on dbt models).
+ - Three integration points are available for the Semantic Layer API.
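+
+To make the Discovery API integration point more concrete, here's a minimal sketch of querying its environment-level schema with a service token. The endpoint shown assumes a multi-tenant North America deployment, and the query shape is illustrative; confirm the exact endpoint for the customer's access URL and the available fields in the Discovery API docs linked above.
+
+```shell
+# Minimal sketch: query the environment-level schema of the Discovery API.
+# Assumes a multi-tenant North America endpoint and a service token with
+# metadata permissions; check the Discovery API docs for the endpoint and
+# auth header that match the customer's region and plan.
+# Replace 123456 with the customer's environment ID.
+curl -s https://metadata.cloud.getdbt.com/graphql \
+  -H "Authorization: Bearer $DBT_CLOUD_SERVICE_TOKEN" \
+  -H "Content-Type: application/json" \
+  -d '{"query": "{ environment(id: 123456) { applied { models(first: 5) { edges { node { name uniqueId } } } } } }"}'
+```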
+
+## dbt Cloud hosting and authentication
+
+To use the dbt Cloud APIs, you'll need the customer’s access URL. Depending on their dbt Cloud setup, they'll have a different access URL. To understand all the possible configurations, refer to [Regions & IP addresses](https://docs.getdbt.com/docs/cloud/about-cloud/regions-ip-addresses). My recommendation is to allow the customer to provide their own URL to simplify support.
+
+If the customer is on an Azure single tenant instance, they don't currently have access to the Discovery API or the Semantic Layer APIs.
+
+For authentication, we highly recommend that your integration uses account service tokens. You can read more about [how to create a service token and what permission sets to provide it](https://docs.getdbt.com/docs/dbt-cloud-apis/service-tokens). Please note that depending on their plan type, they'll have access to different permission sets. We _do not_ recommend that users supply their user bearer tokens for authentication. This can cause issues if the user leaves the organization, and it provides you access to all the dbt Cloud accounts associated with the user rather than just the account (and related projects) that they want to integrate with.
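+
+As a quick illustration of service token authentication, the sketch below lists the jobs in an account through the Admin API. The base URL and account ID are placeholders; substitute the customer's access URL and account ID.
+
+```shell
+# Minimal sketch: call the Admin API with an account service token.
+# The base URL assumes a multi-tenant North America deployment; substitute
+# the customer's access URL (see Regions & IP addresses above).
+export DBT_CLOUD_SERVICE_TOKEN="<service-token>"   # an account service token, not a user bearer token
+export DBT_CLOUD_ACCOUNT_ID="12345"
+
+curl -s "https://cloud.getdbt.com/api/v2/accounts/$DBT_CLOUD_ACCOUNT_ID/jobs/" \
+  -H "Authorization: Token $DBT_CLOUD_SERVICE_TOKEN" \
+  -H "Content-Type: application/json"
+```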
+
+## Potential use cases
+
+- Event-based orchestration
+ - **Desired action** — You want to be notified when a scheduled dbt Cloud job has completed, or you want to kick off a dbt Cloud job. You can align your product's schedule with the dbt Cloud run schedule.
+ - **Examples** — Kicking off a dbt job after the ETL job that extracts and loads the data is completed, or receiving a webhook after the dbt job has completed to kick off your reverse ETL job.
+ - **Integration points** — Webhooks and/or Admin API (see the sketch after this list)
+- dbt lineage
+ - **Desired action** — You want to incorporate the dbt lineage metadata into your tool.
+ - **Example** — In your tool, you want to pull in the dbt DAG into your lineage diagram. For details on what you could pull and how to do this, refer to [Use cases and examples for the Discovery API](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-use-cases-and-examples).
+ - **Integration points** — Discovery API
+- dbt environment/job metadata
+ - **Desired action** — You want to incorporate the dbt Cloud job information into your tool, including the status of the jobs, the status of the tables executed in the run, which tests passed, etc.
+ - **Example** — In your Business Intelligence tool, stakeholders select from tables that a dbt model created. You show when the model was last run and whether it passed its tests, so stakeholders know the tables are current and can be trusted. For details on what you could pull and how to do this, refer to [What's the latest state of each model](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-use-cases-and-examples#whats-the-latest-state-of-each-model).
+ - **Integration points** — Discovery API
+- dbt model documentation
+ - **Desired action** — You want to incorporate the dbt project information, including model descriptions, column descriptions, etc.
+ - **Example** — You want to extract the dbt model description so you can display it and help the stakeholder understand what they are selecting from. This way, the creators can easily pass on the information without updating another system. For details on what you could pull and how to do this, refer to [What does this dataset and its columns mean](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-use-cases-and-examples#what-does-this-dataset-and-its-columns-mean).
+ - **Integration points** — Discovery API
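+
+For the event-based orchestration use case above, a minimal sketch of kicking off a dbt Cloud job from your system through the Admin API might look like the following. The base URL, account ID, and job ID are placeholders; confirm the endpoint against the Admin API docs linked earlier.
+
+```shell
+# Minimal sketch: trigger a dbt Cloud job run after an upstream EL load finishes.
+# Base URL, account ID, and job ID are placeholders; use the customer's values.
+curl -s -X POST \
+  "https://cloud.getdbt.com/api/v2/accounts/$DBT_CLOUD_ACCOUNT_ID/jobs/$DBT_CLOUD_JOB_ID/run/" \
+  -H "Authorization: Token $DBT_CLOUD_SERVICE_TOKEN" \
+  -H "Content-Type: application/json" \
+  -d '{"cause": "Triggered by partner integration after EL load completed"}'
+```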
+
+dbt Core-only users won't have access to the above integration points. For dbt metadata, our partners will oftentimes create a dbt Core integration by using the [dbt artifact](https://www.getdbt.com/product/semantic-layer/) files generated by each run and provided by the user. With the Discovery API, we provide a dynamic way to get the latest information, parsed out for you.
+
+## dbt Cloud plans & permissions
+
+[The dbt Cloud plan type](https://www.getdbt.com/pricing) will change what the user has access to. There are four different types of plans:
+
+- **Developer** — This is free and available to one user with a limited number of successful models built. This plan can't access the APIs, webhooks, or Semantic Layer and is limited to just one project.
+- **Team** — This plan provides access to the APIs, webhooks, and Semantic Layer. You can have up to eight users on the account and one dbt Cloud project. This plan is limited to 15,000 successful models built.
+- **Enterprise** (multi-tenant/multi-cell) — This plan provides access to the APIs, webhooks, and Semantic Layer. You can have more than one dbt Cloud project, based on how many dbt projects/domains you have using dbt. The majority of our enterprise customers are on multi-tenant dbt Cloud instances.
+- **Enterprise** (single tenant) — This plan might have access to the APIs, webhooks, and Semantic Layer. If you're working with a specific customer, let us know and we can confirm whether their instance has access.
+
+## FAQs
+
+- What is a dbt Cloud project?
+ - A dbt Cloud project is made up of two connections: one to the Git repository and one to the data warehouse/platform. Most customers will have only one dbt Cloud project in their account, but there are enterprise clients who might have more depending on their use cases. The project also encapsulates at least two types of environments: a development environment and a deployment environment.
+ - Folks commonly refer to the [dbt project](https://docs.getdbt.com/docs/build/projects) as the code hosted in their Git repository.
+- What is a dbt Cloud environment?
+ - For an overview, check out [About environments](https://docs.getdbt.com/docs/environments-in-dbt). At a minimum, a project will have one deployment-type environment that it executes jobs on. The development environment powers the dbt Cloud IDE and Cloud CLI.
+- Can we write back to the dbt project?
+ - At this moment, we don't have a Write API. A dbt project is hosted in a Git repository, so if you have a Git provider integration, you can manually open a pull request (PR) on the project to maintain the version control process.
+- Can you provide column-level information in the lineage?
+ - Column-level lineage is currently in beta release with more information to come.
+- How do I get a Partner Account?
+ - Contact your Partner Manager with your account ID (in your URL).
+- Why shouldn't I use the Admin API to pull out the dbt artifacts for metadata?
+ - We recommend against integrating with the Admin API to extract the dbt artifacts for metadata. This is because the Discovery API provides more extensive information, a user-friendly structure, and a more reliable integration point.
+- How do I get access to the dbt brand assets?
+ - Check out our [Brand guidelines](https://www.getdbt.com/brand-guidelines/) page. Please make sure you’re not using our old logo (hint: there should only be one hole in the logo). Please also note that the name dbt and the dbt logo are trademarked by dbt Labs, and that use is governed by our brand guidelines, which are fairly specific for commercial uses. If you have any questions about proper use of our marks, please ask your partner manager.
+- How do I engage with the partnerships team?
+ - Email partnerships@dbtlabs.com.
\ No newline at end of file
diff --git a/website/blog/authors.yml b/website/blog/authors.yml
index cd2bd162935..a3548575b6e 100644
--- a/website/blog/authors.yml
+++ b/website/blog/authors.yml
@@ -1,6 +1,6 @@
amy_chen:
image_url: /img/blog/authors/achen.png
- job_title: Staff Partner Engineer
+ job_title: Product Ecosystem Manager
links:
- icon: fa-linkedin
url: https://www.linkedin.com/in/yuanamychen/
diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-1-intro.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-1-intro.md
index ee3d4262882..e50542a446c 100644
--- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-1-intro.md
+++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-1-intro.md
@@ -2,6 +2,8 @@
title: "Intro to MetricFlow"
description: Getting started with the dbt and MetricFlow
hoverSnippet: Learn how to get started with the dbt and MetricFlow
+pagination_next: "best-practices/how-we-build-our-metrics/semantic-layer-2-setup"
+pagination_prev: null
---
Flying cars, hoverboards, and true self-service analytics: this is the future we were promised. The first two might still be a few years out, but real self-service analytics is here today. With dbt Cloud's Semantic Layer, you can resolve the tension between accuracy and flexibility that has hampered analytics tools for years, empowering everybody in your organization to explore a shared reality of metrics. Best of all for analytics engineers, building with these new tools will significantly [DRY](https://docs.getdbt.com/terms/dry) up and simplify your codebase. As you'll see, the deep interaction between your dbt models and the Semantic Layer make your dbt project the ideal place to craft your metrics.
diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md
index 6e9153a3780..470445891dc 100644
--- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md
+++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md
@@ -2,6 +2,7 @@
title: "Set up MetricFlow"
description: Getting started with the dbt and MetricFlow
hoverSnippet: Learn how to get started with the dbt and MetricFlow
+pagination_next: "best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models"
---
## Getting started
@@ -13,9 +14,23 @@ git clone git@github.com:dbt-labs/jaffle-sl-template.git
cd path/to/project
```
-Next, before you start writing code, you need to install MetricFlow as an extension of a dbt adapter from PyPI (dbt Core users only). The MetricFlow is compatible with Python versions 3.8 through 3.11.
+Next, before you start writing code, you need to install MetricFlow:
-We'll use pip to install MetricFlow and our dbt adapter:
+
+
+
+
+- [dbt Cloud CLI](/docs/cloud/cloud-cli-installation) — MetricFlow commands are embedded in the dbt Cloud CLI. You can immediately run them once you install the dbt Cloud CLI. Using dbt Cloud also means you won't need to manage versioning — your dbt Cloud account will manage it for you automatically.
+
+- [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud) — You can create metrics using MetricFlow in the dbt Cloud IDE. However, support for running MetricFlow commands in the IDE will be available soon.
+
+
+
+
+
+- Download MetricFlow as an extension of a dbt adapter from PyPI (dbt Core users only). MetricFlow is compatible with Python versions 3.8 through 3.11.
+ - **Note**: You'll need to manage versioning between dbt Core, your adapter, and MetricFlow.
+- We'll use pip to install MetricFlow and our dbt adapter:
```shell
# activate a virtual environment for your project,
@@ -27,13 +42,16 @@ python -m pip install "dbt-metricflow[adapter name]"
# e.g. python -m pip install "dbt-metricflow[snowflake]"
```
-Lastly, to get to the pre-Semantic Layer starting state, checkout the `start-here` branch.
+
+
+
+- Now that you're ready to use MetricFlow, get to the pre-Semantic Layer starting state by checking out the `start-here` branch:
```shell
git checkout start-here
```
-For more information, refer to the [MetricFlow commands](/docs/build/metricflow-commands) or a [quickstart](/guides) to get more familiar with setting up a dbt project.
+For more information, refer to the [MetricFlow commands](/docs/build/metricflow-commands) or the [quickstart guides](/guides) to get more familiar with setting up a dbt project.
## Basic commands
diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md
index a2dc55e37ae..9c710b286ef 100644
--- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md
+++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md
@@ -2,6 +2,7 @@
title: "Building semantic models"
description: Getting started with the dbt and MetricFlow
hoverSnippet: Learn how to get started with the dbt and MetricFlow
+pagination_next: "best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics"
---
## How to build a semantic model
diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics.md
index da83adbdc69..003eff9de40 100644
--- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics.md
+++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics.md
@@ -2,6 +2,7 @@
title: "Building metrics"
description: Getting started with the dbt and MetricFlow
hoverSnippet: Learn how to get started with the dbt and MetricFlow
+pagination_next: "best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart"
---
## How to build metrics
diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart.md
index dfdba2941e9..9ae80cbcd29 100644
--- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart.md
+++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart.md
@@ -2,6 +2,7 @@
title: "Refactor an existing mart"
description: Getting started with the dbt and MetricFlow
hoverSnippet: Learn how to get started with the dbt and MetricFlow
+pagination_next: "best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics"
---
## A new approach
diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics.md
index fe7438b5800..e5c6e452dac 100644
--- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics.md
+++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics.md
@@ -2,6 +2,7 @@
title: "More advanced metrics"
description: Getting started with the dbt and MetricFlow
hoverSnippet: Learn how to get started with the dbt and MetricFlow
+pagination_next: "best-practices/how-we-build-our-metrics/semantic-layer-7-conclusion"
---
## More advanced metric types
diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-7-conclusion.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-7-conclusion.md
index a1062721177..1870b6b77e4 100644
--- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-7-conclusion.md
+++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-7-conclusion.md
@@ -2,6 +2,7 @@
title: "Best practices"
description: Getting started with the dbt and MetricFlow
hoverSnippet: Learn how to get started with the dbt and MetricFlow
+pagination_next: null
---
## Putting it all together
diff --git a/website/docs/docs/build/incremental-models.md b/website/docs/docs/build/incremental-models.md
index 2a247263159..cc45290ae15 100644
--- a/website/docs/docs/build/incremental-models.md
+++ b/website/docs/docs/build/incremental-models.md
@@ -154,17 +154,21 @@ For detailed usage instructions, check out the [dbt run](/reference/commands/run
# Understanding incremental models
## When should I use an incremental model?
-It's often desirable to build models as tables in your data warehouse since downstream queries are more performant. While the `table` materialization also creates your models as tables, it rebuilds the table on each dbt run. These runs can become problematic in that they use a lot of compute when either:
-* source data tables have millions, or even billions, of rows.
-* the transformations on the source data are computationally expensive (that is, take a long time to execute), for example, complex Regex functions, or UDFs are being used to transform data.
-Like many things in programming, incremental models are a trade-off between complexity and performance. While they are not as straightforward as the `view` and `table` materializations, they can lead to significantly better performance of your dbt runs.
+Building models as tables in your data warehouse is often preferred for better query performance. However, the `table` materialization rebuilds the table on each dbt run, which can be computationally intensive, especially when:
+
+- Source data has millions or billions of rows.
+- Data transformations on the source data are computationally expensive (take a long time to execute) and complex, like using Regex or UDFs.
+
+Incremental models are a trade-off between complexity and performance. While they aren't as straightforward as the `view` and `table` materializations, they can significantly improve the performance of your dbt runs.
+
+In addition to these considerations for incremental models, it's important to understand their limitations and challenges, particularly with large datasets. For more insights into efficient strategies, performance considerations, and the handling of late-arriving data in incremental models, refer to the [On the Limits of Incrementality](https://discourse.getdbt.com/t/on-the-limits-of-incrementality/303) discourse discussion.
## Understanding the is_incremental() macro
The `is_incremental()` macro will return `True` if _all_ of the following conditions are met:
* the destination table already exists in the database
* dbt is _not_ running in full-refresh mode
-* the running model is configured with `materialized='incremental'`
+* The running model is configured with `materialized='incremental'`
Note that the SQL in your model needs to be valid whether `is_incremental()` evaluates to `True` or `False`.
diff --git a/website/docs/docs/build/metricflow-commands.md b/website/docs/docs/build/metricflow-commands.md
index e3bb93da964..a0964269e68 100644
--- a/website/docs/docs/build/metricflow-commands.md
+++ b/website/docs/docs/build/metricflow-commands.md
@@ -17,15 +17,16 @@ MetricFlow is compatible with Python versions 3.8, 3.9, 3.10, and 3.11.
MetricFlow is a dbt package that allows you to define and query metrics in your dbt project. You can use MetricFlow to query metrics in your dbt project in the dbt Cloud CLI, dbt Cloud IDE, or dbt Core.
-**Note** — MetricFlow commands aren't supported in dbt Cloud jobs yet. However, you can add MetricFlow validations with your git provider (such as GitHub Actions) by installing MetricFlow (`python -m pip install metricflow`). This allows you to run MetricFlow commands as part of your continuous integration checks on PRs.
+Using MetricFlow with dbt Cloud means you won't need to manage versioning — your dbt Cloud account will automatically manage the versioning.
+
+**dbt Cloud jobs** — MetricFlow commands aren't supported in dbt Cloud jobs yet. However, you can add MetricFlow validations with your git provider (such as GitHub Actions) by installing MetricFlow (`python -m pip install metricflow`). This allows you to run MetricFlow commands as part of your continuous integration checks on PRs.
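+
+For example, a minimal sketch of such a CI check might look like the following (it assumes your adapter and `profiles.yml` are already available in the CI environment):
+
+```shell
+# Minimal sketch of a CI step (for example, in GitHub Actions) that validates
+# MetricFlow configs on a pull request. Assumes the dbt adapter and profiles.yml
+# are already set up in the CI environment; run `mf --help` to confirm the
+# commands available in your installed version.
+python -m pip install metricflow
+mf validate-configs
+```
+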
-MetricFlow commands are embedded in the dbt Cloud CLI, which means you can immediately run them once you install the dbt Cloud CLI.
-
-A benefit to using the dbt Cloud is that you won't need to manage versioning — your dbt Cloud account will automatically manage the versioning.
+- MetricFlow commands are embedded in the dbt Cloud CLI. This means you can immediately run them once you install the dbt Cloud CLI and don't need to install MetricFlow separately.
+- You don't need to manage versioning — your dbt Cloud account will automatically manage the versioning for you.
@@ -35,7 +36,7 @@ A benefit to using the dbt Cloud is that you won't need to manage versioning &md
You can create metrics using MetricFlow in the dbt Cloud IDE. However, support for running MetricFlow commands in the IDE will be available soon.
:::
-A benefit to using the dbt Cloud is that you won't need to manage versioning — your dbt Cloud account will automatically manage the versioning.
+
diff --git a/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md b/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md
index 5f1c4cae725..c265529fb49 100644
--- a/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md
+++ b/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md
@@ -42,10 +42,12 @@ alter user jsmith set rsa_public_key='MIIBIjANBgkqh...';
```
2. Finally, set the **Private Key** and **Private Key Passphrase** fields in the **Credentials** page to finish configuring dbt Cloud to authenticate with Snowflake using a key pair.
-
- **Note:** At this time ONLY Encrypted Private Keys are supported by dbt Cloud, and the keys must be of size 4096 or smaller.
-3. To successfully fill in the Private Key field, you **must** include commented lines when you add the passphrase. Leaving the **Private Key Passphrase** field empty will return an error. If you're receiving a `Could not deserialize key data` or `JWT token` error, refer to [Troubleshooting](#troubleshooting) for more info.
+**Note:** Unencrypted private keys are permitted. Use a passphrase only if needed.
+As of dbt version 1.5.0, you can use a `private_key` string in place of `private_key_path`. This `private_key` string can be either Base64-encoded DER format for the key bytes or plain-text PEM format. For more details on key generation, refer to the [Snowflake documentation](https://community.snowflake.com/s/article/How-to-configure-Snowflake-key-pair-authentication-fields-in-dbt-connection).
+
+
+4. To successfully fill in the Private Key field, you _must_ include commented lines. If you receive a `Could not deserialize key data` or `JWT token` error, refer to [Troubleshooting](#troubleshooting) for more info.
**Example:**
diff --git a/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md b/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md
index 121cab68ce7..61fe47a235a 100644
--- a/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md
+++ b/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md
@@ -13,14 +13,14 @@ Use this dbt Cloud IDE page to help you quickly reference some common operation
|--------|----------------|------------------|
| View a full list of editor shortcuts | Fn-F1 | Fn-F1 |
| Select a file to open | Command-O | Control-O |
-| Open the command palette to invoke dbt commands and actions | Command-P or Command-Shift-P | Control-P or Control-Shift-P |
-| Multi-edit by selecting multiple lines | Option-click or Shift-Option-Command | Hold Alt and click |
+| Close the currently active editor tab | Option-W | Alt-W |
| Preview code | Command-Enter | Control-Enter |
| Compile code | Command-Shift-Enter | Control-Shift-Enter |
-| Reveal a list of dbt functions | Enter two underscores `__` | Enter two underscores `__` |
-| Toggle open the [Invocation history drawer](/docs/cloud/dbt-cloud-ide/ide-user-interface#invocation-history) located on the bottom of the IDE. | Control-backtick (or Control + `) | Control-backtick (or Ctrl + `) |
-| Add a block comment to selected code. SQL files will use the Jinja syntax `({# #})` rather than the SQL one `(/* */)`. Markdown files will use the Markdown syntax `(<!-- -->)` | Command-Option-/ | Control-Alt-/ |
-| Close the currently active editor tab | Option-W | Alt-W |
+| Reveal a list of dbt functions in the editor | Enter two underscores `__` | Enter two underscores `__` |
+| Open the command palette to invoke dbt commands and actions | Command-P / Command-Shift-P | Control-P / Control-Shift-P |
+| Multi-edit in the editor by selecting multiple lines | Option-Click / Shift-Option-Command / Shift-Option-Click | Hold Alt and Click |
+| Open the [**Invocation History Drawer**](/docs/cloud/dbt-cloud-ide/ide-user-interface#invocation-history) located at the bottom of the IDE. | Control-backtick (or Control + `) | Control-backtick (or Ctrl + `) |
+| Add a block comment to the selected code. SQL files will use the Jinja syntax `({# #})` rather than the SQL one `(/* */)`. Markdown files will use the Markdown syntax `(<!-- -->)` | Command-Option-/ | Control-Alt-/ |
## Related docs
diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md
new file mode 100644
index 00000000000..0c43a287bbe
--- /dev/null
+++ b/website/docs/docs/cloud/migration.md
@@ -0,0 +1,45 @@
+---
+title: "Multi-cell migration checklist"
+id: migration
+description: "Prepare for account migration to AWS cell-based architecture."
+pagination_next: null
+pagination_prev: null
+---
+
+dbt Labs is in the process of migrating dbt Cloud to a new _cell-based architecture_. This architecture will be the foundation of dbt Cloud for years to come, and will bring improved scalability, reliability, and security to all customers and users of dbt Cloud.
+
+There is some preparation required to ensure a successful migration.
+
+Migrations are being scheduled on a per-account basis. _If you haven't received any communication (either with a banner or by email) about a migration date, you don't need to take any action at this time._ dbt Labs will share migration date information with you, with appropriate advance notice, before we complete any migration steps in the dbt Cloud backend.
+
+This document outlines the steps that you must take to prevent service disruptions before your environment is migrated over to the cell-based architecture. The migration will impact areas such as login, IP restrictions, and API access.
+
+## Pre-migration checklist
+
+Prior to your migration date, your dbt Cloud account admin will need to make some changes to your account.
+
+If your account is scheduled for migration, you will see a banner indicating your migration date when you log in. If you don't see a banner, you don't need to take any action.
+
+1. **IP addresses** — dbt Cloud will be using new IPs to access your warehouse after the migration. Make sure to allow inbound traffic from these IPs in your firewall and include them in any database grants. All six of the IPs below should be added to allowlists.
+ * Old IPs: `52.45.144.63`, `54.81.134.249`, `52.22.161.231`
+ * New IPs: `52.3.77.232`, `3.214.191.130`, `34.233.79.135`
+2. **APIs and integrations** — Each dbt Cloud account will be allocated a static access URL, like `aa000.us1.dbt.com`. You should begin migrating your API access and partner integrations to use the new static subdomain as soon as possible. You can find your access URL on:
+ * Any page where you generate or manage API tokens.
+ * The **Account Settings** > **Account page**.
+
+ :::important Multiple account access
+ Be careful: each account that you have access to will have a different, dedicated [access URL](https://next.docs.getdbt.com/docs/cloud/about-cloud/access-regions-ip-addresses#accessing-your-account).
+ :::
+
+3. **IDE sessions** — Any uncommitted changes in the IDE might be lost during the migration process. dbt Labs _strongly_ encourages you to commit all changes in the IDE before your scheduled migration time.
+4. **User invitations** — Any pending user invitations will be invalidated during the migration. You can resend the invitations once the migration is complete.
+5. **Git integrations** — Integrations with GitHub, GitLab, and Azure DevOps will need to be manually updated. dbt Labs will not be migrating any accounts using these integrations at this time. If you're using one of these integrations and your account is scheduled for migration, please contact support and we will delay your migration.
+6. **SSO integrations** — Integrations with SSO identity providers (IdPs) will need to be manually updated. dbt Labs will not be migrating any accounts using SSO at this time. If you're using one of these integrations and your account is scheduled for migration, please contact support and we will delay your migration.
+
+## Post-migration
+
+After migration, if you completed all the [Pre-migration checklist](#pre-migration-checklist) items, your dbt Cloud resources and jobs will continue to work as they did before.
+
+You have the option to log in to dbt Cloud at a different URL:
+ * If you were previously logging in at `cloud.getdbt.com`, you should instead plan to log in at `us1.dbt.com`. The original URL will still work, but you’ll have to click through to be redirected upon login.
+ * You may also log in directly with your account’s unique [access URL](https://next.docs.getdbt.com/docs/cloud/about-cloud/access-regions-ip-addresses#accessing-your-account).
\ No newline at end of file
diff --git a/website/docs/docs/core/connect-data-platform/snowflake-setup.md b/website/docs/docs/core/connect-data-platform/snowflake-setup.md
index 2b426ef667b..2ab5e64e36a 100644
--- a/website/docs/docs/core/connect-data-platform/snowflake-setup.md
+++ b/website/docs/docs/core/connect-data-platform/snowflake-setup.md
@@ -98,7 +98,8 @@ Along with adding the `authenticator` parameter, be sure to run `alter account s
### Key Pair Authentication
-To use key pair authentication, omit a `password` and instead provide a `private_key_path` and, optionally, a `private_key_passphrase` in your target. **Note:** Versions of dbt before 0.16.0 required that private keys were encrypted and a `private_key_passphrase` was provided. This behavior was changed in dbt v0.16.0.
+To use key pair authentication, skip the `password` and provide a `private_key_path`. If needed, you can also add a `private_key_passphrase`.
+**Note**: Unencrypted private keys are accepted, so add a passphrase only if necessary.
Starting from [dbt v1.5.0](/docs/dbt-versions/core), you have the option to use a `private_key` string instead of a `private_key_path`. The `private_key` string should be in either Base64-encoded DER format, representing the key bytes, or a plain-text PEM format. Refer to [Snowflake documentation](https://docs.snowflake.com/developer-guide/python-connector/python-connector-example#using-key-pair-authentication-key-pair-rotation) for more info on how they generate the key.
diff --git a/website/docs/docs/core/connect-data-platform/spark-setup.md b/website/docs/docs/core/connect-data-platform/spark-setup.md
index 93595cea3f6..9d9e0c9d5fb 100644
--- a/website/docs/docs/core/connect-data-platform/spark-setup.md
+++ b/website/docs/docs/core/connect-data-platform/spark-setup.md
@@ -20,10 +20,6 @@ meta:
-:::note
-See [Databricks setup](#databricks-setup) for the Databricks version of this page.
-:::
-
import SetUpPages from '/snippets/_setup-pages-intro.md';
@@ -204,6 +200,7 @@ connect_retries: 3
+
### Server side configuration
Spark can be customized using [Application Properties](https://spark.apache.org/docs/latest/configuration.html). Using these properties the execution can be customized, for example, to allocate more memory to the driver process. Also, the Spark SQL runtime can be set through these properties. For example, this allows the user to [set a Spark catalogs](https://spark.apache.org/docs/latest/configuration.html#spark-sql).
diff --git a/website/docs/docs/core/connect-data-platform/teradata-setup.md b/website/docs/docs/core/connect-data-platform/teradata-setup.md
index 1a30a1a4a54..4f467968716 100644
--- a/website/docs/docs/core/connect-data-platform/teradata-setup.md
+++ b/website/docs/docs/core/connect-data-platform/teradata-setup.md
@@ -38,6 +38,7 @@ import SetUpPages from '/snippets/_setup-pages-intro.md';
|1.4.x.x | ❌ | ✅ | ✅ | ✅ | ✅ | ✅
|1.5.x | ❌ | ✅ | ✅ | ✅ | ✅ | ✅
|1.6.x | ❌ | ❌ | ✅ | ✅ | ✅ | ✅
+|1.7.x | ❌ | ❌ | ✅ | ✅ | ✅ | ✅
## dbt dependent packages version compatibility
@@ -45,6 +46,7 @@ import SetUpPages from '/snippets/_setup-pages-intro.md';
|--------------|------------|-------------------|----------------|
| 1.2.x | 1.2.x | 0.1.0 | 0.9.x or below |
| 1.6.7 | 1.6.7 | 1.1.1 | 1.1.1 |
+| 1.7.0 | 1.7.3 | 1.1.1 | 1.1.1 |
### Connecting to Teradata
@@ -172,6 +174,8 @@ For using cross DB macros, teradata-utils as a macro namespace will not be used,
| Cross-database macros | type_string | :white_check_mark: | custom macro provided |
| Cross-database macros | last_day | :white_check_mark: | no customization needed, see [compatibility note](#last_day) |
| Cross-database macros | width_bucket | :white_check_mark: | no customization
+| Cross-database macros | generate_series | :white_check_mark: | custom macro provided
+| Cross-database macros | date_spine | :white_check_mark: | no customization
#### examples for cross DB macros
diff --git a/website/docs/docs/core/connect-data-platform/trino-setup.md b/website/docs/docs/core/connect-data-platform/trino-setup.md
index a7dc658358f..4caa56dcb00 100644
--- a/website/docs/docs/core/connect-data-platform/trino-setup.md
+++ b/website/docs/docs/core/connect-data-platform/trino-setup.md
@@ -4,7 +4,7 @@ description: "Read this guide to learn about the Starburst/Trino warehouse setup
id: "trino-setup"
meta:
maintained_by: Starburst Data, Inc.
- authors: Marius Grama, Przemek Denkiewicz, Michiel de Smet
+ authors: Marius Grama, Przemek Denkiewicz, Michiel de Smet, Damian Owsianny
github_repo: 'starburstdata/dbt-trino'
pypi_package: 'dbt-trino'
min_core_version: 'v0.20.0'
@@ -30,7 +30,7 @@ The parameters for setting up a connection are for Starburst Enterprise, Starbur
## Host parameters
-The following profile fields are always required except for `user`, which is also required unless you're using the `oauth`, `cert`, or `jwt` authentication methods.
+The following profile fields are always required, except for `user`, which isn't required if you're using the `oauth`, `oauth_console`, `cert`, or `jwt` authentication methods.
| Field | Example | Description |
| --------- | ------- | ----------- |
@@ -71,6 +71,7 @@ The authentication methods that dbt Core supports are:
- `jwt` — JSON Web Token (JWT)
- `certificate` — Certificate-based authentication
- `oauth` — Open Authentication (OAuth)
+- `oauth_console` — Open Authentication (OAuth) with authentication URL printed to the console
- `none` — None, no authentication
Set the `method` field to the authentication method you intend to use for the connection. For a high-level introduction to authentication in Trino, see [Trino Security: Authentication types](https://trino.io/docs/current/security/authentication-types.html).
@@ -85,6 +86,7 @@ Click on one of these authentication methods for further details on how to confi
{label: 'JWT', value: 'jwt'},
{label: 'Certificate', value: 'certificate'},
{label: 'OAuth', value: 'oauth'},
+ {label: 'OAuth (console)', value: 'oauth_console'},
{label: 'None', value: 'none'},
]}
>
@@ -269,7 +271,36 @@ sandbox-galaxy:
host: bunbundersders.trino.galaxy-dev.io
catalog: dbt_target
schema: dataders
- port: 433
+ port: 443
+```
+
+
+
+
+
+The only authentication parameter to set for OAuth 2.0 is `method: oauth_console`. If you're using Starburst Enterprise or Starburst Galaxy, you must enable OAuth 2.0 in Starburst before you can use this authentication method.
+
+For more information, refer to both [OAuth 2.0 authentication](https://trino.io/docs/current/security/oauth2.html) in the Trino docs and the [README](https://github.com/trinodb/trino-python-client#oauth2-authentication) for the Trino Python client.
+
+The only difference between `oauth_console` and `oauth` is:
+- `oauth` — An authentication URL automatically opens in a browser.
+- `oauth_console` — A URL is printed to the console.
+
+It's recommended that you install `keyring` to cache the OAuth 2.0 token over multiple dbt invocations by running `python -m pip install 'trino[external-authentication-token-cache]'`. The `keyring` package is not installed by default.
+
+#### Example profiles.yml for OAuth (console)
+
+```yaml
+sandbox-galaxy:
+  target: oauth_console
+  outputs:
+    oauth_console:
+      type: trino
+      method: oauth_console
+      host: bunbundersders.trino.galaxy-dev.io
+      catalog: dbt_target
+      schema: dataders
+      port: 443
```
diff --git a/website/docs/docs/core/connect-data-platform/vertica-setup.md b/website/docs/docs/core/connect-data-platform/vertica-setup.md
index b1424289137..8e499d68b3e 100644
--- a/website/docs/docs/core/connect-data-platform/vertica-setup.md
+++ b/website/docs/docs/core/connect-data-platform/vertica-setup.md
@@ -6,7 +6,7 @@ meta:
authors: 'Vertica (Former authors: Matthew Carter, Andy Regan, Andrew Hedengren)'
github_repo: 'vertica/dbt-vertica'
pypi_package: 'dbt-vertica'
- min_core_version: 'v1.6.0 and newer'
+ min_core_version: 'v1.7.0'
cloud_support: 'Not Supported'
min_supported_version: 'Vertica 23.4.0'
slack_channel_name: 'n/a'
@@ -46,10 +46,12 @@ your-profile:
username: [your username]
password: [your password]
database: [database name]
+ oauth_access_token: [access token]
schema: [dbt schema]
connection_load_balance: True
backup_server_node: [list of backup hostnames or IPs]
retries: [1 or more]
+
threads: [1 or more]
target: dev
```
@@ -70,6 +72,7 @@ your-profile:
| username | The username to use to connect to the server. | Yes | None | dbadmin|
password |The password to use for authenticating to the server. |Yes|None|my_password|
database |The name of the database running on the server. |Yes | None | my_db |
+| oauth_access_token | To authenticate via OAuth, provide an OAuth Access Token that authorizes a user to the database. | No | "" | Default: "" |
schema| The schema to build models into.| No| None |VMart|
connection_load_balance| A Boolean value that indicates whether the connection can be redirected to a host in the database other than host.| No| True |True|
backup_server_node| List of hosts to connect to if the primary host specified in the connection (host, port) is unreachable. Each item in the list should be either a host string (using default port 5433) or a (host, port) tuple. A host can be a host name or an IP address.| No| None |['123.123.123.123','www.abc.com',('123.123.123.124',5433)]|
diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md
new file mode 100644
index 00000000000..401b43fb333
--- /dev/null
+++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md
@@ -0,0 +1,22 @@
+---
+title: "dbt Semantic Layer updates for December 2023"
+description: "December 2023: Enhanced Tableau integration, BIGINT support, LookML to MetricFlow conversion, and deprecation of legacy features."
+sidebar_label: "Update and fixes: dbt Semantic Layer"
+sidebar_position: 08
+date: 2023-12-22
+---
+The dbt Labs team continues to work on adding new features, fixing bugs, and increasing reliability for the dbt Semantic Layer. The following list explains the updates and fixes for December 2023 in more detail.
+
+## Bug fixes
+
+- Tableau integration — The dbt Semantic Layer integration with Tableau now supports queries that resolve to a "NOT IN" clause. This applies to using "exclude" in the filtering user interface. Previously it wasn’t supported.
+- `BIGINT` support — The dbt Semantic Layer can now support `BIGINT` values with precision greater than 18. Previously it would return an error.
+- Memory leak — Fixed a memory leak in the JDBC API that would previously lead to intermittent errors when querying it.
+- Data conversion support — Added support for converting various Redshift and Postgres-specific data types. Previously, the driver would throw an error when encountering columns with those types.
+
+
+## Improvements
+
+- Deprecation — We deprecated [dbt Metrics and the legacy dbt Semantic Layer](/docs/dbt-versions/release-notes/Dec-2023/legacy-sl), both supported on dbt version 1.5 or lower. This change came into effect on December 15th, 2023.
+- Improved dbt converter tool — The [dbt converter tool](https://github.com/dbt-labs/dbt-converter) can now help automate some of the work in converting from LookML (Looker's modeling language) for those who are migrating. Previously this wasn’t available.
+
diff --git a/website/docs/reference/resource-configs/trino-configs.md b/website/docs/reference/resource-configs/trino-configs.md
index 21df13feac4..9ee62959f76 100644
--- a/website/docs/reference/resource-configs/trino-configs.md
+++ b/website/docs/reference/resource-configs/trino-configs.md
@@ -97,8 +97,9 @@ The `dbt-trino` adapter supports these modes in `table` materialization, which y
- `rename` — Creates an intermediate table, renames the target table to the backup one, and renames the intermediate table to the target one.
- `drop` — Drops and re-creates a table. This overcomes the table rename limitation in AWS Glue.
+- `replace` — Replaces a table using the CREATE OR REPLACE clause. Support for table replacement varies across connectors. Refer to the connector documentation for details.
-The recommended `table` materialization uses `on_table_exists = 'rename'` and is also the default. You can change this default configuration by editing _one_ of these files:
+If CREATE OR REPLACE is supported by the underlying connector, `replace` is the recommended option. Otherwise, the recommended `table` materialization uses `on_table_exists = 'rename'`, which is also the default. You can change this default configuration by editing _one_ of these files:
- the SQL file for your model
- the `dbt_project.yml` configuration file
diff --git a/website/sidebars.js b/website/sidebars.js
index a82b2e06ec2..6bb630037c1 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -135,7 +135,6 @@ const sidebarSettings = {
"docs/cloud/secure/redshift-privatelink",
"docs/cloud/secure/postgres-privatelink",
"docs/cloud/secure/vcs-privatelink",
- "docs/cloud/secure/ip-restrictions",
],
}, // PrivateLink
"docs/cloud/billing",
@@ -1028,6 +1027,8 @@ const sidebarSettings = {
id: "best-practices/how-we-build-our-metrics/semantic-layer-1-intro",
},
items: [
+ "best-practices/how-we-build-our-metrics/semantic-layer-1-intro",
+ "best-practices/how-we-build-our-metrics/semantic-layer-2-setup",
"best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models",
"best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics",
"best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart",
diff --git a/website/snippets/_cloud-environments-info.md b/website/snippets/_cloud-environments-info.md
index 6e096b83750..6b6eb1c2761 100644
--- a/website/snippets/_cloud-environments-info.md
+++ b/website/snippets/_cloud-environments-info.md
@@ -42,6 +42,13 @@ For improved reliability and performance on your job runs, you can enable dbt Cl
dbt Cloud caches your project's Git repo after each successful run and retains it for 8 days if there are no repo updates. It caches all packages regardless of installation method and does not fetch code outside of the job runs.
+dbt Cloud will use the cached copy of your project's Git repo under these circumstances:
+
+- Outages occur from third-party services (for example, the [dbt package hub](https://hub.getdbt.com/)).
+- Git authentication fails.
+- There are syntax errors in the `packages.yml` file. You can set up and use [continuous integration (CI)](/docs/deploy/continuous-integration) to find these errors sooner.
+- A package doesn't work with the current dbt version. You can set up and use [continuous integration (CI)](/docs/deploy/continuous-integration) to identify this issue sooner.
+
To enable Git repository caching, select **Account settings** from the gear menu and enable the **Repository caching** option.
diff --git a/website/snippets/dbt-databricks-for-databricks.md b/website/snippets/dbt-databricks-for-databricks.md
index f1c5ec84af1..1e18da33d42 100644
--- a/website/snippets/dbt-databricks-for-databricks.md
+++ b/website/snippets/dbt-databricks-for-databricks.md
@@ -1,4 +1,5 @@
:::info If you're using Databricks, use `dbt-databricks`
-If you're using Databricks, the `dbt-databricks` adapter is recommended over `dbt-spark`.
-If you're still using dbt-spark with Databricks consider [migrating from the dbt-spark adapter to the dbt-databricks adapter](/guides/migrate-from-spark-to-databricks).
+If you're using Databricks, the `dbt-databricks` adapter is recommended over `dbt-spark`. If you're still using dbt-spark with Databricks consider [migrating from the dbt-spark adapter to the dbt-databricks adapter](/guides/migrate-from-spark-to-databricks).
+
+For the Databricks version of this page, refer to [Databricks setup](#databricks-setup).
:::
diff --git a/website/snippets/warehouse-setups-cloud-callout.md b/website/snippets/warehouse-setups-cloud-callout.md
index 3bc1147a637..56edd3a96ea 100644
--- a/website/snippets/warehouse-setups-cloud-callout.md
+++ b/website/snippets/warehouse-setups-cloud-callout.md
@@ -1,3 +1,3 @@
-:::info `profiles.yml` file is for CLI users only
-If you're using dbt Cloud, you don't need to create a `profiles.yml` file. This file is only for CLI users. To connect your data platform to dbt Cloud, refer to [About data platforms](/docs/cloud/connect-data-platform/about-connections).
+:::info `profiles.yml` file is for dbt Core users only
+If you're using dbt Cloud, you don't need to create a `profiles.yml` file. This file is only for dbt Core users. To connect your data platform to dbt Cloud, refer to [About data platforms](/docs/cloud/connect-data-platform/about-connections).
:::