Merge branch 'current' into partner_integration_guide
mirnawong1 authored Dec 22, 2023
2 parents 13d1fcf + 9924d72 commit c215a93
Showing 15 changed files with 130 additions and 15 deletions.
2 changes: 1 addition & 1 deletion website/blog/2023-12-20-partner-integration-guide.md
@@ -23,7 +23,7 @@ Here I'll cover how to get started, potential use cases you want to solve for, a

## New to dbt Cloud?

If you're new to dbt and dbt Cloud, we recommend you and your software developers try our [Getting Started Quickstarts](https://docs.getdbt.com/guides) after reading [What is dbt?](https://docs.getdbt.com/docs/introduction). The documentation will help you familiarize yourself with how our users interact with dbt. By going through this, you will also create a sample dbt project to test your integration.
If you're new to dbt and dbt Cloud, we recommend you and your software developers try our [Getting Started Quickstarts](https://docs.getdbt.com/guides) after reading [What is dbt](https://docs.getdbt.com/docs/introduction). The documentation will help you familiarize yourself with how our users interact with dbt. By going through this, you will also create a sample dbt project to test your integration.

If you require a partner dbt Cloud account to test on, we can upgrade an existing account or a trial account. This account may only be used for development, training, and demonstration purposes. Please contact your partner manager if you're interested and provide the account ID (shown in the URL). Our partner account includes all of the enterprise-level functionality and can be provided with a signed partnerships agreement.

@@ -2,6 +2,8 @@
title: "Intro to MetricFlow"
description: Getting started with dbt and MetricFlow
hoverSnippet: Learn how to get started with dbt and MetricFlow
pagination_next: "best-practices/how-we-build-our-metrics/semantic-layer-2-setup"
pagination_prev: null
---

Flying cars, hoverboards, and true self-service analytics: this is the future we were promised. The first two might still be a few years out, but real self-service analytics is here today. With dbt Cloud's Semantic Layer, you can resolve the tension between accuracy and flexibility that has hampered analytics tools for years, empowering everybody in your organization to explore a shared reality of metrics. Best of all for analytics engineers, building with these new tools will significantly [DRY](https://docs.getdbt.com/terms/dry) up and simplify your codebase. As you'll see, the deep interaction between your dbt models and the Semantic Layer makes your dbt project the ideal place to craft your metrics.
@@ -2,6 +2,7 @@
title: "Set up MetricFlow"
description: Getting started with dbt and MetricFlow
hoverSnippet: Learn how to get started with dbt and MetricFlow
pagination_next: "best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models"
---

## Getting started
@@ -13,9 +14,23 @@ git clone git@github.com:dbt-labs/jaffle-sl-template.git
cd path/to/project
```

Next, before you start writing code, you need to install MetricFlow as an extension of a dbt adapter from PyPI (dbt Core users only). The MetricFlow is compatible with Python versions 3.8 through 3.11.
Next, before you start writing code, you need to install MetricFlow:

We'll use pip to install MetricFlow and our dbt adapter:
<Tabs>

<TabItem value="cloud" label="dbt Cloud">

- [dbt Cloud CLI](/docs/cloud/cloud-cli-installation) &mdash; MetricFlow commands are embedded in the dbt Cloud CLI. You can run them as soon as you install the dbt Cloud CLI. Using dbt Cloud also means you won't need to manage versioning &mdash; your dbt Cloud account handles it automatically.

- [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud) &mdash; You can create metrics using MetricFlow in the dbt Cloud IDE. However, support for running MetricFlow commands in the IDE will be available soon.

</TabItem>

<TabItem value="core" label="dbt Core">

- Download MetricFlow as an extension of a dbt adapter from PyPI (dbt Core users only). MetricFlow is compatible with Python versions 3.8 through 3.11.
- **Note**: You'll need to manage versioning between dbt Core, your adapter, and MetricFlow.
- We'll use pip to install MetricFlow and our dbt adapter:

```shell
# activate a virtual environment for your project
python -m pip install "dbt-metricflow[adapter name]"
# e.g. python -m pip install "dbt-metricflow[snowflake]"
```
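
To confirm the install worked, you can check that the `mf` entry point is available (a quick sanity check, not a required step):

```shell
# prints the available MetricFlow commands if the install succeeded
mf --help
```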

Lastly, to get to the pre-Semantic Layer starting state, checkout the `start-here` branch.
</TabItem>
</Tabs>

- Now that you're ready to use MetricFlow, get to the pre-Semantic Layer starting state by checking out the `start-here` branch:

```shell
git checkout start-here
```

For more information, refer to the [MetricFlow commands](/docs/build/metricflow-commands) or a [quickstart](/guides) to get more familiar with setting up a dbt project.
For more information, refer to the [MetricFlow commands](/docs/build/metricflow-commands) or the [quickstart guides](/guides) to get more familiar with setting up a dbt project.

## Basic commands

@@ -2,6 +2,7 @@
title: "Building semantic models"
description: Getting started with dbt and MetricFlow
hoverSnippet: Learn how to get started with dbt and MetricFlow
pagination_next: "best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics"
---

## How to build a semantic model
@@ -2,6 +2,7 @@
title: "Building metrics"
description: Getting started with dbt and MetricFlow
hoverSnippet: Learn how to get started with dbt and MetricFlow
pagination_next: "best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart"
---

## How to build metrics
@@ -2,6 +2,7 @@
title: "Refactor an existing mart"
description: Getting started with dbt and MetricFlow
hoverSnippet: Learn how to get started with dbt and MetricFlow
pagination_next: "best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics"
---

## A new approach
@@ -2,6 +2,7 @@
title: "More advanced metrics"
description: Getting started with dbt and MetricFlow
hoverSnippet: Learn how to get started with dbt and MetricFlow
pagination_next: "best-practices/how-we-build-our-metrics/semantic-layer-7-conclusion"
---

## More advanced metric types
@@ -2,6 +2,7 @@
title: "Best practices"
description: Getting started with dbt and MetricFlow
hoverSnippet: Learn how to get started with dbt and MetricFlow
pagination_next: null
---

## Putting it all together
11 changes: 6 additions & 5 deletions website/docs/docs/build/metricflow-commands.md
@@ -17,15 +17,16 @@ MetricFlow is compatible with Python versions 3.8, 3.9, 3.10, and 3.11.

MetricFlow is a dbt package that allows you to define and query metrics in your dbt project. You can use MetricFlow to query metrics in your dbt project in the dbt Cloud CLI, dbt Cloud IDE, or dbt Core.

**Note** &mdash; MetricFlow commands aren't supported in dbt Cloud jobs yet. However, you can add MetricFlow validations with your git provider (such as GitHub Actions) by installing MetricFlow (`python -m pip install metricflow`). This allows you to run MetricFlow commands as part of your continuous integration checks on PRs.
Using MetricFlow with dbt Cloud means you won't need to manage versioning &mdash; your dbt Cloud account handles it automatically.

**dbt Cloud jobs** &mdash; MetricFlow commands aren't supported in dbt Cloud jobs yet. However, you can add MetricFlow validations with your git provider (such as GitHub Actions) by installing MetricFlow (`python -m pip install metricflow`). This allows you to run MetricFlow commands as part of your continuous integration checks on PRs.
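
As a sketch of what that can look like in GitHub Actions (the workflow name, adapter choice, and secret names are assumptions, not prescribed by dbt):

```yaml
# .github/workflows/metricflow-checks.yml -- a minimal sketch
name: MetricFlow validations
on: pull_request

jobs:
  validate-semantic-layer:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install dbt and MetricFlow
        run: python -m pip install "dbt-metricflow[snowflake]"
      - name: Validate semantic model configs
        run: mf validate-configs
        env:
          # hypothetical: supply your warehouse credentials as repo secrets
          SNOWFLAKE_PASSWORD: ${{ secrets.SNOWFLAKE_PASSWORD }}
```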

<Tabs>

<TabItem value="cloudcli" label="dbt Cloud CLI">

MetricFlow commands are embedded in the dbt Cloud CLI, which means you can immediately run them once you install the dbt Cloud CLI.

A benefit to using the dbt Cloud is that you won't need to manage versioning &mdash; your dbt Cloud account will automatically manage the versioning.
- MetricFlow commands are embedded in the dbt Cloud CLI. This means you can immediately run them once you install the dbt Cloud CLI and don't need to install MetricFlow separately.
- You don't need to manage versioning &mdash; your dbt Cloud account handles it for you.
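
For example, once the Cloud CLI is installed, a metric query looks something like this (the metric and dimension names here are placeholders):

```shell
# query a metric, grouped by a time dimension (names are hypothetical)
dbt sl query --metrics order_total --group-by metric_time
```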

</TabItem>

@@ -35,7 +36,7 @@ A benefit to using the dbt Cloud is that you won't need to manage versioning &mdash; your dbt Cloud account will automatically manage the versioning.
You can create metrics using MetricFlow in the dbt Cloud IDE. However, support for running MetricFlow commands in the IDE will be available soon.
:::

A benefit to using the dbt Cloud is that you won't need to manage versioning &mdash; your dbt Cloud account will automatically manage the versioning.


</TabItem>

45 changes: 45 additions & 0 deletions website/docs/docs/cloud/migration.md
@@ -0,0 +1,45 @@
---
title: "Multi-cell migration checklist"
id: migration
description: "Prepare for account migration to AWS cell-based architecture."
pagination_next: null
pagination_prev: null
---

dbt Labs is in the process of migrating dbt Cloud to a new _cell-based architecture_. This architecture will be the foundation of dbt Cloud for years to come, and will bring improved scalability, reliability, and security to all customers and users of dbt Cloud.

There is some preparation required to ensure a successful migration.

Migrations are being scheduled on a per-account basis. _If you haven't received any communication (either with a banner or by email) about a migration date, you don't need to take any action at this time._ dbt Labs will share migration date information with you, with appropriate advance notice, before we complete any migration steps in the dbt Cloud backend.

This document outlines the steps that you must take to prevent service disruptions before your environment is migrated over to the cell-based architecture. The migration will impact areas such as login, IP restrictions, and API access.

## Pre-migration checklist

Prior to your migration date, your dbt Cloud account admin will need to make some changes to your account.

If your account is scheduled for migration, you will see a banner indicating your migration date when you log in. If you don't see a banner, you don't need to take any action.

1. **IP addresses** &mdash; dbt Cloud will use new IPs to access your warehouse after the migration. Make sure to allow inbound traffic from these IPs in your firewall and include them in any database grants (see the allowlist sketch after this checklist). All six of the IPs below should be added to allowlists.
    * Old IPs: `52.45.144.63`, `54.81.134.249`, `52.22.161.231`
    * New IPs: `52.3.77.232`, `3.214.191.130`, `34.233.79.135`
2. **APIs and integrations** &mdash; Each dbt Cloud account will be allocated a static access URL like `aa000.us1.dbt.com`. You should begin migrating your API access and partner integrations to use the new static subdomain as soon as possible (see the API sketch after this checklist). You can find your access URL on:
* Any page where you generate or manage API tokens.
* The **Account Settings** > **Account page**.

:::important Multiple account access
Be careful: each account that you have access to will have a different, dedicated [access URL](https://next.docs.getdbt.com/docs/cloud/about-cloud/access-regions-ip-addresses#accessing-your-account).
:::

3. **IDE sessions** &mdash; Any uncommitted changes in the IDE might be lost during the migration process. dbt Labs _strongly_ encourages you to commit all changes in the IDE before your scheduled migration time.
4. **User invitations** &mdash; Any pending user invitations will be invalidated during the migration. You can resend the invitations once the migration is complete.
5. **Git integrations** &mdash; Integrations with GitHub, GitLab, and Azure DevOps will need to be manually updated. dbt Labs will not be migrating any accounts using these integrations at this time. If you're using one of these integrations and your account is scheduled for migration, please contact support and we will delay your migration.
6. **SSO integrations** &mdash; Integrations with SSO identity providers (IdPs) will need to be manually updated. dbt Labs will not be migrating any accounts using SSO at this time. If you're using one of these integrations and your account is scheduled for migration, please contact support and we will delay your migration.
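
For item 1, a minimal sketch of an allowlist update, assuming a Snowflake warehouse governed by a network policy (the policy name is hypothetical; keep the old IPs until your migration completes):

```sql
-- hypothetical Snowflake network policy; adapt to your warehouse's allowlist mechanism
ALTER NETWORK POLICY dbt_cloud_policy SET ALLOWED_IP_LIST = (
  '52.45.144.63', '54.81.134.249', '52.22.161.231',  -- old IPs
  '52.3.77.232', '3.214.191.130', '34.233.79.135'    -- new IPs
);
```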
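For item 2, re-pointing an API call at the new access URL might look like the following (the `aa000.us1.dbt.com` subdomain from above, the account ID, and the token are placeholders):

```shell
# before the migration: regional URL
curl -H "Authorization: Token $DBT_CLOUD_API_TOKEN" \
  "https://cloud.getdbt.com/api/v2/accounts/12345/jobs/"

# after the migration: the account's dedicated access URL
curl -H "Authorization: Token $DBT_CLOUD_API_TOKEN" \
  "https://aa000.us1.dbt.com/api/v2/accounts/12345/jobs/"
```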

## Post-migration

After migration, if you completed all the [Pre-migration checklist](#pre-migration-checklist) items, your dbt Cloud resources and jobs will continue to work as they did before.

You have the option to log in to dbt Cloud at a different URL:
* If you were previously logging in at `cloud.getdbt.com`, you should instead plan to log in at `us1.dbt.com`. The original URL will still work, but you’ll have to click through to be redirected upon login.
* You may also log in directly with your account’s unique [access URL](https://next.docs.getdbt.com/docs/cloud/about-cloud/access-regions-ip-addresses#accessing-your-account).
37 changes: 34 additions & 3 deletions website/docs/docs/core/connect-data-platform/trino-setup.md
@@ -4,7 +4,7 @@ description: "Read this guide to learn about the Starburst/Trino warehouse setup
id: "trino-setup"
meta:
maintained_by: Starburst Data, Inc.
authors: Marius Grama, Przemek Denkiewicz, Michiel de Smet
authors: Marius Grama, Przemek Denkiewicz, Michiel de Smet, Damian Owsianny
github_repo: 'starburstdata/dbt-trino'
pypi_package: 'dbt-trino'
min_core_version: 'v0.20.0'
@@ -30,7 +30,7 @@ The parameters for setting up a connection are for Starburst Enterprise, Starbur

## Host parameters

The following profile fields are always required except for `user`, which is also required unless you're using the `oauth`, `cert`, or `jwt` authentication methods.
The following profile fields are always required, except for `user`, which you can omit if you're using the `oauth`, `oauth_console`, `cert`, or `jwt` authentication methods.

| Field | Example | Description |
| --------- | ------- | ----------- |
@@ -71,6 +71,7 @@ The authentication methods that dbt Core supports are:
- `jwt` &mdash; JSON Web Token (JWT)
- `certificate` &mdash; Certificate-based authentication
- `oauth` &mdash; Open Authentication (OAuth)
- `oauth_console` &mdash; Open Authentication (OAuth) with authentication URL printed to the console
- `none` &mdash; None, no authentication

Set the `method` field to the authentication method you intend to use for the connection. For a high-level introduction to authentication in Trino, see [Trino Security: Authentication types](https://trino.io/docs/current/security/authentication-types.html).
@@ -85,6 +86,7 @@ Click on one of these authentication methods for further details on how to confi
{label: 'JWT', value: 'jwt'},
{label: 'Certificate', value: 'certificate'},
{label: 'OAuth', value: 'oauth'},
{label: 'OAuth (console)', value: 'oauth_console'},
{label: 'None', value: 'none'},
]}
>
@@ -269,7 +271,36 @@ sandbox-galaxy:
      host: bunbundersders.trino.galaxy-dev.io
      catalog: dbt_target
      schema: dataders
      port: 433
      port: 443
```

</TabItem>

<TabItem value="oauth_console">

The only authentication parameter to set for OAuth 2.0 is `method: oauth_console`. If you're using Starburst Enterprise or Starburst Galaxy, you must enable OAuth 2.0 in Starburst before you can use this authentication method.

For more information, refer to both [OAuth 2.0 authentication](https://trino.io/docs/current/security/oauth2.html) in the Trino docs and the [README](https://github.com/trinodb/trino-python-client#oauth2-authentication) for the Trino Python client.

The only difference between `oauth_console` and `oauth` is:
- `oauth` &mdash; An authentication URL automatically opens in a browser.
- `oauth_console` &mdash; A URL is printed to the console.

It's recommended that you install `keyring` to cache the OAuth 2.0 token across multiple dbt invocations. To do so, run `python -m pip install 'trino[external-authentication-token-cache]'`; the `keyring` package isn't installed by default.

#### Example profiles.yml for OAuth (console)

```yaml
sandbox-galaxy:
  target: oauth_console
  outputs:
    oauth_console:
      type: trino
      method: oauth_console
      host: bunbundersders.trino.galaxy-dev.io
      catalog: dbt_target
      schema: dataders
      port: 443
```

</TabItem>
@@ -6,7 +6,7 @@ meta:
authors: 'Vertica (Former authors: Matthew Carter, Andy Regan, Andrew Hedengren)'
github_repo: 'vertica/dbt-vertica'
pypi_package: 'dbt-vertica'
min_core_version: 'v1.6.0 and newer'
min_core_version: 'v1.7.0'
cloud_support: 'Not Supported'
min_supported_version: 'Vertica 23.4.0'
slack_channel_name: 'n/a'
@@ -46,10 +46,12 @@ your-profile:
      username: [your username]
      password: [your password]
      database: [database name]
      oauth_access_token: [access token]
      schema: [dbt schema]
      connection_load_balance: True
      backup_server_node: [list of backup hostnames or IPs]
      retries: [1 or more]

      threads: [1 or more]
  target: dev
```
@@ -70,6 +72,7 @@ your-profile:
| username | The username to use to connect to the server. | Yes | None | dbadmin |
| password | The password to use for authenticating to the server. | Yes | None | my_password |
| database | The name of the database running on the server. | Yes | None | my_db |
| oauth_access_token | To authenticate via OAuth, provide an OAuth Access Token that authorizes a user to the database. | No | "" | Default: "" |
| schema | The schema to build models into. | No | None | VMart |
| connection_load_balance | A Boolean value that indicates whether the connection can be redirected to a host in the database other than host. | No | True | True |
| backup_server_node | List of hosts to connect to if the primary host specified in the connection (host, port) is unreachable. Each item in the list should be either a host string (using default port 5433) or a (host, port) tuple. A host can be a host name or an IP address. | No | None | ['123.123.123.123','www.abc.com',('123.123.123.124',5433)] |
3 changes: 2 additions & 1 deletion website/docs/reference/resource-configs/trino-configs.md
@@ -97,8 +97,9 @@ The `dbt-trino` adapter supports these modes in `table` materialization, which y

- `rename` &mdash; Creates an intermediate table, renames the target table to the backup one, and renames the intermediate table to the target one.
- `drop` &mdash; Drops and re-creates a table. This overcomes the table rename limitation in AWS Glue.
- `replace` &mdash; Replaces a table using the `CREATE OR REPLACE` clause. Support for table replacement varies across connectors. Refer to the connector documentation for details.

The recommended `table` materialization uses `on_table_exists = 'rename'` and is also the default. You can change this default configuration by editing _one_ of these files:
If `CREATE OR REPLACE` is supported by the underlying connector, `replace` is the recommended option. Otherwise, the recommended `table` materialization uses `on_table_exists = 'rename'`, which is also the default. You can change this default configuration by editing _one_ of these files (see the sketch below):
- the SQL file for your model
- the `dbt_project.yml` configuration file
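
For example, a minimal sketch of setting the mode per model (the model and `ref` names are hypothetical):

```sql
-- models/marts/fct_orders.sql (hypothetical model)
-- use 'replace' only where the connector supports CREATE OR REPLACE
{{ config(materialized='table', on_table_exists='replace') }}

select * from {{ ref('stg_orders') }}
```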

2 changes: 2 additions & 0 deletions website/sidebars.js
@@ -1027,6 +1027,8 @@ const sidebarSettings = {
id: "best-practices/how-we-build-our-metrics/semantic-layer-1-intro",
},
items: [
"best-practices/how-we-build-our-metrics/semantic-layer-1-intro",
"best-practices/how-we-build-our-metrics/semantic-layer-2-setup",
"best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models",
"best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics",
"best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart",