diff --git a/website/blog/2022-04-14-add-ci-cd-to-bitbucket.md b/website/blog/2022-04-14-add-ci-cd-to-bitbucket.md index eae8d595ca5..e871687d8cd 100644 --- a/website/blog/2022-04-14-add-ci-cd-to-bitbucket.md +++ b/website/blog/2022-04-14-add-ci-cd-to-bitbucket.md @@ -10,6 +10,8 @@ hide_table_of_contents: false date: 2022-05-06 is_featured: true +keywords: + - dbt core pipeline, slim ci pipeline, slim cd pipeline, bitbucket --- diff --git a/website/dbt-versions.js b/website/dbt-versions.js index e5a2b9f4290..871c3ce601e 100644 --- a/website/dbt-versions.js +++ b/website/dbt-versions.js @@ -10,16 +10,14 @@ * @property {string} EOLDate "End of Life" date which is used to show the EOL banner * @property {boolean} isPrerelease Boolean used for showing the prerelease banner * @property {string} customDisplay Allows setting a custom display name for the current version + * + * customDisplay for dbt Cloud should be a version ahead of latest dbt Core release (GA or beta). */ exports.versions = [ { version: "1.9.1", customDisplay: "Cloud (Versionless)", }, - { - version: "1.9", - isPrerelease: true, - }, { version: "1.8", EOLDate: "2025-04-15", diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-7-semantic-structure.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-7-semantic-structure.md index 295d86e9c20..5bfbea82dda 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-7-semantic-structure.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-7-semantic-structure.md @@ -20,6 +20,10 @@ The first thing you need to establish is how you’re going to consistently stru It’s not terribly difficult to shift between these (it can be done with some relatively straightforward shell scripting), and this is purely a decision based on your developers’ preference (i.e. it has no impact on execution or performance), so don’t feel locked in to either path. 
Just pick the one that feels right and you can always shift down the road if you change your mind. +:::tip +Make sure to save all semantic models and metrics under the directory defined in the [`model-paths`](/reference/project-configs/model-paths) (or a subdirectory of it, like `models/semantic_models/`). If you save them outside of this path, it will result in an empty `semantic_manifest.json` file, and your semantic models or metrics won't be recognized. +::: + ## Naming Next, establish your system for consistent file naming: diff --git a/website/docs/docs/build/hooks-operations.md b/website/docs/docs/build/hooks-operations.md index 9ed20291c34..6cec2a673c0 100644 --- a/website/docs/docs/build/hooks-operations.md +++ b/website/docs/docs/build/hooks-operations.md @@ -72,6 +72,41 @@ You can use hooks to provide database-specific functionality not available out-o You can also use a [macro](/docs/build/jinja-macros#macros) to bundle up hook logic. Check out some of the examples in the reference sections for [on-run-start and on-run-end hooks](/reference/project-configs/on-run-start-on-run-end) and [pre- and post-hooks](/reference/resource-configs/pre-hook-post-hook). You can configure hooks in a model's config block, in a properties file, or in `dbt_project.yml` (`<model_name>` and `<resource-path>` below are placeholders for your own resource names):

```sql
{{ config(
    pre_hook=[
        "{{ some_macro() }}"
    ]
) }}
```

```yaml
models:
  - name: <model_name>
    config:
      pre_hook:
        - "{{ some_macro() }}"
```

```yaml
models:
  <resource-path>:
    +pre-hook:
      - "{{ some_macro() }}"
```

## About operations Operations are [macros](/docs/build/jinja-macros#macros) that you can run using the [`run-operation`](/reference/commands/run-operation) command. As such, operations aren't actually a separate resource in your dbt project — they are just a convenient way to invoke a macro without needing to run a model.
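As a sketch of how an operation works (the macro name, grant statement, and role are illustrative, not from the original docs), you might define a macro like this and invoke it from the command line:

```sql
-- macros/grant_select.sql (hypothetical example macro)
{% macro grant_select(role) %}
    {% set sql %}
        -- target.schema resolves to the schema of the current target
        grant usage on schema {{ target.schema }} to role {{ role }};
        grant select on all tables in schema {{ target.schema }} to role {{ role }};
    {% endset %}
    {% do run_query(sql) %}
    {% do log("Privileges granted to role " ~ role, info=True) %}
{% endmacro %}
```

You could then run it without building any models via `dbt run-operation grant_select --args '{role: reporter}'`.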
diff --git a/website/docs/docs/build/jinja-macros.md b/website/docs/docs/build/jinja-macros.md index fc4a0cad3e8..bc91e3674c9 100644 --- a/website/docs/docs/build/jinja-macros.md +++ b/website/docs/docs/build/jinja-macros.md @@ -74,7 +74,7 @@ group by 1 You can recognize Jinja based on the delimiters the language uses, which we refer to as "curlies": - **Expressions `{{ ... }}`**: Expressions are used when you want to output a string. You can use expressions to reference [variables](/reference/dbt-jinja-functions/var) and call [macros](/docs/build/jinja-macros#macros). - **Statements `{% ... %}`**: Statements don't output a string. They are used for control flow, for example, to set up `for` loops and `if` statements, to [set](https://jinja.palletsprojects.com/en/3.1.x/templates/#assignments) or [modify](https://jinja.palletsprojects.com/en/3.1.x/templates/#expression-statement) variables, or to define macros. -- **Comments `{# ... #}`**: Jinja comments are used to prevent the text within the comment from executing or outputing a string. +- **Comments `{# ... #}`**: Jinja comments are used to prevent the text within the comment from executing or outputting a string. Don't use `--` to comment out Jinja, because dbt still renders Jinja inside SQL comments. When used in a dbt model, your Jinja needs to compile to a valid query.
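For example (a minimal sketch; the model names are made up), a Jinja comment disappears at compile time, while Jinja inside a SQL `--` comment is still rendered and still creates a dependency:

```sql
{# This ref is never rendered and creates no dependency: {{ ref('stg_orders') }} #}

-- This ref IS still rendered by dbt and creates a dependency: {{ ref('stg_orders') }}

select * from {{ ref('stg_customers') }}
```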
To check what SQL your Jinja compiles to: * **Using dbt Cloud:** Click the compile button to see the compiled SQL in the Compiled SQL pane diff --git a/website/docs/docs/cloud-integrations/avail-sl-integrations.md b/website/docs/docs/cloud-integrations/avail-sl-integrations.md index eea93c92b93..04d9d55acb4 100644 --- a/website/docs/docs/cloud-integrations/avail-sl-integrations.md +++ b/website/docs/docs/cloud-integrations/avail-sl-integrations.md @@ -20,7 +20,7 @@ import AvailIntegrations from '/snippets/_sl-partner-links.md'; ### Custom integration - [Exports](/docs/use-dbt-semantic-layer/exports) enable custom integration with additional tools that don't natively connect with the dbt Semantic Layer, such as PowerBI. -- Develop custom integrations using different languages and tools, supported through JDBC, ADBC, and GraphQL APIs. For more info, check out [our examples on GitHub](https://github.com/dbt-labs/example-semantic-layer-clients/). +- [Consume metrics](/docs/use-dbt-semantic-layer/consume-metrics) and develop custom integrations using different languages and tools, supported through the [JDBC](/docs/dbt-cloud-apis/sl-jdbc), ADBC, and [GraphQL](/docs/dbt-cloud-apis/sl-graphql) APIs, and the [Python SDK](/docs/dbt-cloud-apis/sl-python) library. For more info, check out [our examples on GitHub](https://github.com/dbt-labs/example-semantic-layer-clients/). - Connect to any tool that supports SQL queries. These tools must meet one of these two criteria: - Offers a generic JDBC driver option (such as DataGrip) or - Is compatible with the Arrow Flight SQL JDBC driver version 12.0.0 or higher.
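For instance, over the JDBC API you can query metric metadata and metric values with the `semantic_layer` syntax (the metric and dimension names below are illustrative; see the JDBC API docs for the full query syntax):

```sql
-- List the metrics defined in your project
select * from {{ semantic_layer.metrics() }}

-- Query a metric grouped by a time dimension (names are examples only)
select * from {{
    semantic_layer.query(metrics=['food_order_amount'],
                         group_by=[Dimension('metric_time')])
}}
```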
diff --git a/website/docs/docs/cloud/about-cloud/browsers.md b/website/docs/docs/cloud/about-cloud/browsers.md index 12665bc7b72..1e26d3a6d59 100644 --- a/website/docs/docs/cloud/about-cloud/browsers.md +++ b/website/docs/docs/cloud/about-cloud/browsers.md @@ -27,4 +27,4 @@ To improve your experience using dbt Cloud, we suggest that you turn off ad bloc A session is a period of time during which you’re signed in to a dbt Cloud account from a browser. If you close your browser, it will end your session and log you out. You'll need to log in again the next time you try to access dbt Cloud. -If you've logged in using [SSO](/docs/cloud/manage-access/sso-overview) or [OAuth](/docs/cloud/git/connect-github#personally-authenticate-with-github), you can customize your maximum session duration, which might vary depending on your identity provider (IdP). +If you've logged in using [SSO](/docs/cloud/manage-access/sso-overview), you can customize your maximum session duration, which might vary depending on your identity provider (IdP). diff --git a/website/docs/docs/cloud/git/connect-github.md b/website/docs/docs/cloud/git/connect-github.md index 4dc4aaf73e9..f230f70e1f6 100644 --- a/website/docs/docs/cloud/git/connect-github.md +++ b/website/docs/docs/cloud/git/connect-github.md @@ -7,7 +7,6 @@ sidebar_label: "Connect to GitHub" Connecting your GitHub account to dbt Cloud provides convenience and another layer of security to dbt Cloud: -- Log into dbt Cloud using OAuth through GitHub. - Import new GitHub repositories with a couple clicks during dbt Cloud project setup. - Clone repos using HTTPS rather than SSH. - Trigger [Continuous integration](/docs/deploy/continuous-integration)(CI) builds when pull requests are opened in GitHub. @@ -48,15 +47,15 @@ To connect your dbt Cloud account to your GitHub account: - Read and write access to Workflows 6. Once you grant access to the app, you will be redirected back to dbt Cloud and shown a linked account success state. 
You are now personally authenticated. -7. Ask your team members to [personally authenticate](/docs/cloud/git/connect-github#personally-authenticate-with-github) by connecting their GitHub profiles. +7. Ask your team members to individually authenticate by connecting their [personal GitHub profiles](#authenticate-your-personal-github-account). ## Limiting repository access in GitHub If you are the GitHub organization owner, you can also configure the dbt Cloud GitHub application to have access to only select repositories. This configuration must be done in GitHub, but we provide an easy link in dbt Cloud to start this process. -## Personally authenticate with GitHub +## Authenticate your personal GitHub account -Once the dbt Cloud admin has [set up a connection](/docs/cloud/git/connect-github#installing-dbt-cloud-in-your-github-account) to your organization GitHub account, you need to personally authenticate, which improves the security of dbt Cloud by enabling you to log in using OAuth through GitHub. +After the dbt Cloud administrator [sets up a connection](/docs/cloud/git/connect-github#installing-dbt-cloud-in-your-github-account) to your organization's GitHub account, you need to authenticate using your personal account. You must connect your personal GitHub profile to dbt Cloud to use the [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud) and [CLI](/docs/cloud/cloud-cli-installation) and verify your read and write access to the repository. :::info GitHub profile connection
## FAQs diff --git a/website/docs/docs/cloud/manage-access/set-up-snowflake-oauth.md b/website/docs/docs/cloud/manage-access/set-up-snowflake-oauth.md index 3b3b9c2d870..e9c4236438e 100644 --- a/website/docs/docs/cloud/manage-access/set-up-snowflake-oauth.md +++ b/website/docs/docs/cloud/manage-access/set-up-snowflake-oauth.md @@ -43,7 +43,7 @@ CREATE OR REPLACE SECURITY INTEGRATION DBT_CLOUD ENABLED = TRUE OAUTH_CLIENT = CUSTOM OAUTH_CLIENT_TYPE = 'CONFIDENTIAL' - OAUTH_REDIRECT_URI = LOCATED_REDIRECT_URI + OAUTH_REDIRECT_URI = 'LOCATED_REDIRECT_URI' OAUTH_ISSUE_REFRESH_TOKENS = TRUE OAUTH_REFRESH_TOKEN_VALIDITY = 7776000; ``` diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md index 8bdf47eae5a..3aec1956297 100644 --- a/website/docs/docs/cloud/migration.md +++ b/website/docs/docs/cloud/migration.md @@ -7,34 +7,45 @@ pagination_next: null pagination_prev: null --- -dbt Labs is in the process of migrating dbt Cloud to a new _cell-based architecture_. This architecture will be the foundation of dbt Cloud for years to come, and will bring improved scalability, reliability, and security to all customers and users of dbt Cloud. +dbt Labs is in the process of rolling out a new cell-based architecture for dbt Cloud. This architecture provides the foundation of dbt Cloud for years to come, and brings improved reliability, performance, and consistency to users of dbt Cloud. -There is some preparation required to ensure a successful migration. +We're scheduling migrations by account. When we're ready to migrate your account, you will receive a banner or email communication with your migration date. If you have not received this communication, then you don't need to take action at this time. dbt Labs will share information about your migration with you, with appropriate advance notice, when applicable to your account. -Migrations are being scheduled on a per-account basis. 
_If you haven't received any communication (either with a banner or by email) about a migration date, you don't need to take any action at this time._ dbt Labs will share migration date information with you, with appropriate advance notice, before we complete any migration steps in the dbt Cloud backend. +Your account will be automatically migrated on its scheduled date. However, if you use certain features, you must take action before that date to avoid service disruptions. -This document outlines the steps that you must take to prevent service disruptions before your environment is migrated over to the cell-based architecture. This will impact areas such as login, IP restrictions, and API access. +## Recommended actions -## Pre-migration checklist +We highly recommend you take these actions: -Prior to your migration date, your dbt Cloud account admin will need to make some changes to your account. Most of your configurations will be migrated automatically, but a few will require manual intervention. +- Ensure pending user invitations are accepted or note outstanding invitations. Pending user invitations will be voided during the migration and must be resent after it is complete. +- Commit unsaved changes in the [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud). Unsaved changes will be lost during migration. +- Export and download [audit logs](/docs/cloud/manage-access/audit-log) older than 90 days, as they will be lost during migration. If you lose critical logs older than 90 days during the migration, you will have to work with the dbt Labs Customer Support team to recover them. -If your account is scheduled for migration, you will see a banner indicating your migration date when you log in. If you don't see a banner, you don't need to take any action. +## Required actions -1. **IP addresses** — dbt Cloud will be using new IPs to access your warehouse after the migration.
Make sure to allow inbound traffic from these IPs in your firewall and include it in any database grants. All six of the IPs below should be added to allowlists. - * Old IPs: `52.45.144.63`, `54.81.134.249`, `52.22.161.231` - * New IPs: `52.3.77.232`, `3.214.191.130`, `34.233.79.135` -2. **User invitations** — Any pending user invitations will be invalidated during the migration. You can resend the invitations after the migration is complete. -3. **SSO integrations** — If you've completed the Auth0 migration, your account SSO configurations will be automatically transferred. If you haven't completed the Auth0 migration, dbt Labs recommends doing that before starting the mult-cell migration to avoid service disruptions. -4. **IDE sessions** — Any unsaved changes in the IDE might be lost during migration. dbt Labs _strongly_ recommends committing all changes in the IDE before your scheduled migration time. +These actions are required to prevent users from losing access to dbt Cloud: -## Post-migration +- If you still need to, complete the [Auth0 migration for SSO](/docs/cloud/manage-access/auth0-migration) before your scheduled migration date to avoid service disruptions. If you've completed the Auth0 migration, your account SSO configurations will be transferred automatically. +- Update your IP allowlists. dbt Cloud will be using new IPs to access your warehouse post-migration. Allow inbound traffic from all of the following new IPs in your firewall and include them in any database grants: -After migration, if you completed all the [Pre-migration checklist](#pre-migration-checklist) items, your dbt Cloud resources and jobs will continue to work as they did before. + - `52.3.77.232` + - `3.214.191.130` + - `34.233.79.135` -You have the option to log in to dbt Cloud at a different URL: * If you were previously logging in at `cloud.getdbt.com`, you should instead plan to login at `us1.dbt.com`.
The original URL will still work, but you’ll have to click through to be redirected upon login. * You may also log in directly with your account’s unique [access URL](/docs/cloud/about-cloud/access-regions-ip-addresses#accessing-your-account). :::info Login with GitHub Users who previously used the "Login with GitHub" functionality will no longer be able to use this method to login to dbt Cloud after migration. To continue accessing your account, you can use your existing email and password. ## Post-migration + +Complete all of these items to ensure your dbt Cloud resources and jobs will continue working without interruption. + +Use one of these two URL login options: + +- `us1.dbt.com`: If you were previously logging in at `cloud.getdbt.com`, you should instead plan to log in at `us1.dbt.com`. The original URL will still work, but you’ll have to click through to be redirected upon login. +- `ACCOUNT_PREFIX.us1.dbt.com`: A unique URL specifically for your account. If you belong to multiple accounts, each will have a unique URL available as long as they have been migrated to multi-cell. + +Check out [access, regions, and IP addresses](/docs/cloud/about-cloud/access-regions-ip-addresses) for more information. + +Remove the following old IP addresses from your firewall and database grants: + +- `52.45.144.63` +- `54.81.134.249` +- `52.22.161.231` diff --git a/website/docs/docs/cloud/secure/ip-restrictions.md b/website/docs/docs/cloud/secure/ip-restrictions.md index 034b3a6c144..d39960dab42 100644 --- a/website/docs/docs/cloud/secure/ip-restrictions.md +++ b/website/docs/docs/cloud/secure/ip-restrictions.md @@ -13,7 +13,7 @@ import SetUpPages from '/snippets/_available-tiers-iprestrictions.md'; IP Restrictions help control which IP addresses are allowed to connect to dbt Cloud.
IP restrictions allow dbt Cloud customers to meet security and compliance controls by only allowing approved IPs to connect to their dbt Cloud environment. This feature is supported in all regions across NA, Europe, and Asia-Pacific, but contact us if you have questions about availability. -## Configuring IP Restrictions +## Configuring IP restrictions To configure IP restrictions, go to **Account Settings** → **IP Restrictions**. IP restrictions provide two methods for determining which IPs can access dbt Cloud: an allowlist and a denylist. IPs in the allowlist are allowed to access dbt Cloud, and IPs in the denylist will be blocked from accessing dbt Cloud. IP restrictions can be used for a range of use cases, including: @@ -29,7 +29,7 @@ For any version control system integrations (GitHub, GitLab, ADO, etc.) inbound To add an IP to the allowlist, from the **IP Restrictions** page: -1. Click **edit** +1. Click **Edit** 2. Click **Add Rule** 3. Add a name and description for the rule - For example, Corporate VPN CIDR Range @@ -39,7 +39,9 @@ To add an IP to the allowlist, from the **IP Restrictions** page: - You can add multiple ranges in the same rule. 6. Click **Save** -Note that simply adding the IP Ranges will not enforce IP restrictions. For more information, see the section “Enabling Restrictions.” +Add multiple IP ranges by clicking the **Add IP range** button to create a new text field. + +Note that simply adding the IP ranges will not enforce IP restrictions. For more information, see the [Enabling restrictions](#enabling-restrictions) section. If you only want to allow the IP ranges added to this list and deny all other requests, adding a denylist is not necessary. By default, if only an allowlist is added, dbt Cloud will only allow IPs in the allowable range and deny all other IPs. However, you can add a denylist if you want to deny specific IP addresses within your allowlist CIDR range.
@@ -65,9 +67,9 @@ It is possible to put an IP range on one list and then a sub-range or IP address ::: -## Enabling Restrictions +## Enabling restrictions -Once you are done adding all your ranges, IP restrictions can be enabled by selecting the **Enable IP restrictions** button and clicking **Save**. If your IP address is in any of the denylist ranges, you won’t be able to save or enable IP restrictions - this is done to prevent accidental account lockouts. If you do get locked out due to IP changes on your end, please reach out to support@dbtlabs.com +Once you are done adding all your ranges, IP restrictions can be enabled by selecting the **Enable IP restrictions** button and clicking **Save**. If your IP address is in any of the denylist ranges, you won’t be able to save or enable IP restrictions - this is done to prevent accidental account lockouts. If you do get locked out due to IP changes on your end, please reach out to support@getdbt.com Once enabled, when someone attempts to access dbt Cloud from a restricted IP, they will encounter one of the following messages depending on whether they use email & password or SSO login. diff --git a/website/docs/docs/collaborate/data-tile.md b/website/docs/docs/collaborate/data-tile.md index f40f21ebe18..446922acb92 100644 --- a/website/docs/docs/collaborate/data-tile.md +++ b/website/docs/docs/collaborate/data-tile.md @@ -9,9 +9,11 @@ image: /img/docs/collaborate/dbt-explorer/data-tile-pass.jpg # Embed data health tile in dashboards With data health tiles, stakeholders will get an at-a-glance confirmation on whether the data they’re looking at is stale or degraded. This trust signal allows teams to immediately go back into Explorer to see more details and investigate issues. + :::info Available in beta Data health tile is currently available in open beta. ::: + The data health tile: - Distills trust signals for data consumers. 
@@ -19,6 +21,8 @@ The data health tile: - Provides richer information and makes it easier to debug. - Revamps the existing, [job-based tiles](#job-based-data-health). +Data health tiles rely on [exposures](/docs/build/exposures) to surface trust signals in your dashboards. When you configure exposures in your dbt project, you are explicitly defining how specific outputs—like dashboards or reports—depend on your data models. + ## Prerequisites @@ -34,43 +38,45 @@ First, be sure to enable [source freshness](/docs/deploy/source-freshness) in 1. Navigate to dbt Explorer by clicking on the **Explore** link in the navigation. 2. In the main **Overview** page, go to the left navigation. -3. Under the **Resources** tab, click on **Exposures** to view the exposures list. +3. Under the **Resources** tab, click on **Exposures** to view the [exposures](/docs/build/exposures) list. 4. Select a dashboard exposure and go to the **General** tab to view the data health information. -5. In this tab, you’ll see: - - Data health status: Data freshness passed, Data quality passed, Data may be stale, Data quality degraded - - Name of the exposure. +5. In this tab, you’ll see: + - Name of the exposure. + - Data health status: Data freshness passed, Data quality passed, Data may be stale, Data quality degraded. - Resource type (model, source, and so on). - Dashboard status: Failure, Pass, Stale. - You can also see the last check completed, the last check time, and the last check duration. -6. You can also click the **Open Dashboard** button on the upper right to immediately view this in your analytics tool. +6. You can click the **Open Dashboard** button on the upper right to immediately view this in your analytics tool. 
## Embed in your dashboard -Once you’ve navigated to the auto-exposure in dbt Explorer, you’ll need to set up your dashboard status tile and [service token](/docs/dbt-cloud-apis/service-tokens): +Once you’ve navigated to the auto-exposure in dbt Explorer, you’ll need to set up your data health tile and [service token](/docs/dbt-cloud-apis/service-tokens). You can embed the data health tile in any analytics tool that supports URL or iFrame embedding. + +Follow these steps to set up your data health tile: 1. Go to **Account settings** in dbt Cloud. 2. Select **API tokens** in the left sidebar and then **Service tokens**. 3. Click on **Create service token** and give it a name. -4. Select the [**Metadata Only** permission](/docs/dbt-cloud-apis/service-tokens). This token will be used to embed the exposure tile in your dashboard in the later steps. +4. Select the [**Metadata Only**](/docs/dbt-cloud-apis/service-tokens) permission. This token will be used to embed the tile in your dashboard in the later steps. -5. Copy the **Metadata Only token** and save it in a secure location. You'll need it token in the next steps. +5. Copy the **Metadata Only** token and save it in a secure location. You'll need this token in the next steps. 6. Navigate back to dbt Explorer and select an exposure. 7. Below the **Data health** section, expand the toggle for instructions on how to embed the exposure tile (if you're an account admin with develop permissions). 8. In the expanded toggle, you'll see a text field where you can paste your **Metadata Only** token. -9. Once you’ve pasted your token, you can select either **URL** or **iFrame** depending on which you need to install into your dashboard. +9. Once you’ve pasted your token, you can select either **URL** or **iFrame** depending on which you need to add to your dashboard. If your analytics tool supports iFrames, you can embed the dashboard tile within it.
-### Embed data health tile in Tableau -To embed the data health tile in Tableau, follow these steps: +#### Tableau example +Here’s an example with Tableau, where you can embed the iFrame in a web page object: -1. Ensure you've copied the embed iFrame content in dbt Explorer. -2. For the revamped environment-based exposure tile you can insert these fields into the following iFrame, and then embed them with your dashboard. This is the iFrame that is available from the **Exposure details** page in dbt Explorer. +- Ensure you've copied the embed iFrame snippet from the dbt Explorer **Data health** section. +- **For the revamped environment-based exposure tile**: Insert the following fields into the iFrame below, and then embed it in your dashboard. This is the iFrame available from the **Exposure details** page in dbt Explorer. `