diff --git a/contributing/content-style-guide.md b/contributing/content-style-guide.md index 58f5ba2b21c..9189b403b73 100644 --- a/contributing/content-style-guide.md +++ b/contributing/content-style-guide.md @@ -557,14 +557,14 @@ The file or URL paths begin with: - /reference/ - /community/ -Let's use the Regions & IP Addresses URL as an example: https://docs.getdbt.com/docs/cloud/about-cloud/regions-ip-addresses +Let's use the Regions & IP Addresses URL as an example: https://docs.getdbt.com/docs/cloud/about-cloud/access-regions-ip-addresses If we need to reference this on another page, we can remove the domain entirely: -`For more information about server availability, please refer to our [Regions & IP Addresses page](/docs/cloud/about-cloud/regions-ip-addresses)` +`For more information about server availability, please refer to our [Regions & IP Addresses page](/docs/cloud/about-cloud/access-regions-ip-addresses)` The reader will see: -For more information about server availability, please refer to our [Regions & IP Addresses page](/docs/cloud/about-cloud/regions-ip-addresses) +For more information about server availability, please refer to our [Regions & IP Addresses page](/docs/cloud/about-cloud/access-regions-ip-addresses) You can link to a specific section of the doc with a `#` at the end of the path. Enter the section’s title after the `#`, with individual words separated by hyphens. 
Let's use the incremental models page, https://docs.getdbt.com/docs/build/incremental-models, as an example: diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 432ed97635b..27b30a1963e 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -42,7 +42,7 @@ If you require a partner dbt Cloud account to test on, we can upgrade an existin ## dbt Cloud hosting and authentication -To use the dbt Cloud APIs, you'll need access to the customer’s access urls. Depending on their dbt Cloud setup, they'll have a different access URL. To find out more, refer to [Regions & IP addresses](https://docs.getdbt.com/docs/cloud/about-cloud/regions-ip-addresses) to understand all the possible configurations. My recommendation is to allow the customer to provide their own URL to simplify support. +To use the dbt Cloud APIs, you'll need access to the customer’s access URLs. Depending on their dbt Cloud setup, they'll have a different access URL. To find out more, refer to [Regions & IP addresses](https://docs.getdbt.com/docs/cloud/about-cloud/access-regions-ip-addresses) to understand all the possible configurations. My recommendation is to allow the customer to provide their own URL to simplify support. If the customer is on an Azure single tenant instance, they don't currently have access to the Discovery API or the Semantic Layer APIs. diff --git a/website/docs/docs/build/custom-schemas.md b/website/docs/docs/build/custom-schemas.md index b20d4130725..24cd4194a1c 100644 --- a/website/docs/docs/build/custom-schemas.md +++ b/website/docs/docs/build/custom-schemas.md @@ -4,26 +4,29 @@ id: "custom-schemas" pagination_next: "docs/build/custom-databases" --- -By default, all dbt models are built in the schema specified in your target.
In dbt projects with lots of models, it may be useful to instead build some models in schemas other than your target schema – this can help logically group models together. +By default, all dbt models are built in the schema specified in your [environment](/docs/dbt-cloud-environments) (dbt Cloud) or [profile's target](/docs/core/dbt-core-environments) (dbt Core). This default schema is called your _target schema_. -For example, you may wish to: -* Group models based on the business unit using the model, creating schemas such as `core`, `marketing`, `finance` and `support`; or, +For dbt projects with lots of models, it's common to build models across multiple schemas and group similar models together. For example, you might want to: + +* Group models based on the business unit using the model, creating schemas such as `core`, `marketing`, `finance` and `support`. * Hide intermediate models in a `staging` schema, and only present models that should be queried by an end user in an `analytics` schema. -You can use **custom schemas** in dbt to build models in a schema other than your target schema. It's important to note that by default, dbt will generate the schema name for a model by **concatenating the custom schema to the target schema**, as in: `<target_schema>_<custom_schema>`. +To do this, specify a custom schema. dbt generates the schema name for a model by appending the custom schema to the target schema. For example, `<target_schema>_<custom_schema>`.
| Target schema | Custom schema | Resulting schema | | ------------- | ------------- | ---------------- | -| <target_schema> | None | <target_schema> | -| analytics | None | analytics | -| dbt_alice | None | dbt_alice | -| <target_schema> | <custom_schema> | <target_schema>\_<custom_schema> | -| analytics | marketing | analytics_marketing | -| dbt_alice | marketing | dbt_alice_marketing | +| analytics_prod | None | analytics_prod | +| alice_dev | None | alice_dev | +| dbt_cloud_pr_123_456 | None | dbt_cloud_pr_123_456 | +| analytics_prod | marketing | analytics_prod_marketing | +| alice_dev | marketing | alice_dev_marketing | +| dbt_cloud_pr_123_456 | marketing | dbt_cloud_pr_123_456_marketing | ## How do I use custom schemas? -Use the `schema` configuration key to specify a custom schema for a model. As with any configuration, you can either: -* apply this configuration to a specific model by using a config block within a model, or + +To specify a custom schema for a model, use the `schema` configuration key. As with any configuration, you can do one of the following: + +* apply this configuration to a specific model by using a config block within a model * apply it to a subdirectory of models by specifying it in your `dbt_project.yml` file @@ -36,12 +39,10 @@ select ... - - ```yaml -# models in `models/marketing/ will be rendered to the "*_marketing" schema +# models in `models/marketing/` will be built in the "*_marketing" schema models: my_project: marketing: @@ -52,17 +53,17 @@ models: ## Understanding custom schemas -When first using custom schemas, it's common to assume that a model will be built in a schema that matches the `schema` configuration exactly, for example, a model that has the configuration `schema: marketing`, would be built in the `marketing` schema. However, dbt instead creates it in a schema like `<target_schema>_marketing` by default – there's a good reason for this! 
+When first using custom schemas, it's common to mistakenly assume that a model uses _only_ the new `schema` configuration; for example, that a model with the configuration `schema: marketing` would be built in the `marketing` schema. However, dbt puts it in a schema like `<target_schema>_marketing`. -In a typical setup of dbt, each dbt user will use a separate target schema (see [Managing Environments](/docs/build/custom-schemas#managing-environments)). If dbt created models in a schema that matches a model's custom schema exactly, every dbt user would create models in the same schema. +There's a good reason for this deviation. Each dbt user has their own target schema for development (refer to [Managing Environments](#managing-environments)). If dbt ignored the target schema and only used the model's custom schema, every dbt user would create models in the same schema and would overwrite each other's work. -Further, the schema that your development models are built in would be the same schema that your production models are built in! Instead, concatenating the custom schema to the target schema helps create distinct schema names, reducing naming conflicts. +By combining the target schema and the custom schema, dbt ensures that objects it creates in your data warehouse don't collide with one another. If you prefer different logic, you can change the way dbt generates a schema name (see below). ### How does dbt generate a model's schema name? -dbt uses a default macro called `generate_schema_name` to determine the name of the schema that a model should be built in. 
The following code represents the default macro's logic: @@ -83,30 +84,23 @@ The following code represents the default macro's logic: {%- endmacro %} ``` -## Advanced custom schema configuration - -You can customize schema name generation in dbt depending on your needs, such as creating a custom macro named `generate_schema_name` in your project or using the built-in macro for environment-based schema names. The built-in macro follows a pattern of generating schema names based on the environment, making it a convenient alternative. - -If your dbt project has a macro that’s also named `generate_schema_name`, dbt will always use the macro in your dbt project instead of the default macro. - -### Changing the way dbt generates a schema name +## Changing the way dbt generates a schema name -To modify how dbt generates schema names, you should add a macro named `generate_schema_name` to your project and customize it according to your needs: +If your dbt project has a custom macro called `generate_schema_name`, dbt will use it instead of the default macro. This allows you to customize the name generation according to your needs. -- Copy and paste the `generate_schema_name` macro into a file named 'generate_schema_name'. +To customize this macro, copy the example code in the section [How does dbt generate a model's schema name](#how-does-dbt-generate-a-models-schema-name) into a file named `macros/generate_schema_name.sql` and make changes as necessary. -- Modify the target schema by either using [target variables](/reference/dbt-jinja-functions/target) or [env_var](/reference/dbt-jinja-functions/env_var). Check out our [Advanced Deployment - Custom Environment and job behavior](https://courses.getdbt.com/courses/advanced-deployment) course video for more details. - -**Note**: dbt will ignore any custom `generate_schema_name` macros included in installed packages. +Be careful. dbt will ignore any custom `generate_schema_name` macros included in installed packages.
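As an illustrative sketch (not part of the official example), a customized `macros/generate_schema_name.sql` might read an optional prefix from an environment variable via [`env_var`](/reference/dbt-jinja-functions/env_var) while keeping `{{ default_schema }}` in place; `DBT_SCHEMA_PREFIX` is a hypothetical variable name, not a dbt built-in:

```sql
-- macros/generate_schema_name.sql
-- Sketch only: prepends an optional prefix taken from the hypothetical
-- DBT_SCHEMA_PREFIX environment variable (empty string if unset).
-- default_schema is kept, so developers' target schemas stay distinct.

{% macro generate_schema_name(custom_schema_name, node) -%}

    {%- set default_schema = target.schema -%}
    {%- set prefix = env_var('DBT_SCHEMA_PREFIX', '') -%}

    {%- if custom_schema_name is none -%}

        {{ prefix }}{{ default_schema }}

    {%- else -%}

        {{ prefix }}{{ default_schema }}_{{ custom_schema_name | trim }}

    {%- endif -%}

{%- endmacro %}
```

With `DBT_SCHEMA_PREFIX=qa_` and a target schema of `alice_dev`, a model configured with `schema: marketing` would build in a schema like `qa_alice_dev_marketing`.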
❗️ Warning: Don't replace default_schema in the macro. -If you're modifying how dbt generates schema names, don't just replace ```{{ default_schema }}_{{ custom_schema_name | trim }}``` with ```{{ custom_schema_name | trim }}``` in the ```generate_schema_name``` macro. +If you're modifying how dbt generates schema names, don't just replace ```{{ default_schema }}_{{ custom_schema_name | trim }}``` with ```{{ custom_schema_name | trim }}``` in the ```generate_schema_name``` macro. If you remove ```{{ default_schema }}```, it causes developers to override each other's models if they create their own custom schemas. This can also cause issues during development and continuous integration (CI). -❌ The following code block is an example of what your code _should not_ look like: +❌ The following code block is an example of what your code _should not_ look like: + ```sql {% macro generate_schema_name(custom_schema_name, node) -%} @@ -123,39 +117,9 @@ If you remove ```{{ default_schema }}```, it causes developers to override each {%- endmacro %} -``` -
- -### An alternative pattern for generating schema names - -A common way to generate schema names is by adjusting the behavior according to the environment in dbt. Here's how it works: - -**Production environment** - -- If a custom schema is specified, the schema name of a model should match the custom schema, instead of concatenating to the target schema. -- If no custom schema is specified, the schema name of a model should match the target schema. - -**Other environments** (like development or quality assurance (QA)): - -- Build _all_ models in the target schema, ignoring any custom schema configurations. - -dbt ships with a global, predefined macro that contains this logic - `generate_schema_name_for_env`. - -If you want to use this pattern, you'll need a `generate_schema_name` macro in your project that points to this logic. You can do this by creating a file in your `macros` directory (typically named `get_custom_schema.sql`), and copying/pasting the following code: - - - -```sql --- put this in macros/get_custom_schema.sql - -{% macro generate_schema_name(custom_schema_name, node) -%} - {{ generate_schema_name_for_env(custom_schema_name, node) }} -{%- endmacro %} ``` - - -**Note:** When using this macro, you'll need to set the target name in your job specifically to "prod" if you want custom schemas to be applied. + ### generate_schema_name arguments @@ -165,6 +129,7 @@ If you want to use this pattern, you'll need a `generate_schema_name` macro in y | node | The `node` that is currently being processed by dbt | `{"name": "my_model", "resource_type": "model",...}` | ### Jinja context available in generate_schema_name + If you choose to write custom logic to generate a schema name, it's worth noting that not all variables and methods are available to you when defining this logic. In other words: the `generate_schema_name` macro is compiled with a limited Jinja context. 
The following context methods _are_ available in the `generate_schema_name` macro: @@ -192,13 +157,52 @@ See docs on macro `dispatch`: ["Managing different global overrides across packa +## A built-in alternative pattern for generating schema names + +A common customization is to ignore the target schema in production environments, and ignore the custom schema configurations in other environments (such as development and CI). + +Production environment (`target.name == 'prod'`) + +| Target schema | Custom schema | Resulting schema | +| ------------- | ------------- | ---------------- | +| analytics_prod | None | analytics_prod | +| analytics_prod | marketing | marketing | + +Development/CI environment (`target.name != 'prod'`) + +| Target schema | Custom schema | Resulting schema | +| ------------- | ------------- | ---------------- | +| alice_dev | None | alice_dev | +| alice_dev | marketing | alice_dev | +| dbt_cloud_pr_123_456 | None | dbt_cloud_pr_123_456 | +| dbt_cloud_pr_123_456 | marketing | dbt_cloud_pr_123_456 | + +Similar to the default macro, this approach guarantees that schemas from different environments will not collide. + +dbt ships with a macro for this use case, called `generate_schema_name_for_env`, which isn't used by default. To enable it, add a custom `generate_schema_name` macro to your project that contains the following code: + + + +```sql +-- put this in macros/get_custom_schema.sql + +{% macro generate_schema_name(custom_schema_name, node) -%} + {{ generate_schema_name_for_env(custom_schema_name, node) }} +{%- endmacro %} +``` + + + +When using this macro, you'll need to set the target name in your production job to `prod`. + ## Managing environments -In the `generate_schema_name` macro examples shown above, the `target.name` context variable is used to change the schema name that dbt generates for models. 
If the `generate_schema_name` macro in your project uses the `target.name` context variable, you must additionally ensure that your different dbt environments are configured appropriately. While you can use any naming scheme you'd like, we typically recommend: - - **dev**: Your local development environment; configured in a `profiles.yml` file on your computer. -* **ci**: A [continuous integration](/docs/cloud/git/connect-github) environment running on Pull Requests in GitHub, GitLab, etc. - - **prod**: The production deployment of your dbt project, like in dbt Cloud, Airflow, or [similar](/docs/deploy/deployments). +In the `generate_schema_name` macro examples shown in the [built-in alternative pattern](#a-built-in-alternative-pattern-for-generating-schema-names) section, the `target.name` context variable is used to change the schema name that dbt generates for models. If the `generate_schema_name` macro in your project uses the `target.name` context variable, you must ensure that your different dbt environments are configured accordingly. While you can use any naming scheme you'd like, we typically recommend: +
* **dev** — Your local development environment; configured in a `profiles.yml` file on your computer. +* **ci** — A [continuous integration](/docs/cloud/git/connect-github) environment running on pull requests in GitHub, GitLab, and so on. +* **prod** — The production deployment of your dbt project, like in dbt Cloud, Airflow, or [similar](/docs/deploy/deployments). -If your schema names are being generated incorrectly, double check your target name in the relevant environment. +If your schema names are being generated incorrectly, double-check your target name in the relevant environment. For more information, consult the [managing environments in dbt Core](/docs/core/dbt-core-environments) guide. 
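To make the `target.name` recommendation above concrete, here's a hedged sketch of a dbt Core `profiles.yml` with `dev` and `prod` targets; the profile name, connection type, and credential values are placeholders, not prescribed settings:

```yaml
# Illustrative sketch only. target.name resolves to the key of the active
# output: "dev" by default here, or "prod" when you run
# `dbt run --target prod` (for example, in a production job).
my_project:
  target: dev
  outputs:
    dev:
      type: snowflake
      account: example_account
      user: alice
      password: "{{ env_var('SNOWFLAKE_PASSWORD') }}"
      database: analytics
      warehouse: transforming
      schema: alice_dev        # personal target schema for development
      threads: 4
    prod:
      type: snowflake
      account: example_account
      user: dbt_runner
      password: "{{ env_var('SNOWFLAKE_PASSWORD') }}"
      database: analytics
      warehouse: transforming
      schema: analytics_prod   # production target schema
      threads: 8
```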
diff --git a/website/docs/docs/build/saved-queries.md b/website/docs/docs/build/saved-queries.md index 2ad16b86f0d..3f61de05cac 100644 --- a/website/docs/docs/build/saved-queries.md +++ b/website/docs/docs/build/saved-queries.md @@ -33,6 +33,4 @@ saved_queries: - "{{ Dimension('listing__capacity_latest') }} > 3" ``` -### FAQs - -* All metrics in a saved query need to use the same dimensions in the `group_by` or `where` clauses. +All metrics in a saved query need to use the same dimensions in the `group_by` or `where` clauses. diff --git a/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md b/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md index 119201b389d..72ab367d6de 100644 --- a/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md +++ b/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md @@ -1,6 +1,7 @@ --- -title: "Regions & IP addresses" -id: "regions-ip-addresses" +title: "Access, Regions, & IP addresses" +sidebar_label: "Access, Regions, & IP Addresses" +id: "access-regions-ip-addresses" description: "Available regions and ip addresses" --- @@ -20,7 +21,19 @@ dbt Cloud is [hosted](/docs/cloud/about-cloud/architecture) in multiple regions [^1]: These regions support [multi-tenant](/docs/cloud/about-cloud/tenancy) deployment environments hosted by dbt Labs. -### Locating your dbt Cloud IP addresses +## Accessing your account + +To log into dbt Cloud, use the URL that applies to your environment. The access URL you use will depend on a few factors, including location and tenancy: +- **US multi-tenant:** Use your unique URL that starts with your account prefix, followed by `us1.dbt.com`. For example, `abc123.us1.dbt.com`. You can also use `cloud.getdbt.com`, but this URL will be removed in the future. + - If you are unsure of your access URL, navigate to `us1.dbt.com` and enter your dbt Cloud credentials. If you are a member of a single account, you will be logged in, and your URL will be displayed in the browser. 
If you are a member of multiple accounts, you will be presented with a list of options, along with the appropriate login URLs for each. + + + +- **EMEA multi-tenant:** Use `emea.dbt.com`. +- **APAC multi-tenant:** Use `au.dbt.com`. +- **Worldwide single-tenant and VPC:** Use the vanity URL provided during your onboarding. + +## Locating your dbt Cloud IP addresses There are two ways to view your dbt Cloud IP addresses: - If no projects exist in the account, create a new project, and the IP addresses will be displayed during the **Configure your environment** steps. diff --git a/website/docs/docs/cloud/billing.md b/website/docs/docs/cloud/billing.md index b677f06ccfe..44ef0af90fe 100644 --- a/website/docs/docs/cloud/billing.md +++ b/website/docs/docs/cloud/billing.md @@ -7,7 +7,7 @@ pagination_next: null pagination_prev: null --- -dbt Cloud offers a variety of [plans and pricing](https://www.getdbt.com/pricing/) to fit your organization’s needs. With flexible billing options that appeal to large enterprises and small businesses and [server availability](/docs/cloud/about-cloud/regions-ip-addresses) worldwide, dbt Cloud is the fastest and easiest way to begin transforming your data. +dbt Cloud offers a variety of [plans and pricing](https://www.getdbt.com/pricing/) to fit your organization’s needs. With flexible billing options that appeal to large enterprises and small businesses and [server availability](/docs/cloud/about-cloud/access-regions-ip-addresses) worldwide, dbt Cloud is the fastest and easiest way to begin transforming your data. ## How does dbt Cloud pricing work? 
diff --git a/website/docs/docs/cloud/cloud-cli-installation.md b/website/docs/docs/cloud/cloud-cli-installation.md index 8112c8f3bd1..e66642067b6 100644 --- a/website/docs/docs/cloud/cloud-cli-installation.md +++ b/website/docs/docs/cloud/cloud-cli-installation.md @@ -23,7 +23,7 @@ dbt commands are run against dbt Cloud's infrastructure and benefit from: ## Prerequisites -The dbt Cloud CLI is available in all [deployment regions](/docs/cloud/about-cloud/regions-ip-addresses) and for both multi-tenant and single-tenant accounts (Azure single-tenant not supported at this time). +The dbt Cloud CLI is available in all [deployment regions](/docs/cloud/about-cloud/access-regions-ip-addresses) and for both multi-tenant and single-tenant accounts (Azure single-tenant not supported at this time). - Ensure you are using dbt version 1.5 or higher. Refer to [dbt Cloud versions](/docs/dbt-versions/upgrade-core-in-cloud) to upgrade. - Note that SSH tunneling for [Postgres and Redshift](/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb) connections doesn't support the dbt Cloud CLI yet. diff --git a/website/docs/docs/cloud/connect-data-platform/about-connections.md b/website/docs/docs/cloud/connect-data-platform/about-connections.md index 93bbf83584f..d388bae4549 100644 --- a/website/docs/docs/cloud/connect-data-platform/about-connections.md +++ b/website/docs/docs/cloud/connect-data-platform/about-connections.md @@ -28,7 +28,7 @@ These connection instructions provide the basic fields required for configuring ## IP Restrictions -dbt Cloud will always connect to your data platform from the IP addresses specified in the [Regions & IP addresses](/docs/cloud/about-cloud/regions-ip-addresses) page. +dbt Cloud will always connect to your data platform from the IP addresses specified in the [Regions & IP addresses](/docs/cloud/about-cloud/access-regions-ip-addresses) page. 
Be sure to allow traffic from these IPs in your firewall, and include them in any database grants. diff --git a/website/docs/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb.md b/website/docs/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb.md index 06b9dd62f1a..03303ea8d52 100644 --- a/website/docs/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb.md +++ b/website/docs/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb.md @@ -46,7 +46,7 @@ Make sure the location of the instance is the same Virtual Private Cloud (VPC) a To configure the SSH tunnel in dbt Cloud, you'll need to provide the hostname/IP of your bastion server, username, and port, of your choosing, that dbt Cloud will connect to. Review the following steps: -- Verify the bastion server has its network security rules set up to accept connections from the [dbt Cloud IP addresses](/docs/cloud/about-cloud/regions-ip-addresses) on whatever port you configured. +- Verify the bastion server has its network security rules set up to accept connections from the [dbt Cloud IP addresses](/docs/cloud/about-cloud/access-regions-ip-addresses) on whatever port you configured. - Set up the user account by using the bastion servers instance's CLI, The following example uses the username `dbtcloud:` ```shell diff --git a/website/docs/docs/cloud/dbt-cloud-ide/develop-in-the-cloud.md b/website/docs/docs/cloud/dbt-cloud-ide/develop-in-the-cloud.md index 0d87a790042..5febb3fe766 100644 --- a/website/docs/docs/cloud/dbt-cloud-ide/develop-in-the-cloud.md +++ b/website/docs/docs/cloud/dbt-cloud-ide/develop-in-the-cloud.md @@ -102,7 +102,7 @@ The IDE uses developer credentials to connect to your data platform. These devel Set up your developer credentials: -1. 
Navigate to your **Credentials** under **Your Profile** settings, which you can access at `https://YOUR_ACCESS_URL/settings/profile#credentials`, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan. +1. Navigate to your **Credentials** under **Your Profile** settings, which you can access at `https://YOUR_ACCESS_URL/settings/profile#credentials`, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan. 2. Select the relevant project in the list. 3. Click **Edit** on the bottom right of the page. 4. Enter the details under **Development Credentials**. diff --git a/website/docs/docs/cloud/git/connect-gitlab.md b/website/docs/docs/cloud/git/connect-gitlab.md index e55552e2d86..f68f09ae73d 100644 --- a/website/docs/docs/cloud/git/connect-gitlab.md +++ b/website/docs/docs/cloud/git/connect-gitlab.md @@ -63,7 +63,7 @@ In GitLab, when creating your Group Application, input the following: | **Confidential** | ✔️ | | **Scopes** | ✔️ api | -Replace `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan. +Replace `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan. The application form in GitLab should look as follows when completed: diff --git a/website/docs/docs/cloud/git/setup-azure.md b/website/docs/docs/cloud/git/setup-azure.md index 843371be6ea..ab75ee40ada 100644 --- a/website/docs/docs/cloud/git/setup-azure.md +++ b/website/docs/docs/cloud/git/setup-azure.md @@ -31,7 +31,7 @@ Once the Azure AD app is added to dbt Cloud and the service user is connected, t 4. Provide a name for your app. We recommend using, "dbt Labs Azure DevOps App". 5. Select **Accounts in any organizational directory (Any Azure AD directory - Multitenant)** as the Supported Account Types. 
Many customers ask why they need to select Multitenant instead of Single tenant, and they frequently get this step wrong. Microsoft considers Azure DevOps (formerly called Visual Studio) and Azure Active Directory as separate tenants, and in order for this Active Directory application to work properly, you must select Multitenant. -6. Add a redirect URI by selecting **Web** and, in the field, entering `https://YOUR_ACCESS_URL/complete/azure_active_directory`, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan. +6. Add a redirect URI by selecting **Web** and, in the field, entering `https://YOUR_ACCESS_URL/complete/azure_active_directory`, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan. 7. Click **Register**. @@ -59,7 +59,7 @@ You also need to add another redirect URI to your Azure AD application. This red 1. Navigate to your Azure AD application. 2. Select the link next to **Redirect URIs** -3. Click **Add URI** and add the URI, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan: +3. Click **Add URI** and add the URI, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan: `https://YOUR_ACCESS_URL/complete/azure_active_directory_service_user` 4. Click **Save**. diff --git a/website/docs/docs/cloud/manage-access/invite-users.md b/website/docs/docs/cloud/manage-access/invite-users.md index 242bc977dc8..c82e15fd48f 100644 --- a/website/docs/docs/cloud/manage-access/invite-users.md +++ b/website/docs/docs/cloud/manage-access/invite-users.md @@ -35,7 +35,7 @@ You must have proper permissions to invite new users: ## User experience -dbt Cloud generates and sends emails from `support@getdbt.com` to the specified addresses. 
Make sure traffic from the `support@getdbt.com` email is allowed in your settings to avoid emails from going to spam or being blocked. This is the originating email address for all [instances worldwide](/docs/cloud/about-cloud/regions-ip-addresses). +dbt Cloud generates and sends emails from `support@getdbt.com` to the specified addresses. Make sure traffic from the `support@getdbt.com` email is allowed in your settings to prevent emails from going to spam or being blocked. This is the originating email address for all [instances worldwide](/docs/cloud/about-cloud/access-regions-ip-addresses). The email contains a link to create an account. When the user clicks on this they will be brought to one of two screens depending on whether SSO is configured or not. diff --git a/website/docs/docs/cloud/manage-access/set-up-bigquery-oauth.md b/website/docs/docs/cloud/manage-access/set-up-bigquery-oauth.md index f717bf3a5b1..5688802cdfd 100644 --- a/website/docs/docs/cloud/manage-access/set-up-bigquery-oauth.md +++ b/website/docs/docs/cloud/manage-access/set-up-bigquery-oauth.md @@ -34,7 +34,7 @@ On the **Credentials** page, you can see your existing keys, client IDs, and ser Set up an [OAuth consent screen](https://support.google.com/cloud/answer/6158849) if you haven't already. Then, click **+ Create Credentials** at the top of the page and select **OAuth client ID**. 
-Fill in the application, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan: +Fill in the application, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan: | Config | Value | | ------ | ----- | diff --git a/website/docs/docs/cloud/manage-access/set-up-databricks-oauth.md b/website/docs/docs/cloud/manage-access/set-up-databricks-oauth.md index 679133b7844..e790c234696 100644 --- a/website/docs/docs/cloud/manage-access/set-up-databricks-oauth.md +++ b/website/docs/docs/cloud/manage-access/set-up-databricks-oauth.md @@ -45,7 +45,7 @@ These parameters and descriptions will help you authenticate with your username | **USERNAME** | Your Databricks username (account admin level) | | **PASSWORD** | Your Databricks password (account admin level) | | **ACCOUNT_ID** | Your Databricks [account ID](https://docs.databricks.com/en/administration-guide/account-settings/index.html#locate-your-account-id) | -| **YOUR_ACCESS_URL** | The [appropriate Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your dbt Cloud account region and plan | +| **YOUR_ACCESS_URL** | The [appropriate Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your dbt Cloud account region and plan | | **NAME** | The integration name (i.e 'databricks-dbt-cloud') After running the `curl`, you'll get an API response that includes the `client_id` and `client_secret` required in the following section. At this time, this is the only way to retrieve the secret. If you lose the secret, then the integration needs to be [deleted](https://docs.databricks.com/api/account/customappintegration/delete) and re-created. 
diff --git a/website/docs/docs/cloud/manage-access/set-up-snowflake-oauth.md b/website/docs/docs/cloud/manage-access/set-up-snowflake-oauth.md index 5b9abb6058a..444374cc47e 100644 --- a/website/docs/docs/cloud/manage-access/set-up-snowflake-oauth.md +++ b/website/docs/docs/cloud/manage-access/set-up-snowflake-oauth.md @@ -17,7 +17,7 @@ To enable Snowflake OAuth, you will need to create a [security integration](http ### Create a security integration -In Snowflake, execute a query to create a security integration. Please find the complete documentation on creating a security integration for custom clients [here](https://docs.snowflake.net/manuals/sql-reference/sql/create-security-integration.html#syntax). In the following example `create or replace security integration` query, replace `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan. +In Snowflake, execute a query to create a security integration. Please find the complete documentation on creating a security integration for custom clients [here](https://docs.snowflake.net/manuals/sql-reference/sql/create-security-integration.html#syntax). In the following example `create or replace security integration` query, replace `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan. ``` CREATE OR REPLACE SECURITY INTEGRATION DBT_CLOUD @@ -42,7 +42,7 @@ CREATE OR REPLACE SECURITY INTEGRATION DBT_CLOUD | ENABLED | Required | | OAUTH_CLIENT | Required | | OAUTH_CLIENT_TYPE | Required | -| OAUTH_REDIRECT_URI | Required. Use the access URL that corresponds to your server [region](/docs/cloud/about-cloud/regions-ip-addresses). If dbt Cloud is deployed on-premises, use the domain name of your application instead of the access URL. | +| OAUTH_REDIRECT_URI | Required. Use the access URL that corresponds to your server [region](/docs/cloud/about-cloud/access-regions-ip-addresses). 
If dbt Cloud is deployed on-premises, use the domain name of your application instead of the access URL. | | OAUTH_ISSUE_REFRESH_TOKENS | Required | | OAUTH_REFRESH_TOKEN_VALIDITY | Required. This configuration dictates the number of seconds that a refresh token is valid for. Use a smaller value to force users to re-authenticate with Snowflake more frequently. | @@ -103,7 +103,7 @@ This error might be because of a configuration issue in the Snowflake OAuth flow * In the Snowflake OAuth flow, `role` in the profile config is not optional, as it does not inherit from the project connection config. So each user must supply their role, regardless of whether it is provided in the project connection. #### Server error 500 -If you experience a 500 server error when redirected from Snowflake to dbt Cloud, double-check that you have allow listed [dbt Cloud's IP addresses](/docs/cloud/about-cloud/regions-ip-addresses) on a Snowflake account level. +If you experience a 500 server error when redirected from Snowflake to dbt Cloud, double-check that you have allow listed [dbt Cloud's IP addresses](/docs/cloud/about-cloud/access-regions-ip-addresses) on a Snowflake account level. Enterprise customers who have single-tenant deployments will have a different range of IP addresses (network CIDR ranges) to allow list. diff --git a/website/docs/docs/cloud/manage-access/set-up-sso-google-workspace.md b/website/docs/docs/cloud/manage-access/set-up-sso-google-workspace.md index 19779baf615..e4ff998015c 100644 --- a/website/docs/docs/cloud/manage-access/set-up-sso-google-workspace.md +++ b/website/docs/docs/cloud/manage-access/set-up-sso-google-workspace.md @@ -96,7 +96,7 @@ Settings. account using GSuite auth. Optionally, you may specify a CSV of domains which are _all_ authorized to access your dbt Cloud account (eg. `dbtlabs.com, fishtowndata.com`) - **Slug**: Enter your desired login slug. 
Users will be able to log into dbt - Cloud by navigating to `https://YOUR_ACCESS_URL/enterprise-login/LOGIN-SLUG`, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan. The `LOGIN-SLUG` must + Cloud by navigating to `https://YOUR_ACCESS_URL/enterprise-login/LOGIN-SLUG`, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan. The `LOGIN-SLUG` must be unique across all dbt Cloud accounts, so pick a slug that uniquely identifies your company. diff --git a/website/docs/docs/cloud/manage-access/set-up-sso-okta.md b/website/docs/docs/cloud/manage-access/set-up-sso-okta.md index 4079cc488c4..53986513ce2 100644 --- a/website/docs/docs/cloud/manage-access/set-up-sso-okta.md +++ b/website/docs/docs/cloud/manage-access/set-up-sso-okta.md @@ -61,7 +61,7 @@ Click **Next** to continue. ### Configure SAML Settings -The SAML Settings page configures how Okta and dbt Cloud communicate. You will want to use an [appropriate Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan. +The SAML Settings page configures how Okta and dbt Cloud communicate. You will want to use an [appropriate Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan. To complete this section, you will need a _login slug_. This slug controls the URL where users on your account can log into your application via Okta. Login @@ -172,7 +172,7 @@ configured in the steps above. | **Identity Provider SSO Url** | Paste the **Identity Provider Single Sign-On URL** shown in the Okta setup instructions | | **Identity Provider Issuer** | Paste the **Identity Provider Issuer** shown in the Okta setup instructions | | **X.509 Certificate** | Paste the **X.509 Certificate** shown in the Okta setup instructions;
**Note:** When the certificate expires, an Okta admin will have to generate a new one to be pasted into dbt Cloud for uninterrupted application access. | -| **Slug** | Enter your desired login slug. Users will be able to log into dbt Cloud by navigating to `https://YOUR_ACCESS_URL/enterprise-login/LOGIN-SLUG`, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan. Login slugs must be unique across all dbt Cloud accounts, so pick a slug that uniquely identifies your company. | +| **Slug** | Enter your desired login slug. Users will be able to log into dbt Cloud by navigating to `https://YOUR_ACCESS_URL/enterprise-login/LOGIN-SLUG`, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan. Login slugs must be unique across all dbt Cloud accounts, so pick a slug that uniquely identifies your company. | " }` * `VARIABLES` with a dictionary of your GraphQL query variables, such as a job ID or a filter. diff --git a/website/docs/docs/dbt-cloud-apis/migrating-to-v2.md b/website/docs/docs/dbt-cloud-apis/migrating-to-v2.md index 3e6ac2c3577..72616f4b19c 100644 --- a/website/docs/docs/dbt-cloud-apis/migrating-to-v2.md +++ b/website/docs/docs/dbt-cloud-apis/migrating-to-v2.md @@ -10,7 +10,7 @@ In an attempt to provide an improved dbt Cloud Administrative API experience, th ## Key differences -When using the [List runs](/dbt-cloud/api-v2-legacy#tag/Runs) endpoint, you can include triggered runs and sort by ID. You can use the following request in v2 to get a similar response as v4, replacing the `{accountId}` with your own and `{YOUR_ACCESS_URL}` with the appropriate [Access URL](https://docs.getdbt.com/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan: +When using the [List runs](/dbt-cloud/api-v2-legacy#tag/Runs) endpoint, you can include triggered runs and sort by ID. 
You can use the following request in v2 to get a similar response as v4, replacing the `{accountId}` with your own and `{YOUR_ACCESS_URL}` with the appropriate [Access URL](https://docs.getdbt.com/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan: ```shell GET https://{YOUR_ACCESS_URL}/api/v2/accounts/{accountId}/runs/?include_related=[%22trigger%22]&order_by=-id diff --git a/website/docs/docs/dbt-cloud-apis/sl-graphql.md b/website/docs/docs/dbt-cloud-apis/sl-graphql.md index e2a473b23e9..00ab623b6d2 100644 --- a/website/docs/docs/dbt-cloud-apis/sl-graphql.md +++ b/website/docs/docs/dbt-cloud-apis/sl-graphql.md @@ -24,7 +24,7 @@ GraphQL has several advantages, such as self-documenting, having a strong typing The dbt Semantic Layer GraphQL API allows you to explore and query metrics and dimensions. Due to its self-documenting nature, you can explore the calls conveniently through a schema explorer. -The schema explorer URLs vary depending on your [deployment region](/docs/cloud/about-cloud/regions-ip-addresses). Use the following table to find the right link for your region: +The schema explorer URLs vary depending on your [deployment region](/docs/cloud/about-cloud/access-regions-ip-addresses). Use the following table to find the right link for your region: | Deployment type | Schema explorer URL | | --------------- | ------------------- | diff --git a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md index 4727fcdbd21..991fc53b3dd 100644 --- a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md +++ b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md @@ -52,7 +52,7 @@ jdbc:arrow-flight-sql://semantic-layer.cloud.getdbt.com:443?&environmentId=20233 | JDBC parameter | Description | Example | | -------------- | ----------- | ------- | | `jdbc:arrow-flight-sql://` | The protocol for the JDBC driver. 
| `jdbc:arrow-flight-sql://` | -| `semantic-layer.cloud.getdbt.com` | The [access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your account's dbt Cloud region. You must always add the `semantic-layer` prefix before the access URL. | For dbt Cloud deployment hosted in North America, use `semantic-layer.cloud.getdbt.com` | +| `semantic-layer.cloud.getdbt.com` | The [access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your account's dbt Cloud region. You must always add the `semantic-layer` prefix before the access URL. | For dbt Cloud deployments hosted in North America, use `semantic-layer.cloud.getdbt.com` | | `environmentId` | The unique identifier for the dbt production environment. You can retrieve this from the dbt Cloud URL when you navigate to **Environments** under **Deploy**. | If your URL ends with `.../environments/222222`, your `environmentId` is `222222`

| | `SERVICE_TOKEN` | dbt Cloud [service token](/docs/dbt-cloud-apis/service-tokens) with “Semantic Layer Only” and "Metadata Only" permissions. Create a new service token on the **Account Settings** page. | `token=SERVICE_TOKEN` | diff --git a/website/docs/docs/dbt-versions/release-notes/72-Feb-2024/override-dbt-version.md b/website/docs/docs/dbt-versions/release-notes/72-Feb-2024/override-dbt-version.md new file mode 100644 index 00000000000..389665d8ba8 --- /dev/null +++ b/website/docs/docs/dbt-versions/release-notes/72-Feb-2024/override-dbt-version.md @@ -0,0 +1,15 @@ +--- +title: "New: Override dbt version with new User development settings" +description: "February 2024: Test new dbt features on your user account before safely upgrading the dbt version in your development environment." +sidebar_label: "New: Override dbt version" +sidebar_position: 10 +tags: [Feb-2024] +date: 2024-02-02 +--- + +You can now [override the dbt version](/docs/dbt-versions/upgrade-core-in-cloud#override-dbt-version) that's configured for the development environment within your project and use a different version — affecting only your user account. This lets you test new dbt features without impacting other people working on the same project. And when you're satisfied with the test results, you can safely upgrade the dbt version for your project(s). + +Use the **dbt version** dropdown to specify the version to override with. It's available on your project's credentials page in the **User development settings** section. 
For example: + + + diff --git a/website/docs/docs/dbt-versions/release-notes/76-Oct-2023/api-v2v3-limit.md b/website/docs/docs/dbt-versions/release-notes/76-Oct-2023/api-v2v3-limit.md index 9768886d5fb..fb27e8e1727 100644 --- a/website/docs/docs/dbt-versions/release-notes/76-Oct-2023/api-v2v3-limit.md +++ b/website/docs/docs/dbt-versions/release-notes/76-Oct-2023/api-v2v3-limit.md @@ -10,6 +10,6 @@ tags: [Oct-2023, API] Beginning December 1, 2023, the [Administrative API](/docs/dbt-cloud-apis/admin-cloud-api) v2 and v3 will expect you to limit all "list" or `GET` API methods to 100 results per API request. This limit enhances the efficiency and stability of our services. If you need to handle more than 100 results, then use the `limit` and `offset` query parameters to paginate those results; otherwise, you will receive an error. -This maximum limit applies to [multi-tenant instances](/docs/cloud/about-cloud/regions-ip-addresses) only, and _does not_ apply to single tenant instances. +This maximum limit applies to [multi-tenant instances](/docs/cloud/about-cloud/access-regions-ip-addresses) only, and _does not_ apply to single tenant instances. Refer to the [API v3 Pagination](https://docs.getdbt.com/dbt-cloud/api-v3#/) or [API v2 Pagination](https://docs.getdbt.com/dbt-cloud/api-v2#/) sections for more information on how to paginate your API responses. diff --git a/website/docs/docs/dbt-versions/release-notes/76-Oct-2023/sl-ga.md b/website/docs/docs/dbt-versions/release-notes/76-Oct-2023/sl-ga.md index 9d5b91fb191..06818042539 100644 --- a/website/docs/docs/dbt-versions/release-notes/76-Oct-2023/sl-ga.md +++ b/website/docs/docs/dbt-versions/release-notes/76-Oct-2023/sl-ga.md @@ -17,7 +17,7 @@ It aims to bring the best of modeling and semantics to downstream applications b - Brand new [integrations](/docs/use-dbt-semantic-layer/avail-sl-integrations) such as Tableau, Google Sheets, Hex, Mode, and Lightdash. 
- New [Semantic Layer APIs](/docs/dbt-cloud-apis/sl-api-overview) using GraphQL and JDBC to query metrics and build integrations. -- dbt Cloud [multi-tenant regional](/docs/cloud/about-cloud/regions-ip-addresses) support for North America, EMEA, and APAC. Single-tenant support coming soon. +- dbt Cloud [multi-tenant regional](/docs/cloud/about-cloud/access-regions-ip-addresses) support for North America, EMEA, and APAC. Single-tenant support coming soon. - Coming soon — Schedule exports (a way to build tables in your data platform) as part of your dbt Cloud job. Use the APIs to call an export, then access them in your preferred BI tool. diff --git a/website/docs/docs/dbt-versions/release-notes/78-Aug-2023/sl-revamp-beta.md b/website/docs/docs/dbt-versions/release-notes/78-Aug-2023/sl-revamp-beta.md index ac8e286c783..112fdfe4db0 100644 --- a/website/docs/docs/dbt-versions/release-notes/78-Aug-2023/sl-revamp-beta.md +++ b/website/docs/docs/dbt-versions/release-notes/78-Aug-2023/sl-revamp-beta.md @@ -44,7 +44,7 @@ By bringing these enhancements to the dbt Semantic Layer, we enable organization The dbt Semantic Layer is currently available as a public beta, which means: -- **Who** — To experience the new dbt Semantic Layer, you must be on a dbt Cloud [Team and Enterprise](https://www.getdbt.com/pricing/) multi-tenant dbt Cloud plan, [hosted](/docs/cloud/about-cloud/regions-ip-addresses) in North America and on dbt v1.6 and higher. Look out for announcements on removing the location requirement soon. +- **Who** — To experience the new dbt Semantic Layer, you must be on a dbt Cloud [Team and Enterprise](https://www.getdbt.com/pricing/) multi-tenant dbt Cloud plan, [hosted](/docs/cloud/about-cloud/access-regions-ip-addresses) in North America and on dbt v1.6 and higher. Look out for announcements on removing the location requirement soon. - Developer plans or dbt Core users can use MetricFlow to define and test metrics using the dbt MetricFlow CLI only. 
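For the dbt Core users mentioned above, a MetricFlow CLI query can be sketched as follows. The metric and dimension names are hypothetical, and the commented command assumes MetricFlow is installed alongside your dbt project:

```shell
# Build a MetricFlow CLI query (hypothetical metric/dimension names).
METRICS="order_total"
GROUP_BY="metric_time"
CMD="mf query --metrics ${METRICS} --group-by ${GROUP_BY}"

# Uncomment to run inside a dbt project with MetricFlow installed:
# ${CMD}
echo "${CMD}"
```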
diff --git a/website/docs/docs/dbt-versions/release-notes/80-June-2023/admin-api-rn.md b/website/docs/docs/dbt-versions/release-notes/80-June-2023/admin-api-rn.md index 2008331ebe6..b486c90b881 100644 --- a/website/docs/docs/dbt-versions/release-notes/80-June-2023/admin-api-rn.md +++ b/website/docs/docs/dbt-versions/release-notes/80-June-2023/admin-api-rn.md @@ -11,5 +11,5 @@ dbt Labs updated the docs for the [dbt Cloud Administrative API](/docs/dbt-cloud - Now using Spotlight for improved UI and UX. - All endpoints are now documented for v2 and v3. Added automation to the docs so they remain up to date. - Documented many of the request and response bodies. -- You can now test endpoints directly from within the API docs. And, you can choose which [regional server](/docs/cloud/about-cloud/regions-ip-addresses) to use (North America, APAC, or EMEA). +- You can now test endpoints directly from within the API docs. And you can choose which [regional server](/docs/cloud/about-cloud/access-regions-ip-addresses) to use (North America, APAC, or EMEA). - With the new UI, you can more easily generate code for any endpoint. diff --git a/website/docs/docs/dbt-versions/release-notes/82-April-2023/api-endpoint-restriction.md b/website/docs/docs/dbt-versions/release-notes/82-April-2023/api-endpoint-restriction.md index 8507fe3dbbb..04b669f75ba 100644 --- a/website/docs/docs/dbt-versions/release-notes/82-April-2023/api-endpoint-restriction.md +++ b/website/docs/docs/dbt-versions/release-notes/82-April-2023/api-endpoint-restriction.md @@ -16,7 +16,7 @@ We recommend that you change your API requests to https://YOUR_ACCESS_URL/api/ :::info Access URLs -dbt Cloud is hosted in multiple regions around the world, and each region has a different access URL. Users on Enterprise plans can choose to have their account hosted in any one of these regions. For a complete list of available dbt Cloud access URLs, refer to [Regions & IP addresses](/docs/cloud/about-cloud/regions-ip-addresses). 
+dbt Cloud is hosted in multiple regions around the world, and each region has a different access URL. Users on Enterprise plans can choose to have their account hosted in any one of these regions. For a complete list of available dbt Cloud access URLs, refer to [Regions & IP addresses](/docs/cloud/about-cloud/access-regions-ip-addresses). ::: diff --git a/website/docs/docs/dbt-versions/release-notes/83-Mar-2023/apiv2-limit.md b/website/docs/docs/dbt-versions/release-notes/83-Mar-2023/apiv2-limit.md index 85c4af48b54..12509bf77f7 100644 --- a/website/docs/docs/dbt-versions/release-notes/83-Mar-2023/apiv2-limit.md +++ b/website/docs/docs/dbt-versions/release-notes/83-Mar-2023/apiv2-limit.md @@ -9,6 +9,6 @@ tags: [Mar-2023, API] To make the API more scalable and reliable, we've implemented a maximum limit of `100` for all API requests to our `list` endpoints. If API requests exceed the maximum limit parameter of `100`, a user will receive an API error message. -This maximum limit applies to [multi-tenant instances](/docs/cloud/about-cloud/regions-ip-addresses) only, and _does not_ apply to single tenant instances. +This maximum limit applies to [multi-tenant instances](/docs/cloud/about-cloud/access-regions-ip-addresses) only, and _does not_ apply to single tenant instances. Refer to the [Pagination](https://docs.getdbt.com/dbt-cloud/api-v2-legacy#section/Pagination) section for more information on this change. 
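The `limit`/`offset` pagination described in these release notes can be sketched as a simple loop. The account ID and token are placeholders, and the fixed stop condition here stands in for a real client detecting an empty page:

```shell
ACCOUNT_ID=1234   # hypothetical account
LIMIT=100         # maximum allowed page size for list endpoints
OFFSET=0
while [ "${OFFSET}" -lt 300 ]; do   # a real client would stop on an empty page
  URL="https://cloud.getdbt.com/api/v2/accounts/${ACCOUNT_ID}/runs/?limit=${LIMIT}&offset=${OFFSET}"
  # curl -s -H "Authorization: Token ${API_TOKEN}" "${URL}"   # real call
  echo "${URL}"
  OFFSET=$((OFFSET + LIMIT))
done
```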
diff --git a/website/docs/docs/dbt-versions/release-notes/84-Feb-2023/feb-ide-updates.md b/website/docs/docs/dbt-versions/release-notes/84-Feb-2023/feb-ide-updates.md index 64fa2026d04..7020868197a 100644 --- a/website/docs/docs/dbt-versions/release-notes/84-Feb-2023/feb-ide-updates.md +++ b/website/docs/docs/dbt-versions/release-notes/84-Feb-2023/feb-ide-updates.md @@ -19,7 +19,7 @@ Learn more about the [February changes](https://getdbt.slack.com/archives/C03SAH - Rename files by double-clicking on files in the file tree and the editor tabs - Right-clicking on file tabs has new options and will now open at your cursor instead of in the middle of the tab - The git branch name above **Version Control** links to the repo for specific git providers - * Currently available for all [multi-tenant](/docs/cloud/about-cloud/regions-ip-addresses) instances using GitHub or GitLab providers + * Currently available for all [multi-tenant](/docs/cloud/about-cloud/access-regions-ip-addresses) instances using GitHub or GitLab providers ## Product refinements diff --git a/website/docs/docs/dbt-versions/release-notes/89-Sept-2022/liststeps-endpoint-deprecation.md b/website/docs/docs/dbt-versions/release-notes/89-Sept-2022/liststeps-endpoint-deprecation.md index 545847efd90..9cae773ea3e 100644 --- a/website/docs/docs/dbt-versions/release-notes/89-Sept-2022/liststeps-endpoint-deprecation.md +++ b/website/docs/docs/dbt-versions/release-notes/89-Sept-2022/liststeps-endpoint-deprecation.md @@ -10,6 +10,6 @@ On October 14th, 2022 dbt Labs is deprecating the [List Steps](https://docs.getd dbt Labs will continue to maintain the [Get Run](https://docs.getdbt.com/dbt-cloud/api-v2-legacy#tag/Runs/operation/getRunById) endpoint, which is a viable alternative depending on the use case. 
-You can fetch run steps for an individual run with a GET request to the following URL, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan: +You can fetch run steps for an individual run with a GET request to the following URL, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan: `https://YOUR_ACCESS_URL/api/v2/accounts/{accountId}/runs/{runId}/?include_related=["run_steps"]` diff --git a/website/docs/docs/dbt-versions/upgrade-core-in-cloud.md b/website/docs/docs/dbt-versions/upgrade-core-in-cloud.md index 052611f66e6..0fc4585be34 100644 --- a/website/docs/docs/dbt-versions/upgrade-core-in-cloud.md +++ b/website/docs/docs/dbt-versions/upgrade-core-in-cloud.md @@ -3,9 +3,9 @@ title: "Upgrade Core version in Cloud" id: "upgrade-core-in-cloud" --- -In dbt Cloud, both jobs and environments are configured to use a specific version of dbt Core. The version can be upgraded at any time. +In dbt Cloud, both [jobs](/docs/deploy/jobs) and [environments](/docs/dbt-cloud-environments) are configured to use a specific version of dbt Core. The version can be upgraded at any time. -### Environments +## Environments Navigate to the settings page of an environment, then click **edit**. Click the **dbt Version** dropdown bar and make your selection. From this list, you can select an available version of Core to associate with this environment. @@ -13,7 +13,28 @@ Navigate to the settings page of an environment, then click **edit**. Click the Be sure to save your changes before navigating away. -### Jobs +### Override dbt version + +Configure your project to use a different dbt Core version than what's configured in your [development environment](/docs/dbt-cloud-environments#types-of-environments). This _override_ only affects your user account, no one else's. 
Use this to safely test new dbt features before upgrading the dbt version for your projects. + +1. From the gear menu, select **Profile settings**. +1. Choose **Credentials** from the sidebar and select a project. This opens a side panel. +1. In the side panel, click **Edit** and scroll to the **User development settings** section. Choose a version from the **dbt version** dropdown and click **Save**. When saving, dbt Cloud automatically creates a `DBT_DEVELOP_CORE_VERSION` environment variable for this user-level override and lists it in the **Environment variables** section. + + An example of overriding the configured version with 1.7 for the selected project: + + + +1. (Optional) Verify that dbt Cloud will use your override setting to build the project. Invoke `dbt build` in the IDE's command bar. Expand the **System Logs** section and find the output's first line. It should begin with `Running with dbt=` and list the version dbt Cloud is using. + + Example output of a successful `dbt build` run: + + + +1. If you upgrade the version for your development environment, make sure to delete the `DBT_DEVELOP_CORE_VERSION` environment variable from the **Environment variables** section in your project's credentials. + + +## Jobs Each job in dbt Cloud can be configured to inherit parameters from the environment it belongs to. diff --git a/website/docs/docs/deploy/dashboard-status-tiles.md b/website/docs/docs/deploy/dashboard-status-tiles.md index d9e33fc32d6..f1373e36e75 100644 --- a/website/docs/docs/deploy/dashboard-status-tiles.md +++ b/website/docs/docs/deploy/dashboard-status-tiles.md @@ -36,7 +36,7 @@ You can insert these three fields into the following iFrame, and then embed it * :::tip Replace `YOUR_ACCESS_URL` with your region and plan's Access URL -dbt Cloud is hosted in multiple regions in the world and each region has a different access URL. 
Replace `YOUR_ACCESS_URL` with the appropriate [Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan. For example, if your account is hosted in the EMEA region, you would use the following iFrame code: +dbt Cloud is hosted in multiple regions in the world and each region has a different access URL. Replace `YOUR_ACCESS_URL` with the appropriate [Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan. For example, if your account is hosted in the EMEA region, you would use the following iFrame code: ``` @@ -71,7 +71,7 @@ https://metadata.YOUR_ACCESS_URL/exposure-tile?name=&jobId=&jobId=&token= @@ -91,7 +91,7 @@ https://metadata.YOUR_ACCESS_URL/exposure-tile?name=&jobId=&jobId=&token= diff --git a/website/docs/docs/deploy/job-notifications.md b/website/docs/docs/deploy/job-notifications.md index 548e34fc2f3..446946d6dfe 100644 --- a/website/docs/docs/deploy/job-notifications.md +++ b/website/docs/docs/deploy/job-notifications.md @@ -52,6 +52,7 @@ Any account admin can edit the Slack notifications but they'll be limited to con ### Prerequisites - You must be an administrator of the Slack workspace. - You must be an account admin to configure Slack notifications in dbt Cloud. For more details, refer to [Users and licenses](/docs/cloud/manage-access/seats-and-users). +- Make sure the notification channel (where you want to receive alerts) is a public channel. The integration only supports public channels in the Slack workspace. ### Set up the Slack integration diff --git a/website/docs/docs/deploy/run-visibility.md b/website/docs/docs/deploy/run-visibility.md index ff9abfa5b0b..0ace26eb5ed 100644 --- a/website/docs/docs/deploy/run-visibility.md +++ b/website/docs/docs/deploy/run-visibility.md @@ -26,7 +26,7 @@ You can view or download in-progress and historical logs for your dbt runs. 
This ## Model timing -> Available on [multi-tenant](/docs/cloud/about-cloud/regions-ip-addresses) dbt Cloud accounts on the [Team or Enterprise plans](https://www.getdbt.com/pricing/). +> Available on [multi-tenant](/docs/cloud/about-cloud/access-regions-ip-addresses) dbt Cloud accounts on the [Team or Enterprise plans](https://www.getdbt.com/pricing/). The model timing dashboard on dbt Cloud displays the composition, order, and time taken by each model in a job run. The visualization appears for successful jobs and highlights the top 1% of model durations. This helps you identify bottlenecks in your runs, so you can investigate them and potentially make changes to improve their performance. diff --git a/website/docs/docs/deploy/webhooks.md b/website/docs/docs/deploy/webhooks.md index f6c766ab201..e036444c304 100644 --- a/website/docs/docs/deploy/webhooks.md +++ b/website/docs/docs/deploy/webhooks.md @@ -35,7 +35,7 @@ You can also check out the free [dbt Fundamentals course](https://courses.getdbt ## Create a webhook subscription {#create-a-webhook-subscription} -From your **Account Settings** in dbt Cloud (using the gear menu in the top right corner), click **Create New Webhook** in the **Webhooks** section. You can find the appropriate dbt Cloud access URL for your region and plan with [Regions & IP addresses](/docs/cloud/about-cloud/regions-ip-addresses). +From your **Account Settings** in dbt Cloud (using the gear menu in the top right corner), click **Create New Webhook** in the **Webhooks** section. You can find the appropriate dbt Cloud access URL for your region and plan with [Regions & IP addresses](/docs/cloud/about-cloud/access-regions-ip-addresses). To configure your new webhook: @@ -167,7 +167,7 @@ An example of a webhook payload for an errored run: You can use the dbt Cloud API to create new webhooks that you want to subscribe to, get detailed information about your webhooks, and to manage the webhooks that are associated with your account. 
The following sections describe the API endpoints you can use for this. :::info Access URLs -dbt Cloud is hosted in multiple regions in the world and each region has a different access URL. People on Enterprise plans can choose to have their account hosted in any one of these regions. For a complete list of available dbt Cloud access URLs, refer to [Regions & IP addresses](/docs/cloud/about-cloud/regions-ip-addresses). +dbt Cloud is hosted in multiple regions in the world and each region has a different access URL. People on Enterprise plans can choose to have their account hosted in any one of these regions. For a complete list of available dbt Cloud access URLs, refer to [Regions & IP addresses](/docs/cloud/about-cloud/access-regions-ip-addresses). ::: ### List all webhook subscriptions diff --git a/website/docs/docs/running-a-dbt-project/using-the-dbt-ide.md b/website/docs/docs/running-a-dbt-project/using-the-dbt-ide.md index f41bceab12d..c772ae89fab 100644 --- a/website/docs/docs/running-a-dbt-project/using-the-dbt-ide.md +++ b/website/docs/docs/running-a-dbt-project/using-the-dbt-ide.md @@ -32,7 +32,7 @@ New dbt Cloud accounts should have developer credentials created automatically a New users on existing accounts *might not* have their development credentials already configured. To manage your development credentials: -1. Navigate to your **Credentials** under **Your Profile** settings, which you can access at `https://YOUR_ACCESS_URL/settings/profile#credentials`, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan. +1. Navigate to your **Credentials** under **Your Profile** settings, which you can access at `https://YOUR_ACCESS_URL/settings/profile#credentials`, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan. 2. Select the relevant project in the list. 
After entering your developer credentials, you'll be able to access the dbt IDE. diff --git a/website/docs/faqs/API/rotate-token.md b/website/docs/faqs/API/rotate-token.md index 4470de72d5a..067df291c72 100644 --- a/website/docs/faqs/API/rotate-token.md +++ b/website/docs/faqs/API/rotate-token.md @@ -34,7 +34,7 @@ curl --location --request POST 'https://YOUR_ACCESS_URL/api/v2/users/YOUR_USER_I * Find your `YOUR_USER_ID` by reading [How to find your user ID](/faqs/Accounts/find-user-id). * Find your `YOUR_CURRENT_TOKEN` by going to **Profile Settings** -> **API Access** and copying the API key. -* Find [`YOUR_ACCESS_URL`](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan. +* Find [`YOUR_ACCESS_URL`](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan. Example: @@ -53,7 +53,7 @@ curl --location --request POST 'https://cloud.getdbt.com/api/v2/users/123/apikey ### dbt Cloud deployments -If your [dbt Cloud deployment](/docs/cloud/about-cloud/regions-ip-addresses) uses a different access URL, replace `cloud.getdbt.com` with the URL of your instance. +If your [dbt Cloud deployment](/docs/cloud/about-cloud/access-regions-ip-addresses) uses a different access URL, replace `cloud.getdbt.com` with the URL of your instance. 
For example, if your deployment is Virtual Private dbt: diff --git a/website/docs/faqs/Accounts/transfer-account.md b/website/docs/faqs/Accounts/transfer-account.md index c848547f808..693061c55c6 100644 --- a/website/docs/faqs/Accounts/transfer-account.md +++ b/website/docs/faqs/Accounts/transfer-account.md @@ -10,10 +10,10 @@ You can transfer your dbt Cloud [access control](/docs/cloud/manage-access/about | Account plan| Steps | | ------ | ---------- | -| **Developer** | You can transfer ownership by changing the email directly on your dbt Cloud profile page, which you can access using this URL when you replace `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan: `https://YOUR_ACCESS_URL/settings/profile`. Before doing this, please ensure that you unlink your GitHub profile. | +| **Developer** | You can transfer ownership by changing the email directly on your dbt Cloud profile page, which you can access using this URL when you replace `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan: `https://YOUR_ACCESS_URL/settings/profile`. Before doing this, please ensure that you unlink your GitHub profile. | | **Team** | Existing account admins with account access can add users to, or remove users from the owner group. | | **Enterprise** | Account admins can add users to, or remove users from a group with Account Admin permissions. | -| **If all account owners left the company** | If the account owner has left your organization, you will need to work with _your_ IT department to have incoming emails forwarded to the new account owner. Once your IT department has redirected the emails, you can request to reset the user password. 
Once you log in, you can change the email on the Profile page when you replace `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan: `https://YOUR_ACCESS_URL/settings/profile`. | +| **If all account owners left the company** | If the account owner has left your organization, you will need to work with _your_ IT department to have incoming emails forwarded to the new account owner. Once your IT department has redirected the emails, you can request to reset the user password. Once you log in, you can change the email on the Profile page when you replace `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan: `https://YOUR_ACCESS_URL/settings/profile`. | When you make any account owner and email changes: diff --git a/website/docs/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs.md b/website/docs/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs.md index a2967ccbe15..3e388949d59 100644 --- a/website/docs/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs.md +++ b/website/docs/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs.md @@ -126,7 +126,7 @@ if __name__ == '__main__': 3. Replace **``** and **``** with the values you used [previously](#set-up-a-databricks-secret-scope) -4. Replace **``** and **``** with the correct values of your environment and [Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan. +4. Replace **``** and **``** with the correct values of your environment and [Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan. * To find these values, navigate to **dbt Cloud**, select **Deploy -> Jobs**. Select the Job you want to run and copy the URL. 
For example: `https://cloud.getdbt.com/deploy/000000/projects/111111/jobs/222222` and therefore valid code would be: diff --git a/website/docs/guides/sl-partner-integration-guide.md b/website/docs/guides/sl-partner-integration-guide.md index 7eb158a2c85..21ea822389f 100644 --- a/website/docs/guides/sl-partner-integration-guide.md +++ b/website/docs/guides/sl-partner-integration-guide.md @@ -52,7 +52,7 @@ Best practices for exposing metrics are summarized into five themes: - [Governance](#governance-and-traceability) — Recommendations on how to establish guardrails for governed data work. - [Discoverability](#discoverability) — Recommendations on how to make user-friendly data interactions. -- [Organization](#organization) — Organize metrics and dimensions for all audiences. +- [Organization](#organization) — Organize metrics and dimensions for all audiences, use [saved queries](/docs/build/saved-queries). - [Query flexibility](#query-flexibility) — Allow users to query either one metric alone without dimensions or multiple metrics with dimensions. - [Context and interpretation](#context-and-interpretation) — Contextualize metrics for better analysis; expose definitions, metadata, lineage, and freshness. @@ -73,13 +73,13 @@ When working with more governed data, it's essential to establish clear guardrai - Consider treating [metrics](/docs/build/metrics-overview) as first-class objects rather than measures. Metrics offer a higher-level and more contextual way to interact with data, reducing the burden on end-users to manually aggregate data. -- Easy metric interactions: Provide users with an intuitive approach to: +- **Easy metric interactions** — Provide users with an intuitive approach to: * Search for Metrics — Users should be able to easily search and find relevant metrics. Metrics can serve as the starting point to lead users into exploring dimensions. 
* Search for Dimensions — Users should be able to query metrics with associated dimensions, allowing them to gain deeper insights into the data. * Filter by Dimension Values — Expose and enable users to filter metrics based on dimension values, encouraging data analysis and exploration. * Filter additional metadata — Allow users to filter metrics based on other available metadata, such as metric type and default time granularity. -- Suggested Metrics: Ideally, the system should intelligently suggest relevant metrics to users based on their team's activities. This approach encourages user exposure, facilitates learning, and supports collaboration among team members. +- **Suggested metrics** — Ideally, the system should intelligently suggest relevant metrics to users based on their team's activities. This approach encourages user exposure, facilitates learning, and supports collaboration among team members. By implementing these recommendations, the data interaction process becomes more user-friendly, empowering users to gain valuable insights without the need for extensive data manipulation. @@ -87,9 +87,11 @@ By implementing these recommendations, the data interaction process becomes more We recommend organizing metrics and dimensions in ways that a non-technical user can understand the data model, without needing much context: -- **Organizing Dimensions** — To help non-technical users understand the data model better, we recommend organizing dimensions based on the entity they originated from. For example, consider dimensions like `user__country` and `product__category`.

You can create groups by extracting `user` and `product` and then nest the respective dimensions under each group. This way, dimensions align with the entity or semantic model they belong to and make them more user-friendly and accessible. +- **Organizing dimensions** — To help non-technical users understand the data model better, we recommend organizing dimensions based on the entity they originated from. For example, consider dimensions like `user__country` and `product__category`.

You can create groups by extracting `user` and `product` and then nest the respective dimensions under each group. This way, dimensions align with the entity or semantic model they belong to and make them more user-friendly and accessible. -- **Organizing Metrics** — The goal is to organize metrics into a hierarchy in our configurations, instead of presenting them in a long list.
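As a side note on the `user__country` / `product__category` convention above: an integration can derive these entity groups mechanically by splitting on the double underscore. A minimal sketch (the `group_dimensions_by_entity` helper and the extra sample names are hypothetical, not part of any dbt API):

```python
from collections import defaultdict

def group_dimensions_by_entity(dimensions):
    """Group double-underscore-qualified dimension names by their entity prefix.

    Dimensions without a prefix fall into an "ungrouped" bucket.
    """
    groups = defaultdict(list)
    for dim in dimensions:
        entity, sep, attribute = dim.partition("__")
        if sep:
            groups[entity].append(attribute)
        else:
            groups["ungrouped"].append(dim)
    return dict(groups)

# Example: nest dimensions under the entity they originate from
dims = ["user__country", "user__signup_month", "product__category", "metric_time"]
print(group_dimensions_by_entity(dims))
# {'user': ['country', 'signup_month'], 'product': ['category'], 'ungrouped': ['metric_time']}
```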

This hierarchy helps you organize metrics based on specific criteria, such as business unit or team. By providing this structured organization, users can find and navigate metrics more efficiently, enhancing their overall data analysis experience. +- **Organizing metrics** — The goal is to organize metrics into a hierarchy in our configurations, instead of presenting them in a long list.

This hierarchy helps you organize metrics based on specific criteria, such as business unit or team. By providing this structured organization, users can find and navigate metrics more efficiently, enhancing their overall data analysis experience. + +- **Using saved queries** — The dbt Semantic Layer has a concept of [saved queries](/docs/build/saved-queries), which lets users pre-build slices of metrics, dimensions, and filters for easy access. You should surface these as first-class objects in your integration. Refer to the [JDBC](/docs/dbt-cloud-apis/sl-jdbc) and [GraphQL](/docs/dbt-cloud-apis/sl-graphql) APIs for syntax. ### Query flexibility @@ -102,7 +104,11 @@ Allow users to query either one metric alone without dimensions or multiple metrics with dimensions: - Only expose time granularities (monthly, daily, yearly) that match the available metrics. * For example, if a dbt model and its resulting semantic model have a monthly granularity, make sure querying data with a 'daily' granularity isn't available to the user. Our APIs have functionality that will help you surface the correct granularities -- We recommend that time granularity is treated as a general time dimension-specific concept and that it can be applied to more than just the primary aggregation (or `metric_time`). Consider a situation where a user wants to look at `sales` over time by `customer signup month`; in this situation, having the ability to apply granularities to both time dimensions is crucial. Our APIs include information to fetch the granularities for the primary (metric_time) dimensions, as well as all time dimensions. You can treat each time dimension and granularity selection independently in your application. Note: Initially, as a starting point, it makes sense to only support `metric_time` or the primary time dimension, but we recommend expanding that as your solution evolves. 
+- We recommend that time granularity is treated as a general time dimension-specific concept and that it can be applied to more than just the primary aggregation (or `metric_time`). + + Consider a situation where a user wants to look at `sales` over time by `customer signup month`; in this situation, having the ability to apply granularities to both time dimensions is crucial. Our APIs include information to fetch the granularities for the primary (metric_time) dimensions, as well as all time dimensions. + + You can treat each time dimension and granularity selection independently in your application. Note: As a starting point, it makes sense to support only `metric_time` or the primary time dimension, but we recommend expanding that as your solution evolves. - You should allow users to filter on date ranges and expose a calendar and nice presets for filtering these. * For example, last 30 days, last week, and so on. @@ -142,6 +148,7 @@ These are recommendations on how to evolve a Semantic Layer integration and not * Listing available dimensions based on one or many metrics * Querying defined metric values on their own or grouping by available dimensions * Display metadata from [Discovery API](/docs/dbt-cloud-apis/discovery-api) and other context +* Expose [saved queries](/docs/build/saved-queries) in your application; these are pre-built sets of metrics, dimensions, and filters that Semantic Layer developers create for easier analysis. Refer to the [JDBC](/docs/dbt-cloud-apis/sl-jdbc) and [GraphQL](/docs/dbt-cloud-apis/sl-graphql) APIs for syntax. 
**Stage 3 - More querying flexibility and better user experience (UX)** * More advanced filtering diff --git a/website/docs/guides/starburst-galaxy-qs.md b/website/docs/guides/starburst-galaxy-qs.md index 1822c83fa90..c928d37ae1a 100644 --- a/website/docs/guides/starburst-galaxy-qs.md +++ b/website/docs/guides/starburst-galaxy-qs.md @@ -28,7 +28,7 @@ You can also watch the [Build Better Data Pipelines with dbt and Starburst](http ### Prerequisites -- You have a [multi-tenant](/docs/cloud/about-cloud/regions-ip-addresses) deployment in [dbt Cloud](https://www.getdbt.com/signup/). For more information, refer to [Tenancy](/docs/cloud/about-cloud/tenancy). +- You have a [multi-tenant](/docs/cloud/about-cloud/access-regions-ip-addresses) deployment in [dbt Cloud](https://www.getdbt.com/signup/). For more information, refer to [Tenancy](/docs/cloud/about-cloud/tenancy). - You have a [Starburst Galaxy account](https://www.starburst.io/platform/starburst-galaxy/). If you don't, you can start a free trial. Refer to the [getting started guide](https://docs.starburst.io/starburst-galaxy/get-started.html) in the Starburst Galaxy docs for further setup details. - You have an AWS account with permissions to upload data to an S3 bucket. - For Amazon S3 authentication, you will need either an AWS access key and AWS secret key with access to the bucket, or you will need a cross account IAM role with access to the bucket. For details, refer to these Starburst Galaxy docs: diff --git a/website/docs/reference/resource-configs/unique_key.md b/website/docs/reference/resource-configs/unique_key.md index 4e2409bb618..9ad3417fd5e 100644 --- a/website/docs/reference/resource-configs/unique_key.md +++ b/website/docs/reference/resource-configs/unique_key.md @@ -27,11 +27,11 @@ snapshots:
## Description -A column name or expression that is unique for the results of a snapshot. dbt uses this to match records between a result set and an existing snapshot, so that changes can be captured correctly. +A column name or expression that is unique for the inputs of a snapshot. dbt uses this to match records between a result set and an existing snapshot, so that changes can be captured correctly. :::caution -Providing a non-unique key will result in unexpected snapshot results. dbt **will not** test the uniqueness of this key, consider adding a test to your project to ensure that this key is indeed unique. +Providing a non-unique key will result in unexpected snapshot results. dbt **will not** test the uniqueness of this key; consider [testing](/blog/primary-key-testing#how-to-test-primary-keys-with-dbt) the source data to ensure that this key is indeed unique. ::: diff --git a/website/sidebars.js index d3d2f191558..6429c4679a2 100644 --- a/website/sidebars.js +++ b/website/sidebars.js @@ -28,7 +28,7 @@ const sidebarSettings = { "docs/cloud/about-cloud/dbt-cloud-features", "docs/cloud/about-cloud/architecture", "docs/cloud/about-cloud/tenancy", - "docs/cloud/about-cloud/regions-ip-addresses", + "docs/cloud/about-cloud/access-regions-ip-addresses", "docs/cloud/about-cloud/browsers", ], }, // About dbt Cloud directory diff --git a/website/snippets/_sl-plan-info.md index fe4e6024226..71bffba5b1d 100644 --- a/website/snippets/_sl-plan-info.md +++ b/website/snippets/_sl-plan-info.md @@ -1,2 +1,2 @@ -To define and query metrics with the {props.product}, you must be on a {props.plan} account. Suitable for both Multi-tenant and Single-tenant accounts. Note: Single-tenant accounts should contact their account representative for necessary setup and enablement.

+To define and query metrics with the {props.product}, you must be on a {props.plan} account. Suitable for both Multi-tenant and Single-tenant accounts. Note: Single-tenant accounts should contact their account representative for necessary setup and enablement.

diff --git a/website/snippets/login_url_note.md index a46648ea9c6..65c1b6c16ca 100644 --- a/website/snippets/login_url_note.md +++ b/website/snippets/login_url_note.md @@ -1,5 +1,5 @@ :::success Logging in -Users can now log into the dbt Cloud by navigating to the following URL, replacing `LOGIN-SLUG` with the value used in the previous steps and `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your region and plan: +Users can now log in to dbt Cloud by navigating to the following URL, replacing `LOGIN-SLUG` with the value used in the previous steps and `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan: `https://YOUR_ACCESS_URL/enterprise-login/LOGIN-SLUG` ::: diff --git a/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/choosing-dbt-version/example-override-version.png b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/choosing-dbt-version/example-override-version.png new file mode 100644 index 00000000000..673311431d3 Binary files /dev/null and b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/choosing-dbt-version/example-override-version.png differ diff --git a/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/choosing-dbt-version/example-verify-overridden-version.png b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/choosing-dbt-version/example-verify-overridden-version.png new file mode 100644 index 00000000000..a6e553a0b2e Binary files /dev/null and b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/choosing-dbt-version/example-verify-overridden-version.png differ diff --git a/website/static/img/docs/dbt-cloud/find-account.png b/website/static/img/docs/dbt-cloud/find-account.png new file mode 100644 index 00000000000..8d9bb5c21d2 Binary files /dev/null and b/website/static/img/docs/dbt-cloud/find-account.png
differ diff --git a/website/vercel.json index 1e4cc2fb021..9da721dc112 100644 --- a/website/vercel.json +++ b/website/vercel.json @@ -2,6 +2,11 @@ "cleanUrls": true, "trailingSlash": false, "redirects": [ + { + "source": "/docs/cloud/about-cloud/regions-ip-addresses", + "destination": "/docs/cloud/about-cloud/access-regions-ip-addresses", + "permanent": true + }, { "source": "/reference/profiles.yml", "destination": "/docs/core/connect-data-platform/profiles.yml",
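Looping back to the Databricks workflow hunk earlier, which copies IDs out of a job page URL such as `https://cloud.getdbt.com/deploy/000000/projects/111111/jobs/222222`: a small helper can derive the dbt Cloud v2 trigger-run endpoint from that URL. This is a sketch under the assumption that the guide's script posts to the standard `api/v2/accounts/<account_id>/jobs/<job_id>/run/` route; the helper name is hypothetical:

```python
from urllib.parse import urlparse

def job_trigger_endpoint(job_page_url: str) -> str:
    """Derive the dbt Cloud v2 trigger-run endpoint from a job page URL.

    A job page URL looks like:
    https://cloud.getdbt.com/deploy/<account_id>/projects/<project_id>/jobs/<job_id>
    """
    parsed = urlparse(job_page_url)
    # Path segments: ["deploy", account_id, "projects", project_id, "jobs", job_id]
    parts = parsed.path.strip("/").split("/")
    account_id, job_id = parts[1], parts[5]
    return f"{parsed.scheme}://{parsed.netloc}/api/v2/accounts/{account_id}/jobs/{job_id}/run/"

print(job_trigger_endpoint("https://cloud.getdbt.com/deploy/000000/projects/111111/jobs/222222"))
# https://cloud.getdbt.com/api/v2/accounts/000000/jobs/222222/run/
```

Keeping the host dynamic (rather than hard-coding `cloud.getdbt.com`) matches the guide's advice to let the customer supply their own Access URL.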