Merge branch 'current' into remove-st
mirnawong1 authored Nov 27, 2024
2 parents 75be3eb + c0672e2 commit ce2c306
Showing 17 changed files with 45 additions and 50 deletions.
1 change: 0 additions & 1 deletion website/dbt-versions.js
@@ -20,7 +20,6 @@ exports.versions = [
  },
  {
    version: "1.9",
-   isPrerelease: true,
  },
  {
    version: "1.8",
4 changes: 2 additions & 2 deletions website/docs/docs/build/metricflow-time-spine.md
@@ -179,8 +179,8 @@ final as (
select *
from final
-- filter the time spine to a specific range
-where date_day > dateadd(year, -4, current_timestamp())
-and date_day < dateadd(day, 30, current_timestamp())
+where date_day > date_add(DATE(current_timestamp()), INTERVAL -4 YEAR)
+and date_day < date_add(DATE(current_timestamp()), INTERVAL 30 DAY)
```

</File>
13 changes: 1 addition & 12 deletions website/docs/docs/build/python-models.md
@@ -673,18 +673,7 @@ def model(dbt, session: snowpark.Session):
</VersionBlock>
**About "sprocs":** dbt submits Python models to run as _stored procedures_, which some people call _sprocs_ for short. By default, dbt will create a named sproc containing your model's compiled Python code, and then _call_ it to execute. Snowpark has an Open Preview feature for _temporary_ or _anonymous_ stored procedures ([docs](https://docs.snowflake.com/en/sql-reference/sql/call-with.html)), which are faster and leave a cleaner query history. You can switch this feature on for your models by configuring `use_anonymous_sproc: True`. We plan to switch this on for all dbt + Snowpark Python models starting with the release of dbt Core version 1.4.
<File name='dbt_project.yml'>
```yml
# I asked Snowflake Support to enable this Private Preview feature,
# and now my dbt-py models run even faster!
models:
use_anonymous_sproc: True
```

</File>
**About "sprocs":** dbt submits Python models to run as _stored procedures_, which some people call _sprocs_ for short. By default, dbt will use Snowpark's _temporary_ or _anonymous_ stored procedures ([docs](https://docs.snowflake.com/en/sql-reference/sql/call-with.html)), which are faster and keep query history cleaner than named sprocs containing your model's compiled Python code. To disable this feature, set `use_anonymous_sproc: False` in your model configuration.
**Docs:** ["Developer Guide: Snowpark Python"](https://docs.snowflake.com/en/developer-guide/snowpark/python/index.html)
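
For illustration, a minimal sketch of the opt-out described above in `dbt_project.yml` — the `use_anonymous_sproc` name comes from the new paragraph, while the nesting under a project name (`my_project`) is an assumption to confirm for your own setup:

```yml
# dbt_project.yml — hedged sketch, not the documented example;
# `my_project` is a placeholder for your project name.
models:
  my_project:
    +use_anonymous_sproc: False  # fall back to named stored procedures
```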
@@ -26,6 +26,7 @@ To access the features, you should meet the following:
4. You have [admin permissions](/docs/cloud/manage-access/enterprise-permissions) in dbt Cloud to edit project settings or production environment settings.
5. Use Tableau as your BI tool and enable metadata permissions or work with an admin to do so. Compatible with Tableau Cloud or Tableau Server with the Metadata API enabled.
- If you're using Tableau Server, you need to [allowlist dbt Cloud's IP addresses](/docs/cloud/about-cloud/access-regions-ip-addresses) for your dbt Cloud region.
+   - Currently, you can only connect to a single Tableau site on the same server.

## Set up in Tableau

12 changes: 0 additions & 12 deletions website/docs/docs/cloud/manage-access/auth0-migration.md
@@ -5,22 +5,10 @@ sidebar: "SSO Auth0 Migration"
description: "Required actions for migrating to Auth0 for SSO services on dbt Cloud."
---

-:::note
-
-This migration is a feature of the dbt Cloud Enterprise plan. To learn more about an Enterprise plan, contact us at [[email protected]](mailto:[email protected]).
-
-For single-tenant Virtual Private Cloud, you should [email dbt Cloud Support](mailto:[email protected]) to set up or update your SSO configuration.
-
-:::

dbt Labs is partnering with Auth0 to bring enhanced features to dbt Cloud's single sign-on (SSO) capabilities. Auth0 is an identity and access management (IAM) platform with advanced security features, and it will be leveraged by dbt Cloud. These changes will require some action from customers with SSO configured in dbt Cloud today, and this guide will outline the necessary changes for each environment.

If you have not yet configured SSO in dbt Cloud, refer instead to our setup guides for [SAML](/docs/cloud/manage-access/set-up-sso-saml-2.0), [Okta](/docs/cloud/manage-access/set-up-sso-okta), [Google Workspace](/docs/cloud/manage-access/set-up-sso-google-workspace), or [Microsoft Entra ID (formerly Azure AD)](/docs/cloud/manage-access/set-up-sso-microsoft-entra-id) single sign-on services.

-## Auth0 Multi-tenant URIs
-
-<Snippet path="auth0-uri" />

## Start the migration

The Auth0 migration feature is being rolled out incrementally to customers who have SSO features already enabled. When the migration option has been enabled on your account, you will see **SSO Updates Available** on the right side of the menu bar, near the settings icon.
@@ -17,8 +17,8 @@ Environment-level permissions give dbt Cloud admins more flexibility to protect

- Environment-level permissions do not allow you to create custom roles and permissions for each resource type in dbt Cloud.
- You can only select environment types, and can’t specify a particular environment within a project.
-- You can't select specific resources within environments. dbt Cloud jobs, runs, and environment variables are all environment resources.
-  - For example, you can't specify that a user only has access to jobs but not environment variables. Access to a given environment gives the user access to everything within that environment.
+- You can't select specific resources within environments. dbt Cloud jobs and runs are environment resources.
+  - For example, you can't specify that a user only has access to jobs but not runs. Access to a given environment gives the user access to everything within that environment.

## Environments and roles

6 changes: 3 additions & 3 deletions website/docs/docs/collaborate/auto-exposures.md
@@ -9,12 +9,12 @@ image: /img/docs/cloud-integrations/auto-exposures/explorer-lineage.jpg

# Auto-exposures <Lifecycle status="preview,enterprise" />

-As a data team, it’s critical that you have context into the downstream use cases and users of your data products. Auto-exposures integrates natively with Tableau (Power BI coming soon) and auto-generates downstream lineage in dbt Explorer for a richer experience.
+As a data team, it’s critical that you have context into the downstream use cases and users of your data products. Auto-exposures integrate natively with Tableau (Power BI coming soon) and auto-generate downstream lineage in dbt Explorer for a richer experience.

-Auto-exposures helps users understand how their models are used in downstream analytics tools to inform investments and reduce incidents — ultimately building trust and confidence in data products. It imports and auto-generates exposures based on Tableau dashboards, with user-defined curation.
+Auto-exposures help users understand how their models are used in downstream analytics tools to inform investments and reduce incidents — ultimately building trust and confidence in data products. It imports and auto-generates exposures based on Tableau dashboards, with user-defined curation.

## Supported plans
-Auto-exposures is available on [Versionless](/docs/dbt-versions/versionless-cloud) and for [dbt Cloud Enterprise](https://www.getdbt.com/pricing/) plans.
+Auto-exposures is available on [Versionless](/docs/dbt-versions/versionless-cloud) and [dbt Cloud Enterprise](https://www.getdbt.com/pricing/) plans. Currently, you can only connect to a single Tableau site on the same server.

:::info Tableau Server
If you're using Tableau Server, you need to [allowlist dbt Cloud's IP addresses](/docs/cloud/about-cloud/access-regions-ip-addresses) for your dbt Cloud region.
@@ -1,5 +1,5 @@
---
title: "Upgrading to v1.9 (beta)"
title: "Upgrading to v1.9"
id: upgrading-to-v1.9
description: New features and changes in dbt Core v1.9
displayed_sidebar: "docs"
@@ -29,7 +29,8 @@ Features and functionality new in dbt v1.9.
### Microbatch `incremental_strategy`

:::info
-While microbatch is in "beta", this functionality is still gated behind an env var, which will change to a behavior flag when 1.9 is GA. To use microbatch, set `DBT_EXPERIMENTAL_MICROBATCH` to `true` wherever you're running dbt Core.
+If you use a custom microbatch macro, set the [`require_batched_execution_for_custom_microbatch_strategy`](/reference/global-configs/behavior-changes#custom-microbatch-strategy) behavior flag in your `dbt_project.yml` to enable batched execution. If you don't have a custom microbatch macro, you don't need to set this flag as dbt will handle microbatching automatically for any model using the microbatch strategy.
:::
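
A minimal sketch of setting this flag, assuming dbt's usual top-level `flags:` key in `dbt_project.yml` (verify against the linked behavior-changes page):

```yml
# dbt_project.yml — sketch only; needed only if you use a custom microbatch macro
flags:
  require_batched_execution_for_custom_microbatch_strategy: True
```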

Incremental models are, and have always been, a *performance optimization* — for datasets that are too large to be dropped and recreated from scratch every time you do a `dbt run`. Learn more about [incremental models](/docs/build/incremental-models-overview).
@@ -83,6 +84,7 @@ You can read more about each of these behavior changes in the following links:
- (Introduced, disabled by default) [`skip_nodes_if_on_run_start_fails` project config flag](/reference/global-configs/behavior-changes#behavior-change-flags). If the flag is set and **any** `on-run-start` hook fails, mark all selected nodes as skipped.
- `on-run-start/end` hooks are **always** run, regardless of whether they passed or failed last time.
- (Introduced, disabled by default) [[Redshift] `restrict_direct_pg_catalog_access`](/reference/global-configs/behavior-changes#redshift-restrict_direct_pg_catalog_access). If the flag is set the adapter will use the Redshift API (through the Python client) if available, or query Redshift's `information_schema` tables instead of using `pg_` tables.
+- (Introduced, disabled by default) [`require_batched_execution_for_custom_microbatch_strategy`](/reference/global-configs/behavior-changes#custom-microbatch-strategy). Set to `True` if you use a custom microbatch macro to enable batched execution. If you don't have a custom microbatch macro, you don't need to set this flag as dbt will handle microbatching automatically for any model using the microbatch strategy.
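
A hedged sketch of opting into the first two flags in this list via `dbt_project.yml` (the microbatch flag is sketched earlier); both are disabled by default, and the names are taken from this list:

```yml
# dbt_project.yml — illustrative opt-in; both flags default to False
flags:
  skip_nodes_if_on_run_start_fails: True     # skip all selected nodes if any on-run-start hook fails
  restrict_direct_pg_catalog_access: True    # [Redshift] use the Redshift API or information_schema instead of pg_ tables
```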

## Adapter specific features and functionalities

@@ -92,7 +94,7 @@ You can read more about each of these behavior changes in the following links:

### Snowflake

-- Iceberg Table Format support will be available on three out of the box materializations: table, incremental, dynamic tables.
+- Iceberg Table Format support will be available on three out-of-the-box materializations: table, incremental, dynamic tables.

### Bigquery

Expand All @@ -107,7 +109,7 @@ You can read more about each of these behavior changes in the following links:

We also made some quality-of-life improvements in Core 1.9, enabling you to:

-- Maintain data quality now that dbt returns an an error (versioned models) or warning (unversioned models) when someone [removes a contracted model by deleting, renaming, or disabling](/docs/collaborate/govern/model-contracts#how-are-breaking-changes-handled) it.
+- Maintain data quality now that dbt returns an error (versioned models) or warning (unversioned models) when someone [removes a contracted model by deleting, renaming, or disabling](/docs/collaborate/govern/model-contracts#how-are-breaking-changes-handled) it.
- Document [data tests](/reference/resource-properties/description).
- Use `ref` and `source` in [foreign key constraints](/reference/resource-properties/constraints).
- Use `dbt test` with the `--resource-type` / `--exclude-resource-type` flag, making it possible to include or exclude data tests (`test`) or unit tests (`unit_test`).
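
The foreign-key constraints item above is easiest to picture in YAML; a minimal sketch, assuming an `orders` model whose `customer_id` references a `customers` model (all names illustrative):

```yml
# models/schema.yml — hedged sketch of a foreign key constraint using ref()
models:
  - name: orders
    columns:
      - name: customer_id
        constraints:
          - type: foreign_key
            to: ref('customers')   # dbt resolves the relation at compile time
            to_columns: [id]
```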
1 change: 1 addition & 0 deletions website/docs/docs/dbt-versions/release-notes.md
@@ -19,6 +19,7 @@ Release notes are grouped by month for both multi-tenant and virtual private clo
\* The official release date for this new format of release notes is May 15th, 2024. Historical release notes for prior dates may not reflect all available features released earlier this year or their tenancy availability.

## November 2024
+- **Fix**: Job environment variable overrides in credentials are now respected for Exports. Previously, they were ignored.
- **Behavior change**: If you use a custom microbatch macro, set a [`require_batched_execution_for_custom_microbatch_strategy` behavior flag](/reference/global-configs/behavior-changes#custom-microbatch-strategy) in your `dbt_project.yml` to enable batched execution. If you don't have a custom microbatch macro, you don't need to set this flag as dbt will handle microbatching automatically for any model using the [microbatch strategy](/docs/build/incremental-microbatch#how-microbatch-compares-to-other-incremental-strategies).
- **Enhancement**: For users that have Advanced CI's [compare changes](/docs/deploy/advanced-ci#compare-changes) feature enabled, you can optimize performance when running comparisons by using custom dbt syntax to customize deferral usage, exclude specific large models (or groups of models with tags), and more. Refer to [Compare changes custom commands](/docs/deploy/job-commands#compare-changes-custom-commands) for examples of how to customize the comparison command.
- **New**: SQL linting in CI jobs is now generally available in dbt Cloud. You can enable SQL linting in your CI jobs, using [SQLFluff](https://sqlfluff.com/), to automatically lint all SQL files in your project as a run step before your CI job builds. SQLFluff linting is available on [dbt Cloud Versionless](/docs/dbt-versions/versionless-cloud) and to dbt Cloud [Team or Enterprise](https://www.getdbt.com/pricing/) accounts. Refer to [SQL linting](/docs/deploy/continuous-integration#sql-linting) for more information.
4 changes: 2 additions & 2 deletions website/docs/docs/deploy/merge-jobs.md
@@ -5,7 +5,7 @@ description: "Learn how to trigger a dbt job run when a Git pull request merges.
---


-You can set up a merge job to implement a continuous development (CD) workflow in dbt Cloud. The merge job triggers a dbt job to run when someone merges Git pull requests into production. This workflow creates a seamless development experience where changes made in code will automatically update production data. Also, you can use this workflow for running `dbt compile` to update your environment's manifest so subsequent CI job runs are more performant.
+You can set up a merge job to implement a continuous deployment (CD) workflow in dbt Cloud. The merge job triggers a dbt job to run when someone merges Git pull requests into production. This workflow creates a seamless development experience where changes made in code will automatically update production data. Also, you can use this workflow for running `dbt compile` to update your environment's manifest so subsequent CI job runs are more performant.

By using CD in dbt Cloud, you can take advantage of deferral to build only the edited model and any downstream changes. With merge jobs, state will be updated almost instantly, always giving the most up-to-date state information in [dbt Explorer](/docs/collaborate/explore-projects).

@@ -62,4 +62,4 @@ The following is an example of creating a new **Code pushed** trigger in Azure D

<Lightbox src="/img/docs/dbt-cloud/using-dbt-cloud/example-azuredevops-new-event.png" title="Example of creating a new trigger to push events in Azure Devops"/>

-</Expandable>
\ No newline at end of file
+</Expandable>
9 changes: 7 additions & 2 deletions website/docs/reference/dbt-classes.md
@@ -98,9 +98,14 @@ col.numeric_type('numeric', 12, 4) # numeric(12,4)

### Properties

-- **name**: Returns the name of the column
 - **char_size**: Returns the maximum size for character varying columns
+- **column**: Returns the name of the column
+- **data_type**: Returns the data type of the column (with size/precision/scale included)
+- **dtype**: Returns the data type of the column (without any size/precision/scale included)
+- **name**: Returns the name of the column (identical to `column`, provided as an alias).
 - **numeric_precision**: Returns the maximum precision for fixed decimal columns
 - **numeric_scale**: Returns the maximum scale for fixed decimal columns
 - **quoted**: Returns the name of the column wrapped in quotes
-- **data_type**: Returns the data type of the column

### Instance methods

25 changes: 17 additions & 8 deletions website/docs/reference/global-configs/logs.md
@@ -66,19 +66,28 @@ See [structured logging](/reference/events-logging#structured-logging) for more

The `LOG_LEVEL` config sets the minimum severity of events captured in the console and file logs. This is a more flexible alternative to the `--debug` flag. The available options for the log levels are `debug`, `info`, `warn`, `error`, or `none`.

-Setting the `--log-level` will configure console and file logs.
+- Setting the `--log-level` will configure console and file logs.

-```text
-dbt --log-level debug run
-```
+  ```text
+  dbt --log-level debug run
+  ```
+- Setting the `LOG_LEVEL` to `none` will disable information from being sent to either the console or file logs.
+
+  ```text
+  dbt --log-level none
+  ```

-To set the file log level as a different value than the console, use the `--log-level-file` flag.
+- To set the file log level as a different value than the console, use the `--log-level-file` flag.

-```text
-dbt --log-level-file error run
-```
+  ```text
+  dbt --log-level-file error run
+  ```
+- To only disable writing to the logs file but keep console logs, set `LOG_LEVEL_FILE` config to none.
+  ```text
+  dbt --log-level-file none
+  ```

### Debug-level logging

2 changes: 1 addition & 1 deletion website/snippets/_enterprise-permissions-table.md
@@ -104,7 +104,7 @@ Key:
| Custom env. variables | W | W | W | W | W | W | - | R | - | - | R | W | - |
| Data platform configs | W | W | W | W | R | W | - | - | - | - | R | R | - |
| Develop (IDE or CLI) | W | W | - | W | - | - | - | - | - | - | - | - | - |
-| Environments | W | R* | R* | R* | R* | W | - | R | - | - | R | R* | - |
+| Environments | W | R | R | R | R | W | - | R | - | - | R | R | - |
| Jobs | W | R* | R* | R* | R* | W | R | R | - | - | R | R* | - |
| Metadata GraphQL API access| R | R | R | R | R | R | - | R | R | - | R | R | - |
| Permissions | W | - | R | R | R | - | - | - | - | - | - | R | - |
3 changes: 2 additions & 1 deletion website/snippets/core-versions-table.md
@@ -2,7 +2,8 @@

| dbt Core | Initial release | Support level and end date |
|:-------------------------------------------------------------:|:---------------:|:-------------------------------------:|
-| [**v1.8**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.8) | May 9 2024 | <b>Active Support &mdash; May 8, 2025</b> |
+| [**v1.9**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.9) | Release candidate | TBA |
+| [**v1.8**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.8) | May 9 2024 | <b>Active Support &mdash; May 8, 2025</b>|
| [**v1.7**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.7) | Nov 2, 2023 | <div align="left">**dbt Core and dbt Cloud Developer & Team customers:** End of Life <br /> **dbt Cloud Enterprise customers:** Critical Support until further notice <sup>1</sup></div> |
| [**v1.6**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.6) | Jul 31, 2023 | End of Life ⚠️ |
| [**v1.5**](/docs/dbt-versions/core-upgrade/upgrading-to-v1.5) | Apr 27, 2023 | End of Life ⚠️ |
Binary file not shown.
Binary file not shown.
Binary file modified website/static/img/docs/dbt-cloud/cloud-ide/dbt-copilot-doc.gif
