-#### Copying ingestion-time partitions
+#### Copying partitions
-If you have configured your incremental model to use "ingestion"-based partitioning (`partition_by.time_ingestion_partitioning: True`), you can opt to use a legacy mechanism for inserting and overwriting partitions. While this mechanism doesn't offer the same visibility and ease of debugging as the SQL `merge` statement, it can yield significant savings in time and cost for large datasets. Behind the scenes, dbt will add or replace each partition via the [copy table API](https://cloud.google.com/bigquery/docs/managing-tables#copy-table) and partition decorators.
+If you are replacing entire partitions in your incremental runs, you can opt to do so with the [copy table API](https://cloud.google.com/bigquery/docs/managing-tables#copy-table) and partition decorators rather than a `merge` statement. While this mechanism doesn't offer the same visibility and ease of debugging as a SQL `merge` statement, it can yield significant time and cost savings for large datasets, because the copy table API doesn't incur any cost for inserting the data; it's equivalent to the `bq cp` command in the BigQuery command-line tool.
You can enable this by switching on `copy_partitions: True` in the `partition_by` configuration. This approach works only in combination with "dynamic" partition replacement.
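As a sketch, a model config enabling this might look like the following (the partition field `created_date`, its granularity, and the `events` source are illustrative placeholders):

```sql
{{
  config(
    materialized = "incremental",
    incremental_strategy = "insert_overwrite",
    partition_by = {
      "field": "created_date",
      "data_type": "timestamp",
      "granularity": "day",
      "copy_partitions": true
    }
  )
}}

select user_id, event_name, created_date
from {{ ref('events') }}
```

With this config, each affected partition is replaced via a copy job rather than a `merge` statement.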
diff --git a/website/docs/reference/resource-properties/constraints.md b/website/docs/reference/resource-properties/constraints.md
index 5ec12b100d7..8a8e46f2fa3 100644
--- a/website/docs/reference/resource-properties/constraints.md
+++ b/website/docs/reference/resource-properties/constraints.md
@@ -300,7 +300,7 @@ select
-BigQuery allows defining `not null` constraints. However, it does _not_ support or enforce the definition of unenforced constraints, such as `primary key`.
+BigQuery allows defining and enforcing `not null` constraints, and defining (but _not_ enforcing) `primary key` and `foreign key` constraints (which can be used for query optimization). BigQuery does not support defining or enforcing other constraints. For more information, refer to [Platform constraint support](/docs/collaborate/govern/model-contracts#platform-constraint-support).
Documentation: https://cloud.google.com/bigquery/docs/reference/standard-sql/data-definition-language
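For instance, a contracted model might declare these constraints in YAML like so (the model and column names are placeholders):

```yaml
models:
  - name: dim_customers
    config:
      contract:
        enforced: true
    columns:
      - name: customer_id
        data_type: int64
        constraints:
          - type: not_null     # defined and enforced by BigQuery
          - type: primary_key  # defined for query optimization, but not enforced
```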
diff --git a/website/sidebars.js b/website/sidebars.js
index 89b1e005a8c..0566ef8c3a6 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -324,6 +324,7 @@ const sidebarSettings = {
link: { type: "doc", id: "docs/build/metrics-overview" },
items: [
"docs/build/metrics-overview",
+ "docs/build/conversion",
"docs/build/cumulative",
"docs/build/derived",
"docs/build/ratio",
diff --git a/website/snippets/_new-sl-setup.md b/website/snippets/_new-sl-setup.md
index a02481db33d..a93f233d09c 100644
--- a/website/snippets/_new-sl-setup.md
+++ b/website/snippets/_new-sl-setup.md
@@ -1,14 +1,12 @@
You can set up the dbt Semantic Layer in dbt Cloud at the environment and project level. Before you begin:
-- You must have a dbt Cloud Team or Enterprise account. Suitable for both Multi-tenant and Single-tenant deployment.
- - Single-tenant accounts should contact their account representative for necessary setup and enablement.
- You must be part of the Owner group, and have the correct [license](/docs/cloud/manage-access/seats-and-users) and [permissions](/docs/cloud/manage-access/self-service-permissions) to configure the Semantic Layer:
* Enterprise plan — Developer license with Account Admin permissions. Or Owner with a Developer license, assigned Project Creator, Database Admin, or Admin permissions.
* Team plan — Owner with a Developer license.
- You must have a successful run in your new environment.
:::tip
-If you've configured the legacy Semantic Layer, it has been deprecated, and dbt Labs strongly recommends that you [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud) to dbt version 1.6 or higher to use the latest dbt Semantic Layer. Refer to the dedicated [migration guide](/guides/sl-migration) for details.
+If you've configured the legacy Semantic Layer, it has been deprecated. dbt Labs strongly recommends that you [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud) to dbt version 1.6 or higher to use the latest dbt Semantic Layer. Refer to the dedicated [migration guide](/guides/sl-migration) for details.
:::
1. In dbt Cloud, create a new [deployment environment](/docs/deploy/deploy-environments#create-a-deployment-environment) or use an existing environment on dbt 1.6 or higher.
@@ -20,7 +18,10 @@ If you've configured the legacy Semantic Layer, it has been deprecated, and dbt
-4. In the **Set Up Semantic Layer Configuration** page, enter the credentials you want the Semantic Layer to use specific to your data platform. We recommend credentials have the least privileges required because your Semantic Layer users will be querying it in downstream applications. At a minimum, the Semantic Layer needs to have read access to the schema(s) that contains the dbt models that you used to build your semantic models.
+4. In the **Set Up Semantic Layer Configuration** page, enter the credentials you want the Semantic Layer to use specific to your data platform.
+
+   - Use credentials with the least privileges required, because your Semantic Layer users will be querying it in downstream applications. At a minimum, the Semantic Layer needs read access to the schema(s) that contain the dbt models used to build your semantic models.
+   - Note, [environment variables](/docs/build/environment-variables) such as `{{env_var('DBT_WAREHOUSE')}}` aren't supported in the dbt Semantic Layer yet. You must use the actual credentials.
@@ -28,13 +29,10 @@ If you've configured the legacy Semantic Layer, it has been deprecated, and dbt
6. After saving it, you'll be provided with the connection information that allows you to connect to downstream tools. If your tool supports JDBC, save the JDBC URL or individual components (like environment id and host). If it uses the GraphQL API, save the GraphQL API host information instead.
-
+
7. Save and copy your environment ID, service token, and host, which you'll need to use downstream tools. For more info on how to integrate with partner integrations, refer to [Available integrations](/docs/use-dbt-semantic-layer/avail-sl-integrations).
8. Return to the **Project Details** page, then select **Generate Service Token**. You will need Semantic Layer Only and Metadata Only [service token](/docs/dbt-cloud-apis/service-tokens) permissions.
-
-
-Great job, you've configured the Semantic Layer 🎉!
-
+Great job, you've configured the Semantic Layer 🎉!
diff --git a/website/snippets/_sl-define-metrics.md b/website/snippets/_sl-define-metrics.md
index af3ee9f297f..fe169b4a5b4 100644
--- a/website/snippets/_sl-define-metrics.md
+++ b/website/snippets/_sl-define-metrics.md
@@ -1,6 +1,6 @@
Now that you've created your first semantic model, it's time to define your first metric! You can define metrics with the dbt Cloud IDE or command line.
-MetricFlow supports different metric types like [simple](/docs/build/simple), [ratio](/docs/build/ratio), [cumulative](/docs/build/cumulative), and [derived](/docs/build/derived). It's recommended that you read the [metrics overview docs](/docs/build/metrics-overview) before getting started.
+MetricFlow supports different metric types like [conversion](/docs/build/conversion), [simple](/docs/build/simple), [ratio](/docs/build/ratio), [cumulative](/docs/build/cumulative), and [derived](/docs/build/derived). It's recommended that you read the [metrics overview docs](/docs/build/metrics-overview) before getting started.
1. You can define metrics in the same YAML files as your semantic models or create a new file. If you want to create your metrics in a new file, create another directory called `/models/metrics`. The file structure for metrics can become more complex from here if you need to further organize your metrics, for example, by data source or business line.
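As a sketch, a simple metric defined in one of these YAML files might look like this (the metric name, label, and the `order_total` measure are placeholders, and assume the measure exists in a semantic model):

```yaml
metrics:
  - name: order_total
    description: "Sum of total order amounts."
    type: simple
    label: Order Total
    type_params:
      measure: order_total
```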
diff --git a/website/static/img/blog/authors/ejohnston.png b/website/static/img/blog/authors/ejohnston.png
new file mode 100644
index 00000000000..09fc4ed7ba3
Binary files /dev/null and b/website/static/img/blog/authors/ejohnston.png differ
diff --git a/website/static/img/blog/serverless-free-tier-data-stack-with-dlt-and-dbt-core/architecture_diagram.png b/website/static/img/blog/serverless-free-tier-data-stack-with-dlt-and-dbt-core/architecture_diagram.png
new file mode 100644
index 00000000000..ad10d32c2e7
Binary files /dev/null and b/website/static/img/blog/serverless-free-tier-data-stack-with-dlt-and-dbt-core/architecture_diagram.png differ
diff --git a/website/static/img/blog/serverless-free-tier-data-stack-with-dlt-and-dbt-core/map_screenshot.png b/website/static/img/blog/serverless-free-tier-data-stack-with-dlt-and-dbt-core/map_screenshot.png
new file mode 100644
index 00000000000..da8309c2510
Binary files /dev/null and b/website/static/img/blog/serverless-free-tier-data-stack-with-dlt-and-dbt-core/map_screenshot.png differ
diff --git a/website/static/img/docs/dbt-cloud/semantic-layer/conversion-metrics-fill-null.png b/website/static/img/docs/dbt-cloud/semantic-layer/conversion-metrics-fill-null.png
new file mode 100644
index 00000000000..0fd5e206ba7
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/semantic-layer/conversion-metrics-fill-null.png differ
diff --git a/website/vercel.json b/website/vercel.json
index b662e1c2144..1e4cc2fb021 100644
--- a/website/vercel.json
+++ b/website/vercel.json
@@ -2,6 +2,11 @@
"cleanUrls": true,
"trailingSlash": false,
"redirects": [
+ {
+ "source": "/reference/profiles.yml",
+ "destination": "/docs/core/connect-data-platform/profiles.yml",
+ "permanent": true
+ },
{
"source": "/docs/cloud/dbt-cloud-ide/dbt-cloud-tips",
"destination": "/docs/build/dbt-tips",