diff --git a/website/docs/community/resources/getting-help.md b/website/docs/community/resources/getting-help.md
index 2f30644186e..19b7c22fbdf 100644
--- a/website/docs/community/resources/getting-help.md
+++ b/website/docs/community/resources/getting-help.md
@@ -60,4 +60,4 @@ If you want to receive dbt training, check out our [dbt Learn](https://learn.get
- Billing
- Bug reports related to the web interface
-As a rule of thumb, if you are using dbt Cloud, but your problem is related to code within your dbt project, then please follow the above process rather than reaching out to support.
+As a rule of thumb, if you are using dbt Cloud, but your problem is related to code within your dbt project, then please follow the above process rather than reaching out to support. Refer to [dbt Cloud support](/docs/dbt-support) for more information.
diff --git a/website/docs/docs/build/materializations.md b/website/docs/docs/build/materializations.md
index 8846f4bb0c5..192284a31ca 100644
--- a/website/docs/docs/build/materializations.md
+++ b/website/docs/docs/build/materializations.md
@@ -14,6 +14,8 @@ pagination_next: "docs/build/incremental-models"
- ephemeral
- materialized view
+You can also configure [custom materializations](/guides/create-new-materializations?step=1) in dbt. Custom materializations are a powerful way to extend dbt's functionality to meet your specific needs.
+
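+As an illustrative sketch (the materialization name `my_custom_materialization` is hypothetical), a custom materialization is applied the same way as a built-in one, for example in `dbt_project.yml`:
+
+```yaml
+# Illustrative only — assumes a custom materialization named
+# `my_custom_materialization` is defined in your project or an installed package.
+models:
+  my_project:
+    reporting:
+      +materialized: my_custom_materialization
+```
+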
## Configuring materializations
By default, dbt models are materialized as "views". Models can be configured with a different materialization by supplying the `materialized` configuration parameter as shown below.
diff --git a/website/docs/docs/build/semantic-models.md b/website/docs/docs/build/semantic-models.md
index 09f808d7a17..5c6883cdcee 100644
--- a/website/docs/docs/build/semantic-models.md
+++ b/website/docs/docs/build/semantic-models.md
@@ -43,7 +43,7 @@ semantic_models:
- name: the_name_of_the_semantic_model ## Required
description: same as always ## Optional
model: ref('some_model') ## Required
- default: ## Required
+ defaults: ## Required
agg_time_dimension: dimension_name ## Required if the model contains dimensions
entities: ## Required
- see more information in entities
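+As a complete, illustrative example using the corrected `defaults:` key (model, entity, and dimension names below are hypothetical):
+
+```yaml
+semantic_models:
+  - name: orders
+    description: Order fact table at the order grain
+    model: ref('fct_orders')
+    defaults:
+      agg_time_dimension: ordered_at
+    entities:
+      - name: order_id
+        type: primary
+    dimensions:
+      - name: ordered_at
+        type: time
+        type_params:
+          time_granularity: day
+```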
diff --git a/website/docs/docs/build/sl-getting-started.md b/website/docs/docs/build/sl-getting-started.md
index d5a59c33ec2..4274fccf509 100644
--- a/website/docs/docs/build/sl-getting-started.md
+++ b/website/docs/docs/build/sl-getting-started.md
@@ -74,21 +74,9 @@ import SlSetUp from '/snippets/_new-sl-setup.md';
If you're encountering some issues when defining your metrics or setting up the dbt Semantic Layer, check out a list of answers to some of the questions or problems you may be experiencing.
-
- How do I migrate from the legacy Semantic Layer to the new one?
-
-
If you're using the legacy Semantic Layer, we highly recommend you upgrade your dbt version to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated migration guide for more info.
-
-
-
-How are you storing my data?
-User data passes through the Semantic Layer on its way back from the warehouse. dbt Labs ensures security by authenticating through the customer's data warehouse. Currently, we don't cache data for the long term, but it might temporarily stay in the system for up to 10 minutes, usually less. In the future, we'll introduce a caching feature that allows us to cache data on our infrastructure for up to 24 hours.
-
-
-
-Is the dbt Semantic Layer open source?
-The dbt Semantic Layer is proprietary; however, some components of the dbt Semantic Layer are open source, such as dbt-core and MetricFlow.
dbt Cloud Developer or dbt Core users can define metrics in their project, including a local dbt Core project, using the dbt Cloud IDE, dbt Cloud CLI, or dbt Core CLI. However, to experience the universal dbt Semantic Layer and access those metrics using the API or downstream tools, users must be on a dbt Cloud Team or Enterprise plan.
Refer to Billing for more information.
-
+import SlFaqs from '/snippets/_sl-faqs.md';
+
+<SlFaqs />
+
## Next steps
diff --git a/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md b/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md
index cc1c2531f56..7f32505d56e 100644
--- a/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md
+++ b/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md
@@ -11,8 +11,8 @@ dbt Cloud is [hosted](/docs/cloud/about-cloud/architecture) in multiple regions
| Region | Location | Access URL | IP addresses | Developer plan | Team plan | Enterprise plan |
|--------|----------|------------|--------------|----------------|-----------|-----------------|
-| North America multi-tenant [^1] | AWS us-east-1 (N. Virginia) | cloud.getdbt.com | 52.45.144.63 54.81.134.249 52.22.161.231 | ✅ | ✅ | ✅ |
-| North America Cell 1 [^1] | AWS us-east-1 (N.Virginia) | {account prefix}.us1.dbt.com | [Located in Account Settings](#locating-your-dbt-cloud-ip-addresses) | ❌ | ❌ | ✅ |
+| North America multi-tenant [^1] | AWS us-east-1 (N. Virginia) | cloud.getdbt.com | 52.45.144.63 54.81.134.249 52.22.161.231 52.3.77.232 3.214.191.130 34.233.79.135 | ✅ | ✅ | ✅ |
+| North America Cell 1 [^1] | AWS us-east-1 (N. Virginia) | {account prefix}.us1.dbt.com | 52.45.144.63 54.81.134.249 52.22.161.231 52.3.77.232 3.214.191.130 34.233.79.135 | ❌ | ❌ | ✅ |
| EMEA [^1] | AWS eu-central-1 (Frankfurt) | emea.dbt.com | 3.123.45.39 3.126.140.248 3.72.153.148 | ❌ | ❌ | ✅ |
| APAC [^1] | AWS ap-southeast-2 (Sydney)| au.dbt.com | 52.65.89.235 3.106.40.33 13.239.155.206 | ❌ | ❌ | ✅ |
| Virtual Private dbt or Single tenant | Customized | Customized | Ask [Support](/community/resources/getting-help#dbt-cloud-support) for your IPs | ❌ | ❌ | ✅ |
diff --git a/website/docs/docs/cloud/billing.md b/website/docs/docs/cloud/billing.md
index 31b7689ceb9..b677f06ccfe 100644
--- a/website/docs/docs/cloud/billing.md
+++ b/website/docs/docs/cloud/billing.md
@@ -126,6 +126,8 @@ All included successful models built numbers above reflect our most current pric
As an Enterprise customer, you pay annually via invoice, monthly in arrears for additional usage (if applicable), and may benefit from negotiated usage rates. Please refer to your order form or contract for your specific pricing details, or [contact the account team](https://www.getdbt.com/contact-demo) with any questions.
+Enterprise plan billing information is not available in the dbt Cloud UI. Changes are handled through your dbt Labs Solutions Architect or account team manager.
+
### Legacy plans
Customers who purchased the dbt Cloud Team plan before August 11, 2023, remain on a legacy pricing plan as long as your account is in good standing. The legacy pricing plan is based on seats and includes unlimited models, subject to reasonable use.
diff --git a/website/docs/docs/cloud/secure/about-privatelink.md b/website/docs/docs/cloud/secure/about-privatelink.md
index b31e4c08a26..2134ab25cfe 100644
--- a/website/docs/docs/cloud/secure/about-privatelink.md
+++ b/website/docs/docs/cloud/secure/about-privatelink.md
@@ -23,3 +23,4 @@ dbt Cloud supports the following data platforms for use with the PrivateLink fea
- [Databricks](/docs/cloud/secure/databricks-privatelink)
- [Redshift](/docs/cloud/secure/redshift-privatelink)
- [Postgres](/docs/cloud/secure/postgres-privatelink)
+- [VCS](/docs/cloud/secure/vcs-privatelink)
diff --git a/website/docs/docs/cloud/secure/vcs-privatelink.md b/website/docs/docs/cloud/secure/vcs-privatelink.md
new file mode 100644
index 00000000000..13bb97dd6cd
--- /dev/null
+++ b/website/docs/docs/cloud/secure/vcs-privatelink.md
@@ -0,0 +1,82 @@
+---
+title: "Configuring PrivateLink for self-hosted cloud version control systems (VCS)"
+id: vcs-privatelink
+description: "Setting up a PrivateLink connection between dbt Cloud and an organization’s cloud-hosted git server"
+sidebar_label: "PrivateLink for VCS"
+---
+
+import SetUpPages from '/snippets/_available-tiers-privatelink.md';
+
+
+
+AWS PrivateLink provides private connectivity from dbt Cloud to your self-hosted cloud version control system (VCS) service by routing requests through your virtual private cloud (VPC). This type of connection does not require you to publicly expose an endpoint to your VCS repositories or for requests to the service to traverse the public internet, ensuring the most secure connection possible. AWS recommends PrivateLink connectivity as part of its [Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html) and details this particular pattern in the **Shared Services** section of the [AWS PrivateLink whitepaper](https://docs.aws.amazon.com/pdfs/whitepapers/latest/aws-privatelink/aws-privatelink.pdf).
+
+This guide describes, at a high level, the resources necessary to implement this solution. Cloud environments and provisioning processes vary greatly, so you may need to adapt the information in this guide to fit your requirements.
+
+## PrivateLink connection overview
+
+
+
+### Required resources for creating a connection
+
+Creating an Interface VPC PrivateLink connection requires creating multiple AWS resources in your AWS account(s) and private network containing the self-hosted VCS instance. You are responsible for provisioning and maintaining these resources. Once provisioned, connection information and permissions are shared with dbt Labs to complete the connection, allowing for direct VPC to VPC private connectivity.
+
+This approach is distinct from and does not require you to implement VPC peering between your AWS account(s) and dbt Cloud.
+
+You need the following resources to create a PrivateLink connection, which allows the dbt Cloud application to connect to your self-hosted cloud VCS. These resources can be created via the AWS Console, AWS CLI, or Infrastructure-as-Code such as [Terraform](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) or [AWS CloudFormation](https://aws.amazon.com/cloudformation/).
+
+- **Target Group(s)** - A [Target Group](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html) is attached to a [Listener](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-listeners.html) on the NLB and is responsible for routing incoming requests to healthy targets in the group. If connecting to the VCS system over both SSH and HTTPS, two **Target Groups** will need to be created.
+ - **Target Type (choose most applicable):**
+ - **Instance/ASG:** Select existing EC2 instance(s) where the VCS system is running, or [an autoscaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/attach-load-balancer-asg.html) (ASG) to automatically attach any instances launched from that ASG.
+ - **Application Load Balancer (ALB):** Select an ALB that already has VCS EC2 instances attached (HTTP/S traffic only).
+ - **IP Addresses:** Select the IP address(es) of the EC2 instances where the VCS system is installed. Keep in mind that the IP of the EC2 instance can change if the instance is relaunched for any reason.
+ - **Protocol/Port:** Choose one protocol and port pair per Target Group, for example:
+ - TG1 - SSH: TCP/22
+ - TG2 - HTTPS: TCP/443 or TLS if you want to attach a certificate to decrypt TLS connections ([details](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html)).
+ - **VPC:** Choose the VPC in which the VPC Endpoint Service and NLB will be created.
+ - **Health checks:** Targets must register as healthy in order for the NLB to forward requests. Configure a health check that’s appropriate for your service and the protocol of the Target Group ([details](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html)).
+ - **Register targets:** Register the targets (see above) for the VCS service ([details](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-register-targets.html)). _It's critical to be sure targets are healthy before attempting connection from dbt Cloud._
+- **Network Load Balancer (NLB)** - Requires creating a Listener that attaches to the newly created Target Group(s) for port `443` and/or `22`, as applicable.
+ - **Scheme:** Internal
+ - **IP address type:** IPv4
+ - **Network mapping:** Choose the VPC that the VPC Endpoint Service and NLB are being deployed in, and choose subnets from at least two Availability Zones.
+ - **Listeners:** Create one Listener per Target Group that maps the appropriate incoming port to the corresponding Target Group ([details](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-listeners.html)).
+- **Endpoint Service** - The VPC Endpoint Service is what allows for the VPC to VPC connection, routing incoming requests to the configured load balancer.
+ - **Load balancer type:** Network.
+ - **Load balancer:** Attach the NLB created in the previous step.
+ - **Acceptance required (recommended)**: When enabled, requires a new connection request to the VPC Endpoint Service to be accepted by the customer before connectivity is allowed ([details](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#accept-reject-connection-requests)).
+
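+The resources above can be sketched in Terraform as follows (a minimal sketch for the SSH path only; resource names and the referenced variables are placeholders to adapt to your environment):
+
+```hcl
+# Illustrative only — names, variables, and the single SSH target group are placeholders.
+resource "aws_lb_target_group" "vcs_ssh" {
+  name        = "vcs-ssh"
+  port        = 22
+  protocol    = "TCP"
+  target_type = "instance"
+  vpc_id      = var.vcs_vpc_id
+}
+
+resource "aws_lb" "vcs_nlb" {
+  name               = "vcs-privatelink-nlb"
+  internal           = true
+  load_balancer_type = "network"
+  subnets            = var.private_subnet_ids # subnets in at least two AZs
+}
+
+resource "aws_lb_listener" "ssh" {
+  load_balancer_arn = aws_lb.vcs_nlb.arn
+  port              = 22
+  protocol          = "TCP"
+  default_action {
+    type             = "forward"
+    target_group_arn = aws_lb_target_group.vcs_ssh.arn
+  }
+}
+
+resource "aws_vpc_endpoint_service" "vcs" {
+  acceptance_required        = true
+  network_load_balancer_arns = [aws_lb.vcs_nlb.arn]
+}
+```
+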
+ Once these resources have been provisioned, access needs to be granted for the dbt Labs AWS account to create a VPC Endpoint in our VPC. On the newly created VPC Endpoint Service, add a new [Allowed Principal](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#add-remove-permissions) for the appropriate dbt Labs principal:
+
+ - **AWS Account ID:** `arn:aws:iam:::root` (contact your dbt Labs account representative for appropriate account ID).
+
+### Completing the connection
+
+To complete the connection, dbt Labs must now provision a VPC Endpoint to connect to your VPC Endpoint Service. This requires that you send the following information:
+
+ - VPC Endpoint Service name:
+
+
+
+ - **DNS configuration:** If the connection to the VCS service requires a custom domain and/or URL for TLS, a private hosted zone can be configured by the dbt Labs Infrastructure team in the dbt Cloud private network. For example:
+ - **Private hosted zone:** `examplecorp.com`
+ - **DNS record:** `github.examplecorp.com`
+
+### Accepting the connection request
+
+When you have been notified that the resources are provisioned within the dbt Cloud environment, you must accept the endpoint connection (unless the VPC Endpoint Service is set to auto-accept connection requests). Requests can be accepted through the AWS console, as seen below, or through the AWS CLI.
+
+
+
+Once you accept the endpoint connection request, you can use the PrivateLink endpoint in dbt Cloud.
+
+## Configure in dbt Cloud
+
+Once dbt confirms that the PrivateLink integration is complete, you can use it in a new or existing git configuration.
+1. Select **PrivateLink Endpoint** as the connection type, and your configured integrations will appear in the dropdown menu.
+2. Select the configured endpoint from the dropdown list.
+3. Click **Save**.
+
+
+
+
\ No newline at end of file
diff --git a/website/docs/docs/collaborate/explore-projects.md b/website/docs/docs/collaborate/explore-projects.md
index 05326016fab..78fe6f45cc7 100644
--- a/website/docs/docs/collaborate/explore-projects.md
+++ b/website/docs/docs/collaborate/explore-projects.md
@@ -2,7 +2,7 @@
title: "Explore your dbt projects"
sidebar_label: "Explore dbt projects"
description: "Learn about dbt Explorer and how to interact with it to understand, improve, and leverage your data pipelines."
-pagination_next: "docs/collaborate/explore-multiple-projects"
+pagination_next: "docs/collaborate/model-performance"
pagination_prev: null
---
@@ -36,7 +36,7 @@ For a richer experience with dbt Explorer, you must:
- Run [dbt source freshness](/reference/commands/source#dbt-source-freshness) within a job in the environment to view source freshness data.
- Run [dbt snapshot](/reference/commands/snapshot) or [dbt build](/reference/commands/build) within a job in the environment to view snapshot details.
-Richer and more timely metadata will become available as dbt, the Discovery API, and the underlying dbt Cloud platform evolves.
+Richer and more timely metadata will become available as dbt Core, the Discovery API, and the underlying dbt Cloud platform evolve.
## Explore your project's lineage graph {#project-lineage}
@@ -46,6 +46,8 @@ If you don't see the project lineage graph immediately, click **Render Lineage**
The nodes in the lineage graph represent the project’s resources and the edges represent the relationships between the nodes. Nodes are color-coded and include iconography according to their resource type.
+By default, dbt Explorer shows the project's [applied state](/docs/dbt-cloud-apis/project-state#definition-logical-vs-applied-state-of-dbt-nodes) lineage. That is, it shows models that have been successfully built and are available to query, not just the models defined in the project.
+
To explore the lineage graphs of tests and macros, view [their resource details pages](#view-resource-details). By default, dbt Explorer excludes these resources from the full lineage graph unless a search query returns them as results.
To interact with the full lineage graph, you can:
diff --git a/website/docs/docs/collaborate/model-performance.md b/website/docs/docs/collaborate/model-performance.md
new file mode 100644
index 00000000000..7ef675b4e1e
--- /dev/null
+++ b/website/docs/docs/collaborate/model-performance.md
@@ -0,0 +1,41 @@
+---
+title: "Model performance"
+sidebar_label: "Model performance"
+description: "Learn about model performance in dbt Explorer."
+---
+
+dbt Explorer provides metadata on dbt Cloud runs for in-depth model performance and quality analysis. This feature helps data teams reduce infrastructure costs and save time by highlighting where to fine-tune projects and deployments, such as model refactoring or job configuration adjustments.
+
+
+
+:::tip Beta
+
+The model performance beta feature is now available in dbt Explorer! Check it out!
+:::
+
+## The Performance overview page
+
+You can pinpoint areas for performance enhancement by using the Performance overview page. This page presents a comprehensive analysis across all project models and displays the longest-running models, those most frequently executed, and the ones with the highest failure rates during runs and tests. You can segment the data by environment and job type to gain insights into:
+
+- Most executed models (total count).
+- Models with the longest execution time (average duration).
+- Models with the most failures, detailing run failures (percentage and count) and test failures (percentage and count).
+
+Each data point links to individual models in Explorer.
+
+
+
+You can view historical metadata for up to the past three months. Select the time horizon using the filter, which defaults to a two-week lookback.
+
+
+
+## The Model performance tab
+
+You can view trends in execution times, counts, and failures by using the Model performance tab for historical performance analysis. Daily execution data includes:
+
+- Average model execution time.
+- Model execution counts, including failures/errors (total sum).
+
+Clicking on a data point reveals a table listing all job runs for that day, with each row providing a direct link to the details of a specific run.
+
+
\ No newline at end of file
diff --git a/website/docs/docs/collaborate/project-recommendations.md b/website/docs/docs/collaborate/project-recommendations.md
new file mode 100644
index 00000000000..e6263a875fc
--- /dev/null
+++ b/website/docs/docs/collaborate/project-recommendations.md
@@ -0,0 +1,50 @@
+---
+title: "Project recommendations"
+sidebar_label: "Project recommendations"
+description: "dbt Explorer provides recommendations that you can take to improve the quality of your dbt project."
+---
+
+:::tip Beta
+
+The project recommendations beta feature is now available in dbt Explorer! Check it out!
+
+:::
+
+dbt Explorer provides recommendations about your project from the `dbt_project_evaluator` [package](https://hub.getdbt.com/dbt-labs/dbt_project_evaluator/latest/) using metadata from the Discovery API.
+
+Explorer also offers a global view, showing all the recommendations across the project for easy sorting and summarizing.
+
+These recommendations provide insight into how you can build a better-documented, better-tested, and better-constructed project, leading to less confusion and more trust.
+
+The Recommendations overview page includes two top-level metrics measuring the test and documentation coverage of the models in your project.
+
+- **Model test coverage** — The percent of models in your project (models not from a package or imported via dbt Mesh) with at least one dbt test configured on them.
+- **Model documentation coverage** — The percent of models in your project (models not from a package or imported via dbt Mesh) with a description.
+
+
+
+## List of rules
+
+| Category | Name | Description | Package Docs Link |
+| --- | --- | --- | --- |
+| Modeling | Direct Join to Source | Model that joins both a model and source, indicating a missing staging model | [GitHub](https://dbt-labs.github.io/dbt-project-evaluator/0.8/rules/modeling/#direct-join-to-source) |
+| Modeling | Duplicate Sources | More than one source node corresponds to the same data warehouse relation | [GitHub](https://dbt-labs.github.io/dbt-project-evaluator/0.8/rules/modeling/#duplicate-sources) |
+| Modeling | Multiple Sources Joined | Models with more than one source parent, indicating lack of staging models | [GitHub](https://dbt-labs.github.io/dbt-project-evaluator/0.8/rules/modeling/#multiple-sources-joined) |
+| Modeling | Root Model | Models with no parents, indicating potential hardcoded references and need for sources | [GitHub](https://dbt-labs.github.io/dbt-project-evaluator/0.8/rules/modeling/#root-models) |
+| Modeling | Source Fanout | Sources with more than one model child, indicating a need for staging models | [GitHub](https://dbt-labs.github.io/dbt-project-evaluator/0.8/rules/modeling/#source-fanout) |
+| Modeling | Unused Source | Sources that are not referenced by any resource | [GitHub](https://dbt-labs.github.io/dbt-project-evaluator/0.8/rules/modeling/#unused-sources) |
+| Performance | Exposure Dependent on View | Exposures with at least one model parent materialized as a view, indicating potential query performance issues | [GitHub](https://dbt-labs.github.io/dbt-project-evaluator/0.8/rules/performance/#exposure-parents-materializations) |
+| Testing | Missing Primary Key Test | Models with insufficient testing on the grain of the model. | [GitHub](https://dbt-labs.github.io/dbt-project-evaluator/0.8/rules/testing/#missing-primary-key-tests) |
+| Documentation | Undocumented Models | Models without a model-level description | [GitHub](https://dbt-labs.github.io/dbt-project-evaluator/0.8/rules/documentation/#undocumented-models) |
+| Documentation | Undocumented Source | Sources (collections of source tables) without descriptions | [GitHub](https://dbt-labs.github.io/dbt-project-evaluator/0.8/rules/documentation/#undocumented-sources) |
+| Documentation | Undocumented Source Tables | Source tables without descriptions | [GitHub](https://dbt-labs.github.io/dbt-project-evaluator/0.8/rules/documentation/#undocumented-source-tables) |
+| Governance | Public Model Missing Contract | Models with public access that do not have a model contract to ensure the data types | [GitHub](https://dbt-labs.github.io/dbt-project-evaluator/0.8/rules/governance/#public-models-without-contracts) |
+
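+For example, the **Missing Primary Key Test** recommendation is typically resolved by adding `unique` and `not_null` tests to the column that defines the model's grain (model and column names below are illustrative):
+
+```yaml
+models:
+  - name: orders
+    columns:
+      - name: order_id
+        tests:
+          - unique
+          - not_null
+```
+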
+
+## The Recommendations tab
+
+Models, sources, and exposures each have a Recommendations tab on their resource details page, listing the specific recommendations that correspond to that resource:
+
+
+
+
diff --git a/website/docs/docs/community-adapters.md b/website/docs/docs/community-adapters.md
index 444ea0e04b4..d1e63f03128 100644
--- a/website/docs/docs/community-adapters.md
+++ b/website/docs/docs/community-adapters.md
@@ -17,4 +17,4 @@ Community adapters are adapter plugins contributed and maintained by members of
| [TiDB](/docs/core/connect-data-platform/tidb-setup) | [Firebolt](/docs/core/connect-data-platform/firebolt-setup) | [MindsDB](/docs/core/connect-data-platform/mindsdb-setup)
| [Vertica](/docs/core/connect-data-platform/vertica-setup) | [AWS Glue](/docs/core/connect-data-platform/glue-setup) | [MySQL](/docs/core/connect-data-platform/mysql-setup) |
| [Upsolver](/docs/core/connect-data-platform/upsolver-setup) | [Databend Cloud](/docs/core/connect-data-platform/databend-setup) | [fal - Python models](/docs/core/connect-data-platform/fal-setup) |
-
+| [TimescaleDB](https://dbt-timescaledb.debruyn.dev/) | | |
diff --git a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md
index 931666dd10c..aba309566f8 100644
--- a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md
+++ b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md
@@ -352,6 +352,8 @@ semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
## FAQs
+
+
- **Why do some dimensions use different syntax, like `metric_time` versus `Dimension('metric_time')`?**
When you select a dimension on its own, such as `metric_time`, you can use the shorthand method, which doesn't need the `Dimension` syntax. However, when you perform operations on the dimension, such as adding granularity, the object syntax `Dimension('metric_time')` is required.
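+For example, applying a monthly grain requires the object syntax (a sketch reusing a metric name from the query examples earlier on this page):
+
+```sql
+select * from {{
+  semantic_layer.query(metrics=['food_order_amount'],
+                       group_by=[Dimension('metric_time').grain('month')])
+}}
+```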
diff --git a/website/docs/docs/dbt-versions/core-upgrade/01-upgrading-to-v1.6.md b/website/docs/docs/dbt-versions/core-upgrade/01-upgrading-to-v1.6.md
index 36146246d3a..33a038baa9b 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/01-upgrading-to-v1.6.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/01-upgrading-to-v1.6.md
@@ -17,7 +17,7 @@ dbt Core v1.6 has three significant areas of focus:
## Resources
- [Changelog](https://github.com/dbt-labs/dbt-core/blob/1.6.latest/CHANGELOG.md)
-- [CLI Installation guide](/docs/core/installation-overview
+- [dbt Core installation guide](/docs/core/installation-overview)
- [Cloud upgrade guide](/docs/dbt-versions/upgrade-core-in-cloud)
- [Release schedule](https://github.com/dbt-labs/dbt-core/issues/7481)
diff --git a/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md b/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md
index 4f4621fa860..be02fedb230 100644
--- a/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md
+++ b/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md
@@ -33,6 +33,7 @@ import AvailIntegrations from '/snippets/_sl-partner-links.md';
- {frontMatter.meta.api_name} to learn how to integrate and query your metrics in downstream tools.
- [dbt Semantic Layer API query syntax](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata)
- [Hex dbt Semantic Layer cells](https://learn.hex.tech/docs/logic-cell-types/transform-cells/dbt-metrics-cells) to set up SQL cells in Hex.
+- [Resolve 'Failed ALPN' error](/faqs/Troubleshooting/sl-alpn-error) when connecting to the dbt Semantic Layer.
diff --git a/website/docs/docs/use-dbt-semantic-layer/gsheets.md b/website/docs/docs/use-dbt-semantic-layer/gsheets.md
index cb9f4014803..d7525fa7b26 100644
--- a/website/docs/docs/use-dbt-semantic-layer/gsheets.md
+++ b/website/docs/docs/use-dbt-semantic-layer/gsheets.md
@@ -17,6 +17,8 @@ The dbt Semantic Layer offers a seamless integration with Google Sheets through
- You have a Google account with access to Google Sheets.
- You can install Google add-ons.
- You have a dbt Cloud Environment ID and a [service token](/docs/dbt-cloud-apis/service-tokens) to authenticate with from a dbt Cloud account.
+- You must have a dbt Cloud Team or Enterprise [account](https://www.getdbt.com/pricing). The integration is suitable for both multi-tenant and single-tenant deployments.
+  - Single-tenant accounts should contact their account representative for the necessary setup and enablement.
## Installing the add-on
@@ -54,10 +56,9 @@ To use the filter functionality, choose the [dimension](docs/build/dimensions) y
- For categorical dimensions, type in the dimension value you want to filter by (no quotes needed) and press enter.
- Continue adding additional filters as needed with AND and OR. If it's a time dimension, choose the operator and select from the calendar.
-
-
**Limited Use Policy Disclosure**
The dbt Semantic Layer for Sheet's use and transfer to any other app of information received from Google APIs will adhere to [Google API Services User Data Policy](https://developers.google.com/terms/api-services-user-data-policy), including the Limited Use requirements.
-
+## FAQs
+
diff --git a/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md b/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
index 84e3227b4e7..62437f4ecd6 100644
--- a/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
@@ -26,7 +26,7 @@ MetricFlow, a powerful component of the dbt Semantic Layer, simplifies the creat
Use this guide to fully experience the power of the universal dbt Semantic Layer. Here are the following steps you'll take:
- [Create a semantic model](#create-a-semantic-model) in dbt Cloud using MetricFlow
-- [Define metrics](#define-metrics) in dbt Cloud using MetricFlow
+- [Define metrics](#define-metrics) in dbt using MetricFlow
- [Test and query metrics](#test-and-query-metrics) with MetricFlow
- [Run a production job](#run-a-production-job) in dbt Cloud
- [Set up dbt Semantic Layer](#setup) in dbt Cloud
@@ -88,20 +88,9 @@ import SlSetUp from '/snippets/_new-sl-setup.md';
If you're encountering some issues when defining your metrics or setting up the dbt Semantic Layer, check out a list of answers to some of the questions or problems you may be experiencing.
-
- How do I migrate from the legacy Semantic Layer to the new one?
-
-
If you're using the legacy Semantic Layer, we highly recommend you upgrade your dbt version to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated migration guide for more info.
-
-
-
-How are you storing my data?
-User data passes through the Semantic Layer on its way back from the warehouse. dbt Labs ensures security by authenticating through the customer's data warehouse. Currently, we don't cache data for the long term, but it might temporarily stay in the system for up to 10 minutes, usually less. In the future, we'll introduce a caching feature that allows us to cache data on our infrastructure for up to 24 hours.
-
-
- Is the dbt Semantic Layer open source?
- The dbt Semantic Layer is proprietary; however, some components of the dbt Semantic Layer are open source, such as dbt-core and MetricFlow.
dbt Cloud Developer or dbt Core users can define metrics in their project, including a local dbt Core project, using the dbt Cloud IDE, dbt Cloud CLI, or dbt Core CLI. However, to experience the universal dbt Semantic Layer and access those metrics using the API or downstream tools, users must be on a dbt Cloud Team or Enterprise plan.
Refer to Billing for more information.
-
+import SlFaqs from '/snippets/_sl-faqs.md';
+
+<SlFaqs />
+
## Next steps
diff --git a/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md b/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md
index 75a853fcbe8..9aea2ab42b0 100644
--- a/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md
+++ b/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md
@@ -14,43 +14,38 @@ The dbt Semantic Layer allows you to define metrics and use various interfaces t
-## dbt Semantic Layer components
+## Components
The dbt Semantic Layer includes the following components:
| Components | Information | dbt Core users | Developer plans | Team plans | Enterprise plans | License |
-| --- | --- | :---: | :---: | :---: | --- |
+| --- | --- | :---: | :---: | :---: | :---: | --- |
| **[MetricFlow](/docs/build/about-metricflow)** | MetricFlow in dbt allows users to centrally define their semantic models and metrics with YAML specifications. | ✅ | ✅ | ✅ | ✅ | BSL package (code is source available) |
-| **MetricFlow Server**| A proprietary server that takes metric requests and generates optimized SQL for the specific data platform. | ❌ | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise)|
-| **Semantic Layer Gateway** | A service that passes queries to the MetricFlow server and executes the SQL generated by MetricFlow against the data platform| ❌ | ❌ |✅ | ✅ | Proprietary, Cloud (Team & Enterprise) |
-| **Semantic Layer APIs** | The interfaces allow users to submit metric queries using GraphQL and JDBC APIs. They also serve as the foundation for building first-class integrations with various tools. | ❌ | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise)|
+| **dbt Semantic interfaces**| A configuration spec for defining metrics, dimensions, how they link to each other, and how to query them. The [dbt-semantic-interfaces](https://github.com/dbt-labs/dbt-semantic-interfaces) package is available under Apache 2.0. | ❌ | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise)|
+| **Service layer** | Coordinates query requests and dispatches the relevant metric query to the target query engine. This is provided through dbt Cloud and is available to all users on dbt version 1.6 or later. The service layer includes a Gateway service for executing SQL against the data platform. | ❌ | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise) |
+| **[Semantic Layer APIs](/docs/dbt-cloud-apis/sl-api-overview)** | The interfaces allow users to submit metric queries using GraphQL and JDBC APIs. They also serve as the foundation for building first-class integrations with various tools. | ❌ | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise)|
-## Related questions
+## Feature comparison
-
- How do I migrate from the legacy Semantic Layer to the new one?
-
-
If you're using the legacy Semantic Layer, we highly recommend you upgrade your dbt version to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated migration guide for more info.
-
-
-
-
-How are you storing my data?
-User data passes through the Semantic Layer on its way back from the warehouse. dbt Labs ensures security by authenticating through the customer's data warehouse. Currently, we don't cache data for the long term, but it might temporarily stay in the system for up to 10 minutes, usually less. In the future, we'll introduce a caching feature that allows us to cache data on our infrastructure for up to 24 hours.
-
-
- Is the dbt Semantic Layer open source?
-The dbt Semantic Layer is proprietary; however, some components of the dbt Semantic Layer are open source, such as dbt-core and MetricFlow.
dbt Cloud Developer or dbt Core users can define metrics in their project, including a local dbt Core project, using the dbt Cloud IDE, dbt Cloud CLI, or dbt Core CLI. However, to experience the universal dbt Semantic Layer and access those metrics using the API or downstream tools, users must be on a dbt Cloud Team or Enterprise plan.
Refer to Billing for more information.
-
-
- Is there a dbt Semantic Layer discussion hub?
-
-
+The following table compares the features available in dbt Cloud with those that are source available in dbt Core:
+
+| Feature | MetricFlow Source available | dbt Semantic Layer with dbt Cloud |
+| ----- | :------: | :------: |
+| Define metrics and semantic models in dbt using the MetricFlow spec | ✅ | ✅ |
+| Generate SQL from a set of config files | ✅ | ✅ |
+| Query metrics and dimensions through the command line interface (CLI) | ✅ | ✅ |
+| Query dimension, entity, and metric metadata through the CLI | ✅ | ✅ |
+| Query metrics and dimensions through semantic APIs (ADBC, GQL) | ❌ | ✅ |
+| Connect to downstream integrations (Tableau, Hex, Mode, Google Sheets, and so on) | ❌ | ✅ |
+| Create and run Exports to save metrics queries as tables in your data platform | ❌ | Coming soon |
+
+## FAQs
+
+import SlFaqs from '/snippets/_sl-faqs.md';
+
+<SlFaqs />
diff --git a/website/docs/docs/use-dbt-semantic-layer/tableau.md b/website/docs/docs/use-dbt-semantic-layer/tableau.md
index 1d283023dda..0f12a75f468 100644
--- a/website/docs/docs/use-dbt-semantic-layer/tableau.md
+++ b/website/docs/docs/use-dbt-semantic-layer/tableau.md
@@ -21,7 +21,8 @@ This integration provides a live connection to the dbt Semantic Layer through Ta
- Note that Tableau Online does not currently support custom connectors natively. If you use Tableau Online, you will only be able to access the connector in Tableau Desktop.
- Log in to Tableau Desktop (with Online or Server credentials) or a license to Tableau Server
- You need your dbt Cloud host, [Environment ID](/docs/use-dbt-semantic-layer/setup-sl#set-up-dbt-semantic-layer) and [service token](/docs/dbt-cloud-apis/service-tokens) to log in. This account should be set up with the dbt Semantic Layer.
-- You must have a dbt Cloud Team or Enterprise [account](https://www.getdbt.com/pricing) and multi-tenant [deployment](/docs/cloud/about-cloud/regions-ip-addresses). (Single-Tenant coming soon)
+- You must have a dbt Cloud Team or Enterprise [account](https://www.getdbt.com/pricing). This applies to both multi-tenant and single-tenant deployments.
+ - Single-tenant accounts should contact their account representative for necessary setup and enablement.
## Installing the Connector
@@ -36,7 +37,7 @@ This integration provides a live connection to the dbt Semantic Layer through Ta
2. Install the [JDBC driver](/docs/dbt-cloud-apis/sl-jdbc) to the folder based on your operating system:
- Windows: `C:\Program Files\Tableau\Drivers`
- - Mac: `~/Library/Tableau/Drivers`
+ - Mac: `~/Library/Tableau/Drivers` or `/Library/JDBC` or `~/Library/JDBC`
- Linux: ` /opt/tableau/tableau_driver/jdbc`
3. Open Tableau Desktop or Tableau Server and find the **dbt Semantic Layer by dbt Labs** connector on the left-hand side. You may need to restart these applications for the connector to be available.
4. Connect with your Host, Environment ID, and Service Token information dbt Cloud provides during [Semantic Layer configuration](/docs/use-dbt-semantic-layer/setup-sl#:~:text=After%20saving%20it%2C%20you%27ll%20be%20provided%20with%20the%20connection%20information%20that%20allows%20you%20to%20connect%20to%20downstream%20tools).
@@ -80,3 +81,5 @@ The following Tableau features aren't supported at this time, however, the dbt S
- Filtering on a Date Part time dimension for a Cumulative metric type
- Changing your date dimension to use "Week Number"
+## FAQs
+
+import SlFaqs from '/snippets/_sl-faqs.md';
+
+<SlFaqs />
diff --git a/website/docs/faqs/API/_category_.yaml b/website/docs/faqs/API/_category_.yaml
new file mode 100644
index 00000000000..fac67328a7a
--- /dev/null
+++ b/website/docs/faqs/API/_category_.yaml
@@ -0,0 +1,10 @@
+# position: 2.5 # float position is supported
+label: 'API'
+collapsible: true # make the category collapsible
+collapsed: true # keep the category collapsed by default
+className: red
+link:
+ type: generated-index
+ title: API FAQs
+customProps:
+ description: Frequently asked questions about dbt APIs
diff --git a/website/docs/faqs/Accounts/_category_.yaml b/website/docs/faqs/Accounts/_category_.yaml
new file mode 100644
index 00000000000..b8ebee5fe2a
--- /dev/null
+++ b/website/docs/faqs/Accounts/_category_.yaml
@@ -0,0 +1,10 @@
+# position: 2.5 # float position is supported
+label: 'Accounts'
+collapsible: true # make the category collapsible
+collapsed: true # keep the category collapsed by default
+className: red
+link:
+ type: generated-index
+ title: Account FAQs
+customProps:
+ description: Frequently asked questions about your account in dbt
diff --git a/website/docs/faqs/Core/_category_.yaml b/website/docs/faqs/Core/_category_.yaml
new file mode 100644
index 00000000000..bac4ad4a655
--- /dev/null
+++ b/website/docs/faqs/Core/_category_.yaml
@@ -0,0 +1,10 @@
+# position: 2.5 # float position is supported
+label: 'dbt Core'
+collapsible: true # make the category collapsible
+collapsed: true # keep the category collapsed by default
+className: red
+link:
+ type: generated-index
+ title: 'dbt Core FAQs'
+customProps:
+ description: Frequently asked questions about dbt Core
diff --git a/website/docs/faqs/Docs/_category_.yaml b/website/docs/faqs/Docs/_category_.yaml
new file mode 100644
index 00000000000..8c7925dcc15
--- /dev/null
+++ b/website/docs/faqs/Docs/_category_.yaml
@@ -0,0 +1,10 @@
+# position: 2.5 # float position is supported
+label: 'dbt Docs'
+collapsible: true # make the category collapsible
+collapsed: true # keep the category collapsed by default
+className: red
+link:
+ type: generated-index
+ title: dbt Docs FAQs
+customProps:
+ description: Frequently asked questions about dbt Docs
diff --git a/website/docs/faqs/Environments/_category_.yaml b/website/docs/faqs/Environments/_category_.yaml
new file mode 100644
index 00000000000..8d252d2c5d3
--- /dev/null
+++ b/website/docs/faqs/Environments/_category_.yaml
@@ -0,0 +1,10 @@
+# position: 2.5 # float position is supported
+label: 'Environments'
+collapsible: true # make the category collapsible
+collapsed: true # keep the category collapsed by default
+className: red
+link:
+ type: generated-index
+ title: 'Environments FAQs'
+customProps:
+ description: Frequently asked questions about Environments in dbt
diff --git a/website/docs/faqs/Git/_category_.yaml b/website/docs/faqs/Git/_category_.yaml
new file mode 100644
index 00000000000..0d9e5ee6e91
--- /dev/null
+++ b/website/docs/faqs/Git/_category_.yaml
@@ -0,0 +1,10 @@
+# position: 2.5 # float position is supported
+label: 'Git'
+collapsible: true # make the category collapsible
+collapsed: true # keep the category collapsed by default
+className: red
+link:
+ type: generated-index
+ title: Git FAQs
+customProps:
+ description: Frequently asked questions about Git and dbt
diff --git a/website/docs/faqs/Jinja/_category_.yaml b/website/docs/faqs/Jinja/_category_.yaml
new file mode 100644
index 00000000000..809ca0bb8eb
--- /dev/null
+++ b/website/docs/faqs/Jinja/_category_.yaml
@@ -0,0 +1,10 @@
+# position: 2.5 # float position is supported
+label: 'Jinja'
+collapsible: true # make the category collapsible
+collapsed: true # keep the category collapsed by default
+className: red
+link:
+ type: generated-index
+ title: Jinja FAQs
+customProps:
+ description: Frequently asked questions about Jinja and dbt
diff --git a/website/docs/faqs/Models/_category_.yaml b/website/docs/faqs/Models/_category_.yaml
new file mode 100644
index 00000000000..7398058db2b
--- /dev/null
+++ b/website/docs/faqs/Models/_category_.yaml
@@ -0,0 +1,10 @@
+# position: 2.5 # float position is supported
+label: 'Models'
+collapsible: true # make the category collapsible
+collapsed: true # keep the category collapsed by default
+className: red
+link:
+ type: generated-index
+ title: Models FAQs
+customProps:
+ description: Frequently asked questions about Models in dbt
diff --git a/website/docs/faqs/Project/_category_.yaml b/website/docs/faqs/Project/_category_.yaml
new file mode 100644
index 00000000000..d2f695773f8
--- /dev/null
+++ b/website/docs/faqs/Project/_category_.yaml
@@ -0,0 +1,10 @@
+# position: 2.5 # float position is supported
+label: 'Projects'
+collapsible: true # make the category collapsible
+collapsed: true # keep the category collapsed by default
+className: red
+link:
+ type: generated-index
+ title: Project FAQs
+customProps:
+ description: Frequently asked questions about projects in dbt
diff --git a/website/docs/faqs/Runs/_category_.yaml b/website/docs/faqs/Runs/_category_.yaml
new file mode 100644
index 00000000000..5867a0d3710
--- /dev/null
+++ b/website/docs/faqs/Runs/_category_.yaml
@@ -0,0 +1,10 @@
+# position: 2.5 # float position is supported
+label: 'Runs'
+collapsible: true # make the category collapsible
+collapsed: true # keep the category collapsed by default
+className: red
+link:
+ type: generated-index
+ title: Runs FAQs
+customProps:
+ description: Frequently asked questions about runs in dbt
diff --git a/website/docs/faqs/Seeds/_category_.yaml b/website/docs/faqs/Seeds/_category_.yaml
new file mode 100644
index 00000000000..fd2f7d3d925
--- /dev/null
+++ b/website/docs/faqs/Seeds/_category_.yaml
@@ -0,0 +1,10 @@
+# position: 2.5 # float position is supported
+label: 'Seeds'
+collapsible: true # make the category collapsible
+collapsed: true # keep the category collapsed by default
+className: red
+link:
+ type: generated-index
+ title: Seeds FAQs
+customProps:
+ description: Frequently asked questions about seeds in dbt
diff --git a/website/docs/faqs/Snapshots/_category_.yaml b/website/docs/faqs/Snapshots/_category_.yaml
new file mode 100644
index 00000000000..743b508fefe
--- /dev/null
+++ b/website/docs/faqs/Snapshots/_category_.yaml
@@ -0,0 +1,10 @@
+# position: 2.5 # float position is supported
+label: 'Snapshots'
+collapsible: true # make the category collapsible
+collapsed: true # keep the category collapsed by default
+className: red
+link:
+ type: generated-index
+ title: Snapshots FAQs
+customProps:
+ description: Frequently asked questions about snapshots in dbt
diff --git a/website/docs/faqs/Tests/_category_.yaml b/website/docs/faqs/Tests/_category_.yaml
new file mode 100644
index 00000000000..754b8ec267b
--- /dev/null
+++ b/website/docs/faqs/Tests/_category_.yaml
@@ -0,0 +1,10 @@
+# position: 2.5 # float position is supported
+label: 'Tests'
+collapsible: true # make the category collapsible
+collapsed: true # keep the category collapsed by default
+className: red
+link:
+ type: generated-index
+ title: Tests FAQs
+customProps:
+ description: Frequently asked questions about tests in dbt
diff --git a/website/docs/faqs/Tests/testing-sources.md b/website/docs/faqs/Tests/testing-sources.md
index 8eb769026e5..5e68b88dcbf 100644
--- a/website/docs/faqs/Tests/testing-sources.md
+++ b/website/docs/faqs/Tests/testing-sources.md
@@ -9,7 +9,7 @@ id: testing-sources
To run tests on all sources, use the following command:
```shell
-$ dbt test --select source:*
+ dbt test --select "source:*"
```
(You can also use the `-s` shorthand here instead of `--select`)
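As a sketch of the selector syntax in this hunk (the `jaffle_shop` source name is illustrative, not part of this change):

```shell
# Equivalent shorthand for testing every source in the project
dbt test -s "source:*"

# Narrow the selection to a single source (hypothetical source name)
dbt test -s "source:jaffle_shop"
```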
diff --git a/website/docs/faqs/Troubleshooting/_category_.yaml b/website/docs/faqs/Troubleshooting/_category_.yaml
new file mode 100644
index 00000000000..14c4b49044d
--- /dev/null
+++ b/website/docs/faqs/Troubleshooting/_category_.yaml
@@ -0,0 +1,10 @@
+# position: 2.5 # float position is supported
+label: 'Troubleshooting'
+collapsible: true # make the category collapsible
+collapsed: true # keep the category collapsed by default
+className: red
+link:
+ type: generated-index
+ title: Troubleshooting FAQs
+customProps:
+ description: Frequently asked questions about troubleshooting dbt
diff --git a/website/docs/faqs/Troubleshooting/sl-alpn-error.md b/website/docs/faqs/Troubleshooting/sl-alpn-error.md
new file mode 100644
index 00000000000..f588d690fac
--- /dev/null
+++ b/website/docs/faqs/Troubleshooting/sl-alpn-error.md
@@ -0,0 +1,14 @@
+---
+title: I'm receiving a `Failed ALPN` error when trying to connect to the dbt Semantic Layer.
+description: "To resolve the 'Failed ALPN' error in the dbt Semantic Layer, create an SSL interception exception for the dbt Cloud domain."
+sidebar_label: 'Use SSL exception to resolve `Failed ALPN` error'
+---
+
+If you're receiving a `Failed ALPN` error when trying to connect to the dbt Semantic Layer from one of the various [data integration tools](/docs/use-dbt-semantic-layer/avail-sl-integrations) (such as Tableau, DBeaver, Datagrip, ADBC, or JDBC), it typically happens when you're connecting from a computer behind a corporate VPN or proxy (like Zscaler or Check Point).
+
+The root cause is typically the proxy interfering with the TLS handshake as the dbt Semantic Layer uses gRPC/HTTP2 for connectivity. To resolve this:
+
+- If your proxy supports gRPC/HTTP2 but isn't configured to allow ALPN, adjust its settings to allow ALPN, or create an exception for the dbt Cloud domain.
+- If your proxy doesn't support gRPC/HTTP2, add an SSL interception exception for the dbt Cloud domain in your proxy settings.
+
+This should help you successfully establish the connection without the `Failed ALPN` error.
diff --git a/website/docs/faqs/Warehouse/_category_.yaml b/website/docs/faqs/Warehouse/_category_.yaml
new file mode 100644
index 00000000000..4de6e2e7d5e
--- /dev/null
+++ b/website/docs/faqs/Warehouse/_category_.yaml
@@ -0,0 +1,10 @@
+# position: 2.5 # float position is supported
+label: 'Warehouse'
+collapsible: true # make the category collapsible
+collapsed: true # keep the category collapsed by default
+className: red
+link:
+ type: generated-index
+ title: Warehouse FAQs
+customProps:
+ description: Frequently asked questions about warehouses and dbt
diff --git a/website/docs/guides/create-new-materializations.md b/website/docs/guides/create-new-materializations.md
index 1ad7d202de6..af2732c0c39 100644
--- a/website/docs/guides/create-new-materializations.md
+++ b/website/docs/guides/create-new-materializations.md
@@ -7,7 +7,6 @@ hoverSnippet: Learn how to create your own materializations.
# time_to_complete: '30 minutes' commenting out until we test
icon: 'guides'
hide_table_of_contents: true
-tags: ['dbt Core']
level: 'Advanced'
recently_updated: true
---
diff --git a/website/docs/guides/sl-migration.md b/website/docs/guides/sl-migration.md
index c3cca81f68e..8ede40a6a2d 100644
--- a/website/docs/guides/sl-migration.md
+++ b/website/docs/guides/sl-migration.md
@@ -91,13 +91,11 @@ At this point, both the new semantic layer and the old semantic layer will be ru
Now that your Semantic Layer is set up, you will need to update any downstream integrations that used the legacy Semantic Layer.
-### Migration guide for Hex
+### Migration guide for Hex
-To learn more about integrating with Hex, check out their [documentation](https://learn.hex.tech/docs/connect-to-data/data-connections/dbt-integration#dbt-semantic-layer-integration) for more info. Additionally, refer to [dbt Semantic Layer cells](https://learn.hex.tech/docs/logic-cell-types/transform-cells/dbt-metrics-cells) to set up SQL cells in Hex.
+To learn more about integrating with Hex, check out their [documentation](https://learn.hex.tech/docs/connect-to-data/data-connections/dbt-integration#dbt-semantic-layer-integration) for more info. Additionally, refer to [dbt Semantic Layer cells](https://learn.hex.tech/docs/logic-cell-types/transform-cells/dbt-metrics-cells) to set up SQL cells in Hex.
-1. Set up a new connection for the Semantic Layer for your account. Something to note is that your old connection will still work. The following Loom video guides you in setting up your Semantic Layer with Hex:
-
-
+1. Set up a new connection for the dbt Semantic Layer for your account. Something to note is that your legacy connection will still work.
2. Re-create the dashboards or reports that use the legacy dbt Semantic Layer.
diff --git a/website/sidebars.js b/website/sidebars.js
index 473dfe85e04..598fffc7f0d 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -134,6 +134,7 @@ const sidebarSettings = {
"docs/cloud/secure/databricks-privatelink",
"docs/cloud/secure/redshift-privatelink",
"docs/cloud/secure/postgres-privatelink",
+ "docs/cloud/secure/vcs-privatelink",
"docs/cloud/secure/ip-restrictions",
],
}, // PrivateLink
@@ -423,6 +424,8 @@ const sidebarSettings = {
link: { type: "doc", id: "docs/collaborate/explore-projects" },
items: [
"docs/collaborate/explore-projects",
+ "docs/collaborate/model-performance",
+ "docs/collaborate/project-recommendations",
"docs/collaborate/explore-multiple-projects",
],
},
diff --git a/website/snippets/_cloud-environments-info.md b/website/snippets/_cloud-environments-info.md
index 4e1cba64e00..6e096b83750 100644
--- a/website/snippets/_cloud-environments-info.md
+++ b/website/snippets/_cloud-environments-info.md
@@ -62,7 +62,7 @@ By default, all environments will use the default branch in your repository (usu
For more info, check out this [FAQ page on this topic](/faqs/Environments/custom-branch-settings)!
-### Extended attributes
+### Extended attributes
:::note
Extended attributes are retrieved and applied only at runtime when `profiles.yml` is requested for a specific Cloud run. Extended attributes are currently _not_ taken into consideration for Cloud-specific features such as PrivateLink or SSH Tunneling that do not rely on `profiles.yml` values.
diff --git a/website/snippets/_new-sl-setup.md b/website/snippets/_new-sl-setup.md
index 3cb6e09eb4c..18e75c3278d 100644
--- a/website/snippets/_new-sl-setup.md
+++ b/website/snippets/_new-sl-setup.md
@@ -1,6 +1,7 @@
You can set up the dbt Semantic Layer in dbt Cloud at the environment and project level. Before you begin:
-- You must have a dbt Cloud Team or Enterprise [multi-tenant](/docs/cloud/about-cloud/regions-ip-addresses) deployment. Single-tenant coming soon.
+- You must have a dbt Cloud Team or Enterprise account. This applies to both multi-tenant and single-tenant deployments.
+ - Single-tenant accounts should contact their account representative for necessary setup and enablement.
- You must be part of the Owner group, and have the correct [license](/docs/cloud/manage-access/seats-and-users) and [permissions](/docs/cloud/manage-access/self-service-permissions) to configure the Semantic Layer:
* Enterprise plan — Developer license with Account Admin permissions. Or Owner with a Developer license, assigned Project Creator, Database Admin, or Admin permissions.
* Team plan — Owner with a Developer license.
diff --git a/website/snippets/_sl-connect-and-query-api.md b/website/snippets/_sl-connect-and-query-api.md
index 429f41c3bf6..f7f1d2add24 100644
--- a/website/snippets/_sl-connect-and-query-api.md
+++ b/website/snippets/_sl-connect-and-query-api.md
@@ -1,10 +1,8 @@
You can query your metrics in a JDBC-enabled tool or use existing first-class integrations with the dbt Semantic Layer.
-You must have a dbt Cloud Team or Enterprise [multi-tenant](/docs/cloud/about-cloud/regions-ip-addresses) deployment. Single-tenant coming soon.
-
+- You must have a dbt Cloud Team or Enterprise account. This applies to both multi-tenant and single-tenant deployments.
+ - Single-tenant accounts should contact their account representative for necessary setup and enablement.
- To learn how to use the JDBC or GraphQL API and what tools you can query it with, refer to [dbt Semantic Layer APIs](/docs/dbt-cloud-apis/sl-api-overview).
-
* To authenticate, you need to [generate a service token](/docs/dbt-cloud-apis/service-tokens) with Semantic Layer Only and Metadata Only permissions.
* Refer to the [SQL query syntax](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata) to query metrics using the API.
-
- To learn more about the sophisticated integrations that connect to the dbt Semantic Layer, refer to [Available integrations](/docs/use-dbt-semantic-layer/avail-sl-integrations) for more info.
diff --git a/website/snippets/_sl-faqs.md b/website/snippets/_sl-faqs.md
new file mode 100644
index 00000000000..5bc556ae00a
--- /dev/null
+++ b/website/snippets/_sl-faqs.md
@@ -0,0 +1,28 @@
+- **Is the dbt Semantic Layer open source?**
+ - The dbt Semantic Layer is proprietary; however, some components of the dbt Semantic Layer are open source, such as dbt-core and MetricFlow.
+
+ dbt Cloud Developer or dbt Core users can define metrics in their project, including a local dbt Core project, using the dbt Cloud IDE, dbt Cloud CLI, or dbt Core CLI. However, to experience the universal dbt Semantic Layer and access those metrics using the API or downstream tools, users must be on a dbt Cloud [Team or Enterprise](https://www.getdbt.com/pricing/) plan.
+
+ Refer to [Billing](https://docs.getdbt.com/docs/cloud/billing) for more information.
+
+- **How can open-source users use the dbt Semantic Layer?**
+  - The dbt Semantic Layer requires the dbt Cloud-provided service for coordinating query requests. Open source users who don't use dbt Cloud can currently work around the lack of a service layer by running `mf query --explain` in the command line. This command generates SQL code, which they can then use in their current systems for running and managing queries.
+
+    As we refine MetricFlow's API layers, some users may find it easier to set up their own custom service layers for managing query requests. This is not currently recommended, as the API boundaries around MetricFlow are not sufficiently well-defined for broad-based community use.
+
+- **Can I reference MetricFlow queries inside dbt models?**
+  - dbt relies on Jinja macros to compile SQL, while MetricFlow is Python-based and renders SQL directly, targeting a specific dialect. MetricFlow doesn't support pass-through rendering of Jinja macros, so we can't easily reference MetricFlow queries inside of dbt models.
+
+    Beyond the technical challenges that could be overcome, we see metrics as the leaf nodes of your DAG and a place for users to consume metrics. If you need to do additional transformation on top of a metric, this is usually a sign that there is more modeling to be done.
+
+- **Can I create tables in my data platform using MetricFlow?**
+ - You can use the upcoming feature, Exports, which will allow you to create a [pre-defined](/docs/build/saved-queries) MetricFlow query as a table in your data platform. This feature will be available to dbt Cloud customers only. This is because MetricFlow is primarily for query rendering while dispatching the relevant query and performing any DDL is the domain of the service layer on top of MetricFlow.
+
+- **How do I migrate from the legacy Semantic Layer to the new one?**
+ - If you're using the legacy Semantic Layer, we highly recommend you [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud) to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated [migration guide](/guides/sl-migration) for more info.
+
+- **How are you storing my data?**
+ - User data passes through the Semantic Layer on its way back from the warehouse. dbt Labs ensures security by authenticating through the customer's data warehouse. Currently, we don't cache data for the long term, but it might temporarily stay in the system for up to 10 minutes, usually less. In the future, we'll introduce a caching feature that allows us to cache data on our infrastructure for up to 24 hours.
+
+- **Is there a dbt Semantic Layer discussion hub?**
+ - Yes absolutely! Join the [dbt Slack community](https://getdbt.slack.com) and [#dbt-cloud-semantic-layer slack channel](https://getdbt.slack.com/archives/C046L0VTVR6) for all things related to the dbt Semantic Layer.
diff --git a/website/snippets/_sl-plan-info.md b/website/snippets/_sl-plan-info.md
index 083ab2209bc..fe4e6024226 100644
--- a/website/snippets/_sl-plan-info.md
+++ b/website/snippets/_sl-plan-info.md
@@ -1,2 +1,2 @@
-To define and query metrics with the {props.product}, you must be on a {props.plan} multi-tenant plan .
+To define and query metrics with the {props.product}, you must be on a {props.plan} account. This applies to both multi-tenant and single-tenant accounts. Note: Single-tenant accounts should contact their account representative for necessary setup and enablement.
diff --git a/website/snippets/_v2-sl-prerequisites.md b/website/snippets/_v2-sl-prerequisites.md
index c80db4d1c8f..6a9babcf0e0 100644
--- a/website/snippets/_v2-sl-prerequisites.md
+++ b/website/snippets/_v2-sl-prerequisites.md
@@ -1,15 +1,16 @@
-- Have a dbt Cloud Team or Enterprise [multi-tenant](/docs/cloud/about-cloud/regions-ip-addresses) deployment. Single-tenant coming soon.
-- Have both your production and development environments running dbt version 1.6 or higher. Refer to [upgrade in dbt Cloud](/docs/dbt-versions/upgrade-core-in-cloud) for more info.
+- Have a dbt Cloud Team or Enterprise account. This applies to both multi-tenant and single-tenant deployments.
+ - Note: Single-tenant accounts should contact their account representative for necessary setup and enablement.
+- Have both your production and development environments running [dbt version 1.6 or higher](/docs/dbt-versions/upgrade-core-in-cloud).
- Use Snowflake, BigQuery, Databricks, or Redshift.
- Create a successful run in the environment where you configure the Semantic Layer.
- **Note:** Semantic Layer currently supports the Deployment environment for querying. (_development querying experience coming soon_)
- Set up the [Semantic Layer API](/docs/dbt-cloud-apis/sl-api-overview) in the integrated tool to import metric definitions.
- - To access the API and query metrics in downstream tools, you must have a dbt Cloud [Team or Enterprise](https://www.getdbt.com/pricing/) account. dbt Core or Developer accounts can define metrics but won't be able to dynamically query them.
+ - dbt Core or Developer accounts can define metrics but won't be able to dynamically query them.
- Understand [MetricFlow's](/docs/build/about-metricflow) key concepts, which powers the latest dbt Semantic Layer.
-- Note that SSH tunneling for [Postgres and Redshift](/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb) connections, [PrivateLink](/docs/cloud/secure/about-privatelink), and [Single sign-on (SSO)](/docs/cloud/manage-access/sso-overview) isn't supported yet.
+- Note that SSH tunneling for [Postgres and Redshift](/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb) connections, [PrivateLink](/docs/cloud/secure/about-privatelink), and [Single sign-on (SSO)](/docs/cloud/manage-access/sso-overview) aren't supported in the dbt Semantic Layer yet.
diff --git a/website/src/components/communitySpotlightCard/index.js b/website/src/components/communitySpotlightCard/index.js
index 08707a93dd4..122edee8f06 100644
--- a/website/src/components/communitySpotlightCard/index.js
+++ b/website/src/components/communitySpotlightCard/index.js
@@ -1,5 +1,6 @@
import React from 'react'
import Link from '@docusaurus/Link';
+import Head from "@docusaurus/Head";
import styles from './styles.module.css';
import imageCacheWrapper from '../../../functions/image-cache-wrapper';
@@ -47,24 +48,45 @@ function CommunitySpotlightCard({ frontMatter, isSpotlightMember = false }) {
jobTitle,
companyName,
organization,
- socialLinks
+ socialLinks,
+ communityAward
} = frontMatter
- return (
-
+ // Get meta description text
+ const metaDescription = stripHtml(description)
+
+ return (
+
+ {isSpotlightMember && metaDescription ? (
+        <Head>
+          <meta name="description" content={metaDescription} />
+        </Head>
+
+ ) : null}
+ {communityAward ? (
+
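The `stripHtml` helper called in this hunk isn't shown in the diff. A minimal sketch, assuming it only needs to drop tags and collapse whitespace so the result is safe for a meta description, might look like:

```javascript
// Hypothetical sketch of a stripHtml helper: drops HTML tags and
// collapses whitespace; the real helper may differ.
function stripHtml(html) {
  if (!html) return "";
  return html
    .replace(/<[^>]*>/g, " ") // replace each tag with a space
    .replace(/\s+/g, " ")     // collapse runs of whitespace
    .trim();
}

console.log(stripHtml("<p>Hello <b>world</b></p>")); // → "Hello world"
```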