diff --git a/website/docs/docs/build/sl-getting-started.md b/website/docs/docs/build/sl-getting-started.md index d5a59c33ec2..4274fccf509 100644 --- a/website/docs/docs/build/sl-getting-started.md +++ b/website/docs/docs/build/sl-getting-started.md @@ -74,21 +74,9 @@ import SlSetUp from '/snippets/_new-sl-setup.md'; If you're encountering some issues when defining your metrics or setting up the dbt Semantic Layer, check out a list of answers to some of the questions or problems you may be experiencing. -
- How do I migrate from the legacy Semantic Layer to the new one? -
-
If you're using the legacy Semantic Layer, we highly recommend you upgrade your dbt version to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated migration guide for more info.
-
-
-
-How are you storing my data? -User data passes through the Semantic Layer on its way back from the warehouse. dbt Labs ensures security by authenticating through the customer's data warehouse. Currently, we don't cache data for the long term, but it might temporarily stay in the system for up to 10 minutes, usually less. In the future, we'll introduce a caching feature that allows us to cache data on our infrastructure for up to 24 hours. -
- -
-Is the dbt Semantic Layer open source? -The dbt Semantic Layer is proprietary; however, some components of the dbt Semantic Layer are open source, such as dbt-core and MetricFlow.

dbt Cloud Developer or dbt Core users can define metrics in their project, including a local dbt Core project, using the dbt Cloud IDE, dbt Cloud CLI, or dbt Core CLI. However, to experience the universal dbt Semantic Layer and access those metrics using the API or downstream tools, users must be on a dbt Cloud Team or Enterprise plan.

Refer to Billing for more information. -
+import SlFaqs from '/snippets/_sl-faqs.md'; + + ## Next steps diff --git a/website/docs/docs/cloud/billing.md b/website/docs/docs/cloud/billing.md index 31b7689ceb9..b677f06ccfe 100644 --- a/website/docs/docs/cloud/billing.md +++ b/website/docs/docs/cloud/billing.md @@ -126,6 +126,8 @@ All included successful models built numbers above reflect our most current pric As an Enterprise customer, you pay annually via invoice, monthly in arrears for additional usage (if applicable), and may benefit from negotiated usage rates. Please refer to your order form or contract for your specific pricing details, or [contact the account team](https://www.getdbt.com/contact-demo) with any questions. +Enterprise plan billing information is not available in the dbt Cloud UI. Changes are handled through your dbt Labs Solutions Architect or account team manager. + ### Legacy plans Customers who purchased the dbt Cloud Team plan before August 11, 2023, remain on a legacy pricing plan as long as your account is in good standing. The legacy pricing plan is based on seats and includes unlimited models, subject to reasonable use. diff --git a/website/docs/docs/cloud/secure/vcs-privatelink.md b/website/docs/docs/cloud/secure/vcs-privatelink.md new file mode 100644 index 00000000000..13bb97dd6cd --- /dev/null +++ b/website/docs/docs/cloud/secure/vcs-privatelink.md @@ -0,0 +1,82 @@ +--- +title: "Configuring PrivateLink for self-hosted cloud version control systems (VCS)" +id: vcs-privatelink +description: "Setting up a PrivateLink connection between dbt Cloud and an organization’s cloud hosted git server" +sidebar_label: "PrivateLink for VCS" +--- + +import SetUpPages from '/snippets/_available-tiers-privatelink.md'; + + + +AWS PrivateLink provides private connectivity from dbt Cloud to your self-hosted cloud version control system (VCS) service by routing requests through your virtual private cloud (VPC). 
This type of connection does not require you to publicly expose an endpoint to your VCS repositories, nor do requests to the service traverse the public internet, ensuring the most secure connection possible. AWS recommends PrivateLink connectivity as part of its [Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html) and details this particular pattern in the **Shared Services** section of the [AWS PrivateLink whitepaper](https://docs.aws.amazon.com/pdfs/whitepapers/latest/aws-privatelink/aws-privatelink.pdf). + +You will learn, at a high level, the resources necessary to implement this solution. Cloud environments and provisioning processes vary greatly, so information from this guide may need to be adapted to fit your requirements. + +## PrivateLink connection overview + + + +### Required resources for creating a connection + +Creating an Interface VPC PrivateLink connection requires creating multiple AWS resources in your AWS account(s) and the private network containing the self-hosted VCS instance. You are responsible for provisioning and maintaining these resources. Once provisioned, connection information and permissions are shared with dbt Labs to complete the connection, allowing for direct VPC-to-VPC private connectivity. + +This approach is distinct from, and does not require you to implement, VPC peering between your AWS account(s) and dbt Cloud. + +You need these resources to create a PrivateLink connection, which allows the dbt Cloud application to connect to your self-hosted cloud VCS. These resources can be created via the AWS Console, AWS CLI, or Infrastructure-as-Code such as [Terraform](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) or [AWS CloudFormation](https://aws.amazon.com/cloudformation/). 
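The resources described in the list below can also be provisioned from the AWS CLI. The following is a rough, hedged sketch only, not a definitive implementation: every name, VPC/subnet ID, and ARN is a placeholder you would substitute with your own values, and the flags should be adapted to your environment:

```shell
# Hedged sketch: all names, IDs, and ARNs below are placeholders.

# Target group for HTTPS traffic to the VCS instance(s)
aws elbv2 create-target-group \
  --name vcs-privatelink-https \
  --protocol TCP --port 443 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type instance

# Internal Network Load Balancer spanning at least two Availability Zones
aws elbv2 create-load-balancer \
  --name vcs-privatelink-nlb --type network --scheme internal \
  --subnets subnet-aaaa1111 subnet-bbbb2222

# Listener forwarding TCP/443 to the target group
aws elbv2 create-listener \
  --load-balancer-arn <nlb-arn> --protocol TCP --port 443 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>

# VPC Endpoint Service fronted by the NLB, with acceptance required
aws ec2 create-vpc-endpoint-service-configuration \
  --network-load-balancer-arns <nlb-arn> --acceptance-required

# Allow the dbt Labs principal to connect
# (get the account ID from your dbt Labs account representative)
aws ec2 modify-vpc-endpoint-service-permissions \
  --service-id <vpce-svc-id> \
  --add-allowed-principals arn:aws:iam::<dbt-labs-account-id>:root
```

Repeat the target group and listener steps for TCP/22 if you also connect to the VCS over SSH.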
+ +- **Target Group(s)** - A [Target Group](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html) is attached to a [Listener](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-listeners.html) on the NLB and is responsible for routing incoming requests to healthy targets in the group. If connecting to the VCS system over both SSH and HTTPS, two **Target Groups** will need to be created. + - **Target Type (choose most applicable):** + - **Instance/ASG:** Select existing EC2 instance(s) where the VCS system is running, or [an autoscaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/attach-load-balancer-asg.html) (ASG) to automatically attach any instances launched from that ASG. + - **Application Load Balancer (ALB):** Select an ALB that already has VCS EC2 instances attached (HTTP/S traffic only). + - **IP Addresses:** Select the IP address(es) of the EC2 instances where the VCS system is installed. Keep in mind that the IP of the EC2 instance can change if the instance is relaunched for any reason. + - **Protocol/Port:** Choose one protocol and port pair per Target Group, for example: + - TG1 - SSH: TCP/22 + - TG2 - HTTPS: TCP/443 or TLS if you want to attach a certificate to decrypt TLS connections ([details](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html)). + - **VPC:** Choose the VPC in which the VPC Endpoint Service and NLB will be created. + - **Health checks:** Targets must register as healthy in order for the NLB to forward requests. Configure a health check that’s appropriate for your service and the protocol of the Target Group ([details](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html)). + - **Register targets:** Register the targets (see above) for the VCS service ([details](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-register-targets.html)). 
_It's critical to be sure targets are healthy before attempting connection from dbt Cloud._ +- **Network Load Balancer (NLB)** - Requires creating a Listener that attaches to the newly created Target Group(s) for port `443` and/or `22`, as applicable. + - **Scheme:** Internal + - **IP address type:** IPv4 + - **Network mapping:** Choose the VPC that the VPC Endpoint Service and NLB are being deployed in, and choose subnets from at least two Availability Zones. + - **Listeners:** Create one Listener per Target Group that maps the appropriate incoming port to the corresponding Target Group ([details](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-listeners.html)). +- **Endpoint Service** - The VPC Endpoint Service is what allows for the VPC to VPC connection, routing incoming requests to the configured load balancer. + - **Load balancer type:** Network. + - **Load balancer:** Attach the NLB created in the previous step. + - **Acceptance required (recommended)**: When enabled, requires a new connection request to the VPC Endpoint Service to be accepted by the customer before connectivity is allowed ([details](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#accept-reject-connection-requests)). + + Once these resources have been provisioned, access needs to be granted for the dbt Labs AWS account to create a VPC Endpoint in our VPC. On the newly created VPC Endpoint Service, add a new [Allowed Principal](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#add-remove-permissions) for the appropriate dbt Labs principal: + + - **AWS Account ID:** `arn:aws:iam:::root` (contact your dbt Labs account representative for appropriate account ID). + +### Completing the connection + +To complete the connection, dbt Labs must now provision a VPC Endpoint to connect to your VPC Endpoint Service. 
This requires that you send the following information: + + - VPC Endpoint Service name: + + + + - **DNS configuration:** If the connection to the VCS service requires a custom domain and/or URL for TLS, a private hosted zone can be configured by the dbt Labs Infrastructure team in the dbt Cloud private network. For example: + - **Private hosted zone:** `examplecorp.com` + - **DNS record:** `github.examplecorp.com` + +### Accepting the connection request + +When you have been notified that the resources are provisioned within the dbt Cloud environment, you must accept the endpoint connection (unless the VPC Endpoint Service is set to auto-accept connection requests). Requests can be accepted through the AWS console, as seen below, or through the AWS CLI. + + + +Once you accept the endpoint connection request, you can use the PrivateLink endpoint in dbt Cloud. + +## Configure in dbt Cloud + +Once dbt confirms that the PrivateLink integration is complete, you can use it in a new or existing git configuration. +1. Select **PrivateLink Endpoint** as the connection type, and your configured integrations will appear in the dropdown menu. +2. Select the configured endpoint from the dropdown list. +3. Click **Save**. 
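For reference, the acceptance step described in the previous section can also be performed from the AWS CLI rather than the console. This is a hedged sketch; the service and endpoint IDs are placeholders for your own values:

```shell
# Accept the pending dbt Cloud connection request on your VPC Endpoint Service
aws ec2 accept-vpc-endpoint-connections \
  --service-id vpce-svc-0123456789abcdef0 \
  --vpc-endpoint-ids vpce-0fedcba9876543210
```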
+ + + + \ No newline at end of file diff --git a/website/docs/docs/dbt-versions/core-upgrade/01-upgrading-to-v1.6.md b/website/docs/docs/dbt-versions/core-upgrade/01-upgrading-to-v1.6.md index 36146246d3a..33a038baa9b 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/01-upgrading-to-v1.6.md +++ b/website/docs/docs/dbt-versions/core-upgrade/01-upgrading-to-v1.6.md @@ -17,7 +17,7 @@ dbt Core v1.6 has three significant areas of focus: ## Resources - [Changelog](https://github.com/dbt-labs/dbt-core/blob/1.6.latest/CHANGELOG.md) -- [CLI Installation guide](/docs/core/installation-overview +- [dbt Core installation guide](/docs/core/installation-overview) - [Cloud upgrade guide](/docs/dbt-versions/upgrade-core-in-cloud) - [Release schedule](https://github.com/dbt-labs/dbt-core/issues/7481) diff --git a/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md b/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md index 84e3227b4e7..13a119da9a7 100644 --- a/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md +++ b/website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md @@ -88,20 +88,9 @@ import SlSetUp from '/snippets/_new-sl-setup.md'; If you're encountering some issues when defining your metrics or setting up the dbt Semantic Layer, check out a list of answers to some of the questions or problems you may be experiencing. -
- How do I migrate from the legacy Semantic Layer to the new one? -
-
If you're using the legacy Semantic Layer, we highly recommend you upgrade your dbt version to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated migration guide for more info.
-
-
-
-How are you storing my data? -User data passes through the Semantic Layer on its way back from the warehouse. dbt Labs ensures security by authenticating through the customer's data warehouse. Currently, we don't cache data for the long term, but it might temporarily stay in the system for up to 10 minutes, usually less. In the future, we'll introduce a caching feature that allows us to cache data on our infrastructure for up to 24 hours. -
-
- Is the dbt Semantic Layer open source? - The dbt Semantic Layer is proprietary; however, some components of the dbt Semantic Layer are open source, such as dbt-core and MetricFlow.

dbt Cloud Developer or dbt Core users can define metrics in their project, including a local dbt Core project, using the dbt Cloud IDE, dbt Cloud CLI, or dbt Core CLI. However, to experience the universal dbt Semantic Layer and access those metrics using the API or downstream tools, users must be on a dbt Cloud Team or Enterprise plan.

Refer to Billing for more information. -
+import SlFaqs from '/snippets/_sl-faqs.md'; + + ## Next steps diff --git a/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md b/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md index 94f8fee007f..9aea2ab42b0 100644 --- a/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md +++ b/website/docs/docs/use-dbt-semantic-layer/sl-architecture.md @@ -14,45 +14,38 @@ The dbt Semantic Layer allows you to define metrics and use various interfaces t - - -## dbt Semantic Layer components +## Components The dbt Semantic Layer includes the following components: | Components | Information | dbt Core users | Developer plans | Team plans | Enterprise plans | License | -| --- | --- | :---: | :---: | :---: | --- | +| --- | --- | :---: | :---: | :---: | :---: | --- | | **[MetricFlow](/docs/build/about-metricflow)** | MetricFlow in dbt allows users to centrally define their semantic models and metrics with YAML specifications. | ✅ | ✅ | ✅ | ✅ | BSL package (code is source available) | -| **MetricFlow Server**| A proprietary server that takes metric requests and generates optimized SQL for the specific data platform. | ❌ | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise)| -| **Semantic Layer Gateway** | A service that passes queries to the MetricFlow server and executes the SQL generated by MetricFlow against the data platform|

❌ | ❌ |✅ | ✅ | Proprietary, Cloud (Team & Enterprise) | -| **Semantic Layer APIs** | The interfaces allow users to submit metric queries using GraphQL and JDBC APIs. They also serve as the foundation for building first-class integrations with various tools. | ❌ | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise)| +| **dbt Semantic interfaces**| A configuration spec for defining metrics, dimensions, how they link to each other, and how to query them. The [dbt-semantic-interfaces](https://github.com/dbt-labs/dbt-semantic-interfaces) package is available under Apache 2.0. | ❌ | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise)| +| **Service layer** | Coordinates query requests, dispatching the relevant metric query to the target query engine. This is provided through dbt Cloud and is available to all users on dbt version 1.6 or later. The service layer includes a Gateway service for executing SQL against the data platform. | ❌ | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise) | +| **[Semantic Layer APIs](/docs/dbt-cloud-apis/sl-api-overview)** | The interfaces allow users to submit metric queries using GraphQL and JDBC APIs. They also serve as the foundation for building first-class integrations with various tools. | ❌ | ❌ | ✅ | ✅ | Proprietary, Cloud (Team & Enterprise)| -## Related questions +## Feature comparison -
- How do I migrate from the legacy Semantic Layer to the new one? -
-
If you're using the legacy Semantic Layer, we highly recommend you upgrade your dbt version to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated migration guide for more info.
-
-
- -
-How are you storing my data? -User data passes through the Semantic Layer on its way back from the warehouse. dbt Labs ensures security by authenticating through the customer's data warehouse. Currently, we don't cache data for the long term, but it might temporarily stay in the system for up to 10 minutes, usually less. In the future, we'll introduce a caching feature that allows us to cache data on our infrastructure for up to 24 hours. -
-
- Is the dbt Semantic Layer open source? -The dbt Semantic Layer is proprietary; however, some components of the dbt Semantic Layer are open source, such as dbt-core and MetricFlow.

dbt Cloud Developer or dbt Core users can define metrics in their project, including a local dbt Core project, using the dbt Cloud IDE, dbt Cloud CLI, or dbt Core CLI. However, to experience the universal dbt Semantic Layer and access those metrics using the API or downstream tools, users must be on a dbt Cloud Team or Enterprise plan.

Refer to Billing for more information. -
-
- Is there a dbt Semantic Layer discussion hub? -
-
Yes absolutely! Join the dbt Slack community and #dbt-cloud-semantic-layer slack channel for all things related to the dbt Semantic Layer. -
-
-
+The following table compares the features available in dbt Cloud with those that are source available in dbt Core: + +| Feature | MetricFlow (source available) | dbt Semantic Layer with dbt Cloud | +| ----- | :------: | :------: | +| Define metrics and semantic models in dbt using the MetricFlow spec | ✅ | ✅ | +| Generate SQL from a set of config files | ✅ | ✅ | +| Query metrics and dimensions through the command line interface (CLI) | ✅ | ✅ | +| Query dimension, entity, and metric metadata through the CLI | ✅ | ✅ | +| Query metrics and dimensions through semantic APIs (JDBC, GraphQL) | ❌ | ✅ | +| Connect to downstream integrations (Tableau, Hex, Mode, Google Sheets, and so on) | ❌ | ✅ | +| Create and run Exports to save metrics queries as tables in your data platform | ❌ | Coming soon | + +## FAQs + +import SlFaqs from '/snippets/_sl-faqs.md'; + + diff --git a/website/docs/faqs/Tests/testing-sources.md b/website/docs/faqs/Tests/testing-sources.md index 8eb769026e5..5e68b88dcbf 100644 --- a/website/docs/faqs/Tests/testing-sources.md +++ b/website/docs/faqs/Tests/testing-sources.md @@ -9,7 +9,7 @@ id: testing-sources To run tests on all sources, use the following command: ```shell -$ dbt test --select source:* + dbt test --select "source:*" ``` (You can also use the `-s` shorthand here instead of `--select`) diff --git a/website/sidebars.js b/website/sidebars.js index 473dfe85e04..4653b028ef9 100644 --- a/website/sidebars.js +++ b/website/sidebars.js @@ -134,6 +134,7 @@ const sidebarSettings = { "docs/cloud/secure/databricks-privatelink", "docs/cloud/secure/redshift-privatelink", "docs/cloud/secure/postgres-privatelink", + "docs/cloud/secure/vcs-privatelink", "docs/cloud/secure/ip-restrictions", ], }, // PrivateLink diff --git a/website/snippets/_sl-faqs.md b/website/snippets/_sl-faqs.md new file mode 100644 index 00000000000..5bc556ae00a --- /dev/null +++ b/website/snippets/_sl-faqs.md @@ -0,0 +1,28 @@ +- **Is the dbt Semantic Layer open source?** + - The dbt Semantic Layer is 
proprietary; however, some components of the dbt Semantic Layer are open source, such as dbt-core and MetricFlow. + + dbt Cloud Developer or dbt Core users can define metrics in their project, including a local dbt Core project, using the dbt Cloud IDE, dbt Cloud CLI, or dbt Core CLI. However, to experience the universal dbt Semantic Layer and access those metrics using the API or downstream tools, users must be on a dbt Cloud [Team or Enterprise](https://www.getdbt.com/pricing/) plan. + + Refer to [Billing](https://docs.getdbt.com/docs/cloud/billing) for more information. + +- **How can open-source users use the dbt Semantic Layer?** + - The dbt Semantic Layer requires the use of the dbt Cloud-provided service for coordinating query requests. Open-source users who don’t use dbt Cloud can currently work around the lack of a service layer by running `mf query --explain` in the command line. This command generates SQL code, which they can then use in their current systems for running and managing queries. + + As we refine MetricFlow’s API layers, some users may find it easier to set up their own custom service layers for managing query requests. This is not currently recommended, as the API boundaries around MetricFlow are not sufficiently well-defined for broad-based community use. + +- **Can I reference MetricFlow queries inside dbt models?** + - dbt relies on Jinja macros to compile SQL, while MetricFlow is Python-based and renders SQL directly, targeting a specific dialect. MetricFlow does not support pass-through rendering of Jinja macros, so we can’t easily reference MetricFlow queries inside dbt models. + + Beyond the technical challenges that could be overcome, we see metrics as the leaf nodes of your DAG and a place for users to consume metrics. If you need to do additional transformation on top of a metric, this is usually a sign that there is more modeling to be done. 
+ +- **Can I create tables in my data platform using MetricFlow?** + - You can use the upcoming Exports feature, which will allow you to create a [pre-defined](/docs/build/saved-queries) MetricFlow query as a table in your data platform. This feature will be available to dbt Cloud customers only. This is because MetricFlow is primarily for query rendering, while dispatching the relevant query and performing any DDL is the domain of the service layer on top of MetricFlow. + +- **How do I migrate from the legacy Semantic Layer to the new one?** + - If you're using the legacy Semantic Layer, we highly recommend you [upgrade your dbt version](/docs/dbt-versions/upgrade-core-in-cloud) to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated [migration guide](/guides/sl-migration) for more info. + +- **How are you storing my data?** + - User data passes through the Semantic Layer on its way back from the warehouse. dbt Labs ensures security by authenticating through the customer's data warehouse. Currently, we don't cache data for the long term, but it might temporarily stay in the system for up to 10 minutes, usually less. In the future, we'll introduce a caching feature that allows us to cache data on our infrastructure for up to 24 hours. + +- **Is there a dbt Semantic Layer discussion hub?** + - Yes, absolutely! Join the [dbt Slack community](https://getdbt.slack.com) and the [#dbt-cloud-semantic-layer Slack channel](https://getdbt.slack.com/archives/C046L0VTVR6) for all things related to the dbt Semantic Layer. 
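The `mf query --explain` workaround mentioned in the open-source FAQ above might look like the following sketch. The metric and dimension names here are hypothetical; substitute names defined in your own project:

```shell
# Print the SQL MetricFlow would generate for this query, without executing it
mf query --metrics order_total --group-by metric_time__month --explain
```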
diff --git a/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/accept-request.png b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/accept-request.png new file mode 100644 index 00000000000..18c38ce2552 Binary files /dev/null and b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/accept-request.png differ diff --git a/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/privatelink-vcs-architecture.png b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/privatelink-vcs-architecture.png new file mode 100644 index 00000000000..d4e87172988 Binary files /dev/null and b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/privatelink-vcs-architecture.png differ diff --git a/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/vcs-setup-existing.png b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/vcs-setup-existing.png new file mode 100644 index 00000000000..e1016e6c26c Binary files /dev/null and b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/vcs-setup-existing.png differ diff --git a/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/vcs-setup-new.png b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/vcs-setup-new.png new file mode 100644 index 00000000000..235f2e3b68c Binary files /dev/null and b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/vcs-setup-new.png differ diff --git a/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/vpc-endpoint-service-name.png b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/vpc-endpoint-service-name.png new file mode 100644 index 00000000000..9dc5e070cdd Binary files /dev/null and b/website/static/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/vpc-endpoint-service-name.png differ diff --git a/website/static/img/docs/dbt-cloud/faq-account-settings-enterprise.jpg b/website/static/img/docs/dbt-cloud/faq-account-settings-enterprise.jpg index e88845b2843..10bff624f49 
100644 Binary files a/website/static/img/docs/dbt-cloud/faq-account-settings-enterprise.jpg and b/website/static/img/docs/dbt-cloud/faq-account-settings-enterprise.jpg differ