Merge branch 'current' into patch-4
mirnawong1 authored Dec 4, 2023
2 parents 1d6bf49 + 49de6f2 commit a0f22ea
Showing 14 changed files with 143 additions and 60 deletions.
18 changes: 3 additions & 15 deletions website/docs/docs/build/sl-getting-started.md
@@ -74,21 +74,9 @@ import SlSetUp from '/snippets/_new-sl-setup.md';

If you're encountering issues when defining your metrics or setting up the dbt Semantic Layer, check out this list of answers to common questions and problems.

<details>
<summary>How do I migrate from the legacy Semantic Layer to the new one?</summary>
<div>
<div>If you're using the legacy Semantic Layer, we highly recommend you <a href="https://docs.getdbt.com/docs/dbt-versions/upgrade-core-in-cloud">upgrade your dbt version </a> to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated <a href="https://docs.getdbt.com/guides/sl-migration"> migration guide</a> for more info.</div>
</div>
</details>
<details>
<summary>How are you storing my data?</summary>
User data passes through the Semantic Layer on its way back from the warehouse. dbt Labs ensures security by authenticating through the customer's data warehouse. Currently, we don't cache data for the long term, but it might temporarily stay in the system for up to 10 minutes, usually less. In the future, we'll introduce a caching feature that allows us to cache data on our infrastructure for up to 24 hours.
</details>

<details>
<summary>Is the dbt Semantic Layer open source?</summary>
The dbt Semantic Layer is proprietary; however, some components of the dbt Semantic Layer are open source, such as dbt-core and MetricFlow. <br /><br />dbt Cloud Developer or dbt Core users can define metrics in their project, including a local dbt Core project, using the dbt Cloud IDE, dbt Cloud CLI, or dbt Core CLI. However, to experience the universal dbt Semantic Layer and access those metrics using the API or downstream tools, users must be on a dbt Cloud <a href="https://www.getdbt.com/pricing/">Team or Enterprise</a> plan. <br /><br />Refer to <a href="https://docs.getdbt.com/docs/cloud/billing">Billing</a> for more information.
</details>
import SlFaqs from '/snippets/_sl-faqs.md';

<SlFaqs/>


## Next steps
2 changes: 2 additions & 0 deletions website/docs/docs/cloud/billing.md
@@ -126,6 +126,8 @@ All included successful models built numbers above reflect our most current pric

As an Enterprise customer, you pay annually via invoice, monthly in arrears for additional usage (if applicable), and may benefit from negotiated usage rates. Please refer to your order form or contract for your specific pricing details, or [contact the account team](https://www.getdbt.com/contact-demo) with any questions.

Enterprise plan billing information is not available in the dbt Cloud UI. Changes are handled through your dbt Labs Solutions Architect or account team manager.

### Legacy plans

Customers who purchased the dbt Cloud Team plan before August 11, 2023, remain on a legacy pricing plan as long as their account is in good standing. The legacy pricing plan is based on seats and includes unlimited models, subject to reasonable use.
82 changes: 82 additions & 0 deletions website/docs/docs/cloud/secure/vcs-privatelink.md
@@ -0,0 +1,82 @@
---
title: "Configuring PrivateLink for self-hosted cloud version control systems (VCS)"
id: vcs-privatelink
description: "Setting up a PrivateLink connection between dbt Cloud and an organization’s cloud hosted git server"
sidebar_label: "PrivateLink for VCS"
---

import SetUpPages from '/snippets/_available-tiers-privatelink.md';

<SetUpPages features={'/snippets/_available-tiers-privatelink.md'}/>

AWS PrivateLink provides private connectivity from dbt Cloud to your self-hosted cloud version control system (VCS) service by routing requests through your virtual private cloud (VPC). This type of connection does not require you to publicly expose an endpoint to your VCS repositories or for requests to the service to traverse the public internet, ensuring the most secure connection possible. AWS recommends PrivateLink connectivity as part of its [Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html) and details this particular pattern in the **Shared Services** section of the [AWS PrivateLink whitepaper](https://docs.aws.amazon.com/pdfs/whitepapers/latest/aws-privatelink/aws-privatelink.pdf).

You will learn, at a high level, the resources necessary to implement this solution. Cloud environments and provisioning processes vary greatly, so information from this guide may need to be adapted to fit your requirements.

## PrivateLink connection overview

<Lightbox src="/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/privatelink-vcs-architecture.png" width="80%" title="High level overview of the dbt Cloud and AWS PrivateLink for VCS architecture" />

### Required resources for creating a connection

Creating an Interface VPC PrivateLink connection requires multiple AWS resources in the AWS account(s) and private network that contain the self-hosted VCS instance. You are responsible for provisioning and maintaining these resources. Once provisioned, connection information and permissions are shared with dbt Labs to complete the connection, allowing for direct VPC-to-VPC private connectivity.

This approach is distinct from and does not require you to implement VPC peering between your AWS account(s) and dbt Cloud.

You need these resources to create a PrivateLink connection, which allows the dbt Cloud application to connect to your self-hosted cloud VCS. These resources can be created via the AWS Console, AWS CLI, or Infrastructure-as-Code such as [Terraform](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) or [AWS CloudFormation](https://aws.amazon.com/cloudformation/). An illustrative Terraform sketch follows the list below.

- **Target Group(s)** - A [Target Group](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html) is attached to a [Listener](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-listeners.html) on the NLB and is responsible for routing incoming requests to healthy targets in the group. If connecting to the VCS system over both SSH and HTTPS, two **Target Groups** will need to be created.
- **Target Type (choose most applicable):**
- **Instance/ASG:** Select existing EC2 instance(s) where the VCS system is running, or [an autoscaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/attach-load-balancer-asg.html) (ASG) to automatically attach any instances launched from that ASG.
- **Application Load Balancer (ALB):** Select an ALB that already has VCS EC2 instances attached (HTTP/S traffic only).
- **IP Addresses:** Select the IP address(es) of the EC2 instances where the VCS system is installed. Keep in mind that the IP of the EC2 instance can change if the instance is relaunched for any reason.
- **Protocol/Port:** Choose one protocol and port pair per Target Group, for example:
- TG1 - SSH: TCP/22
- TG2 - HTTPS: TCP/443 or TLS if you want to attach a certificate to decrypt TLS connections ([details](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html)).
- **VPC:** Choose the VPC in which the VPC Endpoint Service and NLB will be created.
- **Health checks:** Targets must register as healthy in order for the NLB to forward requests. Configure a health check that’s appropriate for your service and the protocol of the Target Group ([details](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html)).
- **Register targets:** Register the targets (see above) for the VCS service ([details](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-register-targets.html)). _It's critical to be sure targets are healthy before attempting connection from dbt Cloud._
- **Network Load Balancer (NLB)** - Requires creating a Listener that attaches to the newly created Target Group(s) for port `443` and/or `22`, as applicable.
- **Scheme:** Internal
- **IP address type:** IPv4
- **Network mapping:** Choose the VPC that the VPC Endpoint Service and NLB are being deployed in, and choose subnets from at least two Availability Zones.
- **Listeners:** Create one Listener per Target Group that maps the appropriate incoming port to the corresponding Target Group ([details](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-listeners.html)).
- **Endpoint Service** - The VPC Endpoint Service is what allows for the VPC to VPC connection, routing incoming requests to the configured load balancer.
- **Load balancer type:** Network.
- **Load balancer:** Attach the NLB created in the previous step.
- **Acceptance required (recommended)**: When enabled, requires a new connection request to the VPC Endpoint Service to be accepted by the customer before connectivity is allowed ([details](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#accept-reject-connection-requests)).
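
The following is a minimal Terraform sketch of these resources for an HTTPS-only (TCP/443) setup. It assumes an existing VPC, private subnets in two Availability Zones, and a single EC2 instance running the VCS system; all names and IDs are placeholders to adapt, and you would add a second Target Group and Listener on TCP/22 if you also need SSH.

```hcl
# Assumes an AWS provider is already configured for the account and region
# that host the self-hosted VCS instance. All IDs below are placeholders.

# Target Group routing HTTPS traffic to the VCS instance
resource "aws_lb_target_group" "vcs_https" {
  name        = "vcs-privatelink-https"
  port        = 443
  protocol    = "TCP"
  target_type = "instance"
  vpc_id      = "vpc-0123456789abcdef0" # VPC containing the VCS instance

  health_check {
    protocol = "TCP"
    port     = "traffic-port"
  }
}

# Register the EC2 instance running the VCS system as a target
resource "aws_lb_target_group_attachment" "vcs_https" {
  target_group_arn = aws_lb_target_group.vcs_https.arn
  target_id        = "i-0123456789abcdef0" # VCS EC2 instance ID
  port             = 443
}

# Internal Network Load Balancer spanning at least two Availability Zones
resource "aws_lb" "vcs" {
  name               = "vcs-privatelink-nlb"
  internal           = true
  load_balancer_type = "network"
  subnets            = ["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"]
}

# Listener forwarding incoming TCP/443 to the Target Group
resource "aws_lb_listener" "vcs_https" {
  load_balancer_arn = aws_lb.vcs.arn
  port              = 443
  protocol          = "TCP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.vcs_https.arn
  }
}

# VPC Endpoint Service fronting the NLB; requiring acceptance is recommended
resource "aws_vpc_endpoint_service" "vcs" {
  acceptance_required        = true
  network_load_balancer_arns = [aws_lb.vcs.arn]
}
```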

Once these resources have been provisioned, access needs to be granted for the dbt Labs AWS account to create a VPC Endpoint in our VPC. On the newly created VPC Endpoint Service, add a new [Allowed Principal](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#add-remove-permissions) for the appropriate dbt Labs principal:

- **AWS Account ID:** `arn:aws:iam::<account id>:root` (contact your dbt Labs account representative for appropriate account ID).
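
If you provision the endpoint service with Terraform as sketched above, one way to add this permission is with the allowed-principal resource; the account ID shown is a placeholder for the value provided by dbt Labs.

```hcl
# Allow the dbt Labs AWS account (placeholder ID) to request an endpoint
# connection to the VPC Endpoint Service created above
resource "aws_vpc_endpoint_service_allowed_principal" "dbt_labs" {
  vpc_endpoint_service_id = aws_vpc_endpoint_service.vcs.id
  principal_arn           = "arn:aws:iam::111122223333:root" # provided by dbt Labs
}
```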

### Completing the connection

To complete the connection, dbt Labs must now provision a VPC Endpoint to connect to your VPC Endpoint Service. This requires you to send the following information:

- **VPC Endpoint Service name** (a Terraform output sketch for retrieving it follows this list):

<Lightbox src="/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/vpc-endpoint-service-name.png" width="80%" title="Location of the VPC Endpoint Service name in the AWS console" />

- **DNS configuration:** If the connection to the VCS service requires a custom domain and/or URL for TLS, a private hosted zone can be configured by the dbt Labs Infrastructure team in the dbt Cloud private network. For example:
- **Private hosted zone:** `examplecorp.com`
- **DNS record:** `github.examplecorp.com`
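
If the endpoint service was created with the Terraform sketch above, the service name to share with dbt Labs can also be read from that resource, for example:

```hcl
# Emits the service name (for example, com.amazonaws.vpce.<region>.vpce-svc-xxxxxxxx)
# to pass along to dbt Labs
output "vcs_endpoint_service_name" {
  value = aws_vpc_endpoint_service.vcs.service_name
}
```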

### Accepting the connection request

When you have been notified that the resources are provisioned within the dbt Cloud environment, you must accept the endpoint connection (unless the VPC Endpoint Service is set to auto-accept connection requests). Requests can be accepted through the AWS console, as seen below, or through the AWS CLI.

<Lightbox src="/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/accept-request.png" width="80%" title="Accept the connection request" />

Once you accept the endpoint connection request, you can use the PrivateLink endpoint in dbt Cloud.

## Configure in dbt Cloud

Once dbt confirms that the PrivateLink integration is complete, you can use it in a new or existing git configuration.
1. Select **PrivateLink Endpoint** as the connection type, and your configured integrations will appear in the dropdown menu.
2. Select the configured endpoint from the dropdown list.
3. Click **Save**.

<Lightbox src="/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/vcs-setup-new.png" width="80%" title="Configuring a new git integration with PrivateLink" />

<Lightbox src="/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/vcs-setup-existing.png" width="80%" title="Editing an existing git integration with PrivateLink" />
2 changes: 1 addition & 1 deletion website/docs/docs/community-adapters.md
@@ -17,4 +17,4 @@ Community adapters are adapter plugins contributed and maintained by members of
| [TiDB](/docs/core/connect-data-platform/tidb-setup) | [Firebolt](/docs/core/connect-data-platform/firebolt-setup) | [MindsDB](/docs/core/connect-data-platform/mindsdb-setup)
| [Vertica](/docs/core/connect-data-platform/vertica-setup) | [AWS Glue](/docs/core/connect-data-platform/glue-setup) | [MySQL](/docs/core/connect-data-platform/mysql-setup) |
| [Upsolver](/docs/core/connect-data-platform/upsolver-setup) | [Databend Cloud](/docs/core/connect-data-platform/databend-setup) | [fal - Python models](/docs/core/connect-data-platform/fal-setup) |

| [TimescaleDB](https://dbt-timescaledb.debruyn.dev/) | | |
17 changes: 3 additions & 14 deletions website/docs/docs/use-dbt-semantic-layer/quickstart-sl.md
@@ -88,20 +88,9 @@ import SlSetUp from '/snippets/_new-sl-setup.md';

If you're encountering issues when defining your metrics or setting up the dbt Semantic Layer, check out this list of answers to common questions and problems.

<details>
<summary>How do I migrate from the legacy Semantic Layer to the new one?</summary>
<div>
<div>If you're using the legacy Semantic Layer, we highly recommend you <a href="https://docs.getdbt.com/docs/dbt-versions/upgrade-core-in-cloud">upgrade your dbt version </a> to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated <a href="https://docs.getdbt.com/guides/sl-migration"> migration guide</a> for more info.</div>
</div>
</details>
<details>
<summary>How are you storing my data?</summary>
User data passes through the Semantic Layer on its way back from the warehouse. dbt Labs ensures security by authenticating through the customer's data warehouse. Currently, we don't cache data for the long term, but it might temporarily stay in the system for up to 10 minutes, usually less. In the future, we'll introduce a caching feature that allows us to cache data on our infrastructure for up to 24 hours.
</details>
<details>
<summary>Is the dbt Semantic Layer open source?</summary>
The dbt Semantic Layer is proprietary; however, some components of the dbt Semantic Layer are open source, such as dbt-core and MetricFlow. <br /><br />dbt Cloud Developer or dbt Core users can define metrics in their project, including a local dbt Core project, using the dbt Cloud IDE, dbt Cloud CLI, or dbt Core CLI. However, to experience the universal dbt Semantic Layer and access those metrics using the API or downstream tools, users must be on a dbt Cloud <a href="https://www.getdbt.com/pricing/">Team or Enterprise</a> plan. <br /><br />Refer to <a href="https://docs.getdbt.com/docs/cloud/billing">Billing</a> for more information.
</details>
import SlFaqs from '/snippets/_sl-faqs.md';

<SlFaqs/>


## Next steps
53 changes: 23 additions & 30 deletions website/docs/docs/use-dbt-semantic-layer/sl-architecture.md
@@ -14,45 +14,38 @@ The dbt Semantic Layer allows you to define metrics and use various interfaces t

<Lightbox src="/img/docs/dbt-cloud/semantic-layer/sl-architecture.jpg" width="85%" title="The diagram displays how your data flows using the dbt Semantic Layer and the variety of integration tools it supports."/>



## dbt Semantic Layer components
## Components

The dbt Semantic Layer includes the following components:


| Components | Information | dbt Core users | Developer plans | Team plans | Enterprise plans | License |
| --- | --- | :---: | :---: | :---: | --- |
| --- | --- | :---: | :---: | :---: | :---: | --- |
| **[MetricFlow](/docs/build/about-metricflow)** | MetricFlow in dbt allows users to centrally define their semantic models and metrics with YAML specifications. ||||| BSL package (code is source available) |
| **MetricFlow Server**| A proprietary server that takes metric requests and generates optimized SQL for the specific data platform. ||||| Proprietary, Cloud (Team & Enterprise)|
| **Semantic Layer Gateway** | A service that passes queries to the MetricFlow server and executes the SQL generated by MetricFlow against the data platform| <br></br>|||| Proprietary, Cloud (Team & Enterprise) |
| **Semantic Layer APIs** | The interfaces allow users to submit metric queries using GraphQL and JDBC APIs. They also serve as the foundation for building first-class integrations with various tools. ||||| Proprietary, Cloud (Team & Enterprise)|
| **dbt Semantic interfaces**| A configuration spec for defining metrics, dimensions, how they link to each other, and how to query them. The [dbt-semantic-interfaces](https://github.com/dbt-labs/dbt-semantic-interfaces) is available under Apache 2.0. ||||| Proprietary, Cloud (Team & Enterprise)|
| **Service layer** | Coordinates query requests and dispatching the relevant metric query to the target query engine. This is provided through dbt Cloud and is available to all users on dbt version 1.6 or later. The service layer includes a Gateway service for executing SQL against the data platform. | || || Proprietary, Cloud (Team & Enterprise) |
| **[Semantic Layer APIs](/docs/dbt-cloud-apis/sl-api-overview)** | The interfaces allow users to submit metric queries using GraphQL and JDBC APIs. They also serve as the foundation for building first-class integrations with various tools. ||||| Proprietary, Cloud (Team & Enterprise)|


## Related questions
## Feature comparison

<details>
<summary>How do I migrate from the legacy Semantic Layer to the new one?</summary>
<div>
<div>If you're using the legacy Semantic Layer, we highly recommend you <a href="https://docs.getdbt.com/docs/dbt-versions/upgrade-core-in-cloud">upgrade your dbt version </a> to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated <a href="https://docs.getdbt.com/guides/sl-migration"> migration guide</a> for more info.</div>
</div>
</details>

<details>
<summary>How are you storing my data?</summary>
User data passes through the Semantic Layer on its way back from the warehouse. dbt Labs ensures security by authenticating through the customer's data warehouse. Currently, we don't cache data for the long term, but it might temporarily stay in the system for up to 10 minutes, usually less. In the future, we'll introduce a caching feature that allows us to cache data on our infrastructure for up to 24 hours.
</details>
<details>
<summary>Is the dbt Semantic Layer open source?</summary>
The dbt Semantic Layer is proprietary; however, some components of the dbt Semantic Layer are open source, such as dbt-core and MetricFlow. <br /><br />dbt Cloud Developer or dbt Core users can define metrics in their project, including a local dbt Core project, using the dbt Cloud IDE, dbt Cloud CLI, or dbt Core CLI. However, to experience the universal dbt Semantic Layer and access those metrics using the API or downstream tools, users must be on a dbt Cloud <a href="https://www.getdbt.com/pricing/">Team or Enterprise</a> plan. <br /><br />Refer to <a href="https://docs.getdbt.com/docs/cloud/billing">Billing</a> for more information.
</details>
<details>
<summary>Is there a dbt Semantic Layer discussion hub?</summary>
<div>
<div>Yes absolutely! Join the <a href="https://getdbt.slack.com">dbt Slack community</a> and <a href="https://getdbt.slack.com/archives/C046L0VTVR6">#dbt-cloud-semantic-layer slack channel</a> for all things related to the dbt Semantic Layer.
</div>
</div>
</details>
The following table compares the features available in dbt Cloud with those that are source available in dbt Core:

| Feature | MetricFlow (source available) | dbt Semantic Layer with dbt Cloud |
| ----- | :------: | :------: |
| Define metrics and semantic models in dbt using the MetricFlow spec |||
| Generate SQL from a set of config files |||
| Query metrics and dimensions through the command line interface (CLI) |||
| Query dimension, entity, and metric metadata through the CLI |||
| Query metrics and dimensions through semantic APIs (ADBC, GQL) |||
| Connect to downstream integrations (Tableau, Hex, Mode, Google Sheets, and so on) |||
| Create and run Exports to save metrics queries as tables in your data platform. || Coming soon |

## FAQs

import SlFaqs from '/snippets/_sl-faqs.md';

<SlFaqs/>

</VersionBlock>

1 change: 1 addition & 0 deletions website/sidebars.js
@@ -134,6 +134,7 @@ const sidebarSettings = {
"docs/cloud/secure/databricks-privatelink",
"docs/cloud/secure/redshift-privatelink",
"docs/cloud/secure/postgres-privatelink",
"docs/cloud/secure/vcs-privatelink",
"docs/cloud/secure/ip-restrictions",
],
}, // PrivateLink