
Commit

Merge branch 'current' into expand-releases
nghi-ly authored Dec 5, 2023
2 parents df98d81 + 8fe7890 commit 96c815d
Showing 59 changed files with 545 additions and 121 deletions.
2 changes: 1 addition & 1 deletion website/docs/community/resources/getting-help.md
@@ -60,4 +60,4 @@ If you want to receive dbt training, check out our [dbt Learn](https://learn.get
- Billing
- Bug reports related to the web interface

As a rule of thumb, if you are using dbt Cloud, but your problem is related to code within your dbt project, then please follow the above process rather than reaching out to support.
As a rule of thumb, if you are using dbt Cloud, but your problem is related to code within your dbt project, then please follow the above process rather than reaching out to support. Refer to [dbt Cloud support](/docs/dbt-support) for more information.
2 changes: 2 additions & 0 deletions website/docs/docs/build/materializations.md
@@ -14,6 +14,8 @@ pagination_next: "docs/build/incremental-models"
- ephemeral
- materialized view

You can also configure [custom materializations](/guides/create-new-materializations?step=1) in dbt. Custom materializations are a powerful way to extend dbt's functionality to meet your specific needs.


## Configuring materializations
By default, dbt models are materialized as "views". Models can be configured with a different materialization by supplying the `materialized` configuration parameter as shown below.
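
The original example is truncated in this diff view; as a rough, hypothetical sketch (the `my_new_project`, `staging`, and `marts` names are placeholders, not taken from the source), a project-level configuration might look like:

```yaml
# dbt_project.yml: illustrative sketch only; replace the project and folder names with your own
models:
  my_new_project:
    staging:
      +materialized: view   # models under models/staging/ build as views
    marts:
      +materialized: table  # models under models/marts/ build as tables
```

Individual models can also override this in their own SQL files with a `config()` block, for example `{{ config(materialized='incremental') }}`.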
2 changes: 1 addition & 1 deletion website/docs/docs/build/semantic-models.md
@@ -43,7 +43,7 @@ semantic_models:
- name: the_name_of_the_semantic_model ## Required
description: same as always ## Optional
model: ref('some_model') ## Required
default: ## Required
defaults: ## Required
agg_time_dimension: dimension_name ## Required if the model contains dimensions
entities: ## Required
- see more information in entities
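
For orientation, a filled-in (hypothetical) semantic model following this spec might look like the sketch below; the `orders` model, column names, and day granularity are placeholders rather than anything from the source.

```yaml
# Hypothetical example only; adapt the names, entities, dimensions, and measures to your project
semantic_models:
  - name: orders
    description: Order fact table at the individual order grain.
    model: ref('fct_orders')
    defaults:
      agg_time_dimension: ordered_at   # required because this model defines dimensions
    entities:
      - name: order_id
        type: primary
    dimensions:
      - name: ordered_at
        type: time
        type_params:
          time_granularity: day
    measures:
      - name: order_total
        agg: sum
```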
18 changes: 3 additions & 15 deletions website/docs/docs/build/sl-getting-started.md
@@ -74,21 +74,9 @@ import SlSetUp from '/snippets/_new-sl-setup.md';

If you're encountering some issues when defining your metrics or setting up the dbt Semantic Layer, check out a list of answers to some of the questions or problems you may be experiencing.

<details>
<summary>How do I migrate from the legacy Semantic Layer to the new one?</summary>
<div>
<div>If you're using the legacy Semantic Layer, we highly recommend you <a href="https://docs.getdbt.com/docs/dbt-versions/upgrade-core-in-cloud">upgrade your dbt version </a> to dbt v1.6 or higher to use the new dbt Semantic Layer. Refer to the dedicated <a href="https://docs.getdbt.com/guides/sl-migration"> migration guide</a> for more info.</div>
</div>
</details>
<details>
<summary>How are you storing my data?</summary>
User data passes through the Semantic Layer on its way back from the warehouse. dbt Labs ensures security by authenticating through the customer's data warehouse. Currently, we don't cache data for the long term, but it might temporarily stay in the system for up to 10 minutes, usually less. In the future, we'll introduce a caching feature that allows us to cache data on our infrastructure for up to 24 hours.
</details>

<details>
<summary>Is the dbt Semantic Layer open source?</summary>
The dbt Semantic Layer is proprietary; however, some components of the dbt Semantic Layer are open source, such as dbt-core and MetricFlow. <br /><br />dbt Cloud Developer or dbt Core users can define metrics in their project, including a local dbt Core project, using the dbt Cloud IDE, dbt Cloud CLI, or dbt Core CLI. However, to experience the universal dbt Semantic Layer and access those metrics using the API or downstream tools, users must be on a dbt Cloud <a href="https://www.getdbt.com/pricing/">Team or Enterprise</a> plan. <br /><br />Refer to <a href="https://docs.getdbt.com/docs/cloud/billing">Billing</a> for more information.
</details>
import SlFaqs from '/snippets/_sl-faqs.md';

<SlFaqs/>


## Next steps
4 changes: 2 additions & 2 deletions website/docs/docs/cloud/about-cloud/regions-ip-addresses.md
@@ -11,8 +11,8 @@ dbt Cloud is [hosted](/docs/cloud/about-cloud/architecture) in multiple regions

| Region | Location | Access URL | IP addresses | Developer plan | Team plan | Enterprise plan |
|--------|----------|------------|--------------|----------------|-----------|-----------------|
| North America multi-tenant [^1] | AWS us-east-1 (N. Virginia) | cloud.getdbt.com | 52.45.144.63 <br /> 54.81.134.249 <br />52.22.161.231 ||||
| North America Cell 1 [^1] | AWS us-east-1 (N.Virginia) | {account prefix}.us1.dbt.com | [Located in Account Settings](#locating-your-dbt-cloud-ip-addresses) ||||
| North America multi-tenant [^1] | AWS us-east-1 (N. Virginia) | cloud.getdbt.com | 52.45.144.63 <br /> 54.81.134.249 <br />52.22.161.231 <br />52.3.77.232 <br />3.214.191.130 <br />34.233.79.135 ||||
| North America Cell 1 [^1] | AWS us-east-1 (N. Virginia) | {account prefix}.us1.dbt.com | 52.45.144.63 <br /> 54.81.134.249 <br />52.22.161.231 <br />52.3.77.232 <br />3.214.191.130 <br />34.233.79.135 ||||
| EMEA [^1] | AWS eu-central-1 (Frankfurt) | emea.dbt.com | 3.123.45.39 <br /> 3.126.140.248 <br /> 3.72.153.148 ||||
| APAC [^1] | AWS ap-southeast-2 (Sydney)| au.dbt.com | 52.65.89.235 <br /> 3.106.40.33 <br /> 13.239.155.206 <br />||||
| Virtual Private dbt or Single tenant | Customized | Customized | Ask [Support](/community/resources/getting-help#dbt-cloud-support) for your IPs ||||
2 changes: 2 additions & 0 deletions website/docs/docs/cloud/billing.md
@@ -126,6 +126,8 @@ All included successful models built numbers above reflect our most current pric

As an Enterprise customer, you pay annually via invoice, monthly in arrears for additional usage (if applicable), and may benefit from negotiated usage rates. Please refer to your order form or contract for your specific pricing details, or [contact the account team](https://www.getdbt.com/contact-demo) with any questions.

Enterprise plan billing information is not available in the dbt Cloud UI. Changes are handled through your dbt Labs Solutions Architect or account team manager.

### Legacy plans

Customers who purchased the dbt Cloud Team plan before August 11, 2023, remain on a legacy pricing plan as long as their account is in good standing. The legacy pricing plan is based on seats and includes unlimited models, subject to reasonable use.
1 change: 1 addition & 0 deletions website/docs/docs/cloud/secure/about-privatelink.md
@@ -23,3 +23,4 @@ dbt Cloud supports the following data platforms for use with the PrivateLink fea
- [Databricks](/docs/cloud/secure/databricks-privatelink)
- [Redshift](/docs/cloud/secure/redshift-privatelink)
- [Postgres](/docs/cloud/secure/postgres-privatelink)
- [VCS](/docs/cloud/secure/vcs-privatelink)
82 changes: 82 additions & 0 deletions website/docs/docs/cloud/secure/vcs-privatelink.md
@@ -0,0 +1,82 @@
---
title: "Configuring PrivateLink for self-hosted cloud version control systems (VCS)"
id: vcs-privatelink
description: "Setting up a PrivateLink connection between dbt Cloud and an organization’s cloud hosted git server"
sidebar_label: "PrivateLink for VCS"
---

import SetUpPages from '/snippets/_available-tiers-privatelink.md';

<SetUpPages features={'/snippets/_available-tiers-privatelink.md'}/>

AWS PrivateLink provides private connectivity from dbt Cloud to your self-hosted cloud version control system (VCS) service by routing requests through your virtual private cloud (VPC). This type of connection does not require you to publicly expose an endpoint to your VCS repositories or for requests to the service to traverse the public internet, ensuring the most secure connection possible. AWS recommends PrivateLink connectivity as part of its [Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html) and details this particular pattern in the **Shared Services** section of the [AWS PrivateLink whitepaper](https://docs.aws.amazon.com/pdfs/whitepapers/latest/aws-privatelink/aws-privatelink.pdf).

You will learn, at a high level, the resources necessary to implement this solution. Cloud environments and provisioning processes vary greatly, so information from this guide may need to be adapted to fit your requirements.

## PrivateLink connection overview

<Lightbox src="/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/privatelink-vcs-architecture.png" width="80%" title="High level overview of the dbt Cloud and AWS PrivateLink for VCS architecture" />

### Required resources for creating a connection

Creating an Interface VPC PrivateLink connection requires creating multiple AWS resources in your AWS account(s) and private network containing the self-hosted VCS instance. You are responsible for provisioning and maintaining these resources. Once provisioned, connection information and permissions are shared with dbt Labs to complete the connection, allowing for direct VPC to VPC private connectivity.

This approach is distinct from and does not require you to implement VPC peering between your AWS account(s) and dbt Cloud.

You need these resources to create a PrivateLink connection, which allows the dbt Cloud application to connect to your self-hosted cloud VCS. These resources can be created via the AWS Console, AWS CLI, or Infrastructure-as-Code such as [Terraform](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) or [AWS CloudFormation](https://aws.amazon.com/cloudformation/); a CloudFormation-style sketch follows the list below.

- **Target Group(s)** - A [Target Group](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html) is attached to a [Listener](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-listeners.html) on the NLB and is responsible for routing incoming requests to healthy targets in the group. If connecting to the VCS system over both SSH and HTTPS, two **Target Groups** will need to be created.
- **Target Type (choose most applicable):**
- **Instance/ASG:** Select existing EC2 instance(s) where the VCS system is running, or [an autoscaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/attach-load-balancer-asg.html) (ASG) to automatically attach any instances launched from that ASG.
- **Application Load Balancer (ALB):** Select an ALB that already has VCS EC2 instances attached (HTTP/S traffic only).
- **IP Addresses:** Select the IP address(es) of the EC2 instances where the VCS system is installed. Keep in mind that the IP of the EC2 instance can change if the instance is relaunched for any reason.
- **Protocol/Port:** Choose one protocol and port pair per Target Group, for example:
- TG1 - SSH: TCP/22
- TG2 - HTTPS: TCP/443 or TLS if you want to attach a certificate to decrypt TLS connections ([details](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html)).
- **VPC:** Choose the VPC in which the VPC Endpoint Service and NLB will be created.
- **Health checks:** Targets must register as healthy in order for the NLB to forward requests. Configure a health check that’s appropriate for your service and the protocol of the Target Group ([details](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html)).
- **Register targets:** Register the targets (see above) for the VCS service ([details](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-register-targets.html)). _It's critical to be sure targets are healthy before attempting connection from dbt Cloud._
- **Network Load Balancer (NLB)** - Requires creating a Listener that attaches to the newly created Target Group(s) for port `443` and/or `22`, as applicable.
- **Scheme:** Internal
- **IP address type:** IPv4
- **Network mapping:** Choose the VPC that the VPC Endpoint Service and NLB are being deployed in, and choose subnets from at least two Availability Zones.
- **Listeners:** Create one Listener per Target Group that maps the appropriate incoming port to the corresponding Target Group ([details](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-listeners.html)).
- **Endpoint Service** - The VPC Endpoint Service is what allows for the VPC to VPC connection, routing incoming requests to the configured load balancer.
- **Load balancer type:** Network.
- **Load balancer:** Attach the NLB created in the previous step.
- **Acceptance required (recommended)**: When enabled, requires a new connection request to the VPC Endpoint Service to be accepted by the customer before connectivity is allowed ([details](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#accept-reject-connection-requests)).
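
As a rough illustration of how these pieces fit together, the sketch below expresses an HTTPS-only setup (Target Group, internal NLB, Listener, Endpoint Service, and the dbt Labs allowed principal) in CloudFormation. It is a hedged example rather than an official template: the parameter names, the single instance target, and the `<dbt-labs-account-id>` placeholder are assumptions you must replace with your own values, and an SSH Target Group with a port `22` Listener would follow the same pattern.

```yaml
# Illustrative CloudFormation sketch (not an official dbt Labs template).
# HTTPS-only PrivateLink plumbing in front of a self-hosted VCS instance.
Parameters:
  VcsVpcId:
    Type: AWS::EC2::VPC::Id
  VcsInstanceId:
    Type: AWS::EC2::Instance::Id            # EC2 instance running the VCS service
  SubnetIds:
    Type: List<AWS::EC2::Subnet::Id>        # subnets in at least two Availability Zones

Resources:
  VcsHttpsTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      VpcId: !Ref VcsVpcId
      TargetType: instance
      Protocol: TCP
      Port: 443
      HealthCheckProtocol: TCP               # adjust to a check appropriate for your VCS
      Targets:
        - Id: !Ref VcsInstanceId

  VcsNlb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Type: network                          # Network Load Balancer
      Scheme: internal
      IpAddressType: ipv4
      Subnets: !Ref SubnetIds

  VcsHttpsListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref VcsNlb
      Protocol: TCP
      Port: 443
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref VcsHttpsTargetGroup

  VcsEndpointService:
    Type: AWS::EC2::VPCEndpointService
    Properties:
      NetworkLoadBalancerArns:
        - !Ref VcsNlb
      AcceptanceRequired: true               # recommended: approve each connection request

  VcsEndpointServicePermissions:
    Type: AWS::EC2::VPCEndpointServicePermissions
    Properties:
      ServiceId: !Ref VcsEndpointService
      AllowedPrincipals:
        - arn:aws:iam::<dbt-labs-account-id>:root   # placeholder; request the account ID from dbt Labs
```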

Once these resources have been provisioned, access needs to be granted for the dbt Labs AWS account to create a VPC Endpoint in our VPC. On the newly created VPC Endpoint Service, add a new [Allowed Principal](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#add-remove-permissions) for the appropriate dbt Labs principal:

- **AWS Account ID:** `arn:aws:iam::<account id>:root` (contact your dbt Labs account representative for appropriate account ID).

### Completing the connection

To complete the connection, dbt Labs must now provision a VPC Endpoint to connect to your VPC Endpoint Service. This requires that you send the following information:

- VPC Endpoint Service name:

<Lightbox src="/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/vpc-endpoint-service-name.png" width="80%" title="Location of the VPC Endpoint Service name in the AWS console" />

- **DNS configuration:** If the connection to the VCS service requires a custom domain and/or URL for TLS, a private hosted zone can be configured by the dbt Labs Infrastructure team in the dbt Cloud private network. For example:
- **Private hosted zone:** `examplecorp.com`
- **DNS record:** `github.examplecorp.com`

### Accepting the connection request

When you have been notified that the resources are provisioned within the dbt Cloud environment, you must accept the endpoint connection (unless the VPC Endpoint Service is set to auto-accept connection requests). Requests can be accepted through the AWS console, as seen below, or through the AWS CLI.

<Lightbox src="/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/accept-request.png" width="80%" title="Accept the connection request" />

Once you accept the endpoint connection request, you can use the PrivateLink endpoint in dbt Cloud.

## Configure in dbt Cloud

Once dbt Labs confirms that the PrivateLink integration is complete, you can use it in a new or existing git configuration.
1. Select **PrivateLink Endpoint** as the connection type, and your configured integrations will appear in the dropdown menu.
2. Select the configured endpoint from the dropdown list.
3. Click **Save**.

<Lightbox src="/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/vcs-setup-new.png" width="80%" title="Configuring a new git integration with PrivateLink" />

<Lightbox src="/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/vcs-setup-existing.png" width="80%" title="Editing an existing git integration with PrivateLink" />
6 changes: 4 additions & 2 deletions website/docs/docs/collaborate/explore-projects.md
@@ -2,7 +2,7 @@
title: "Explore your dbt projects"
sidebar_label: "Explore dbt projects"
description: "Learn about dbt Explorer and how to interact with it to understand, improve, and leverage your data pipelines."
pagination_next: "docs/collaborate/explore-multiple-projects"
pagination_next: "docs/collaborate/model-performance"
pagination_prev: null
---

@@ -36,7 +36,7 @@ For a richer experience with dbt Explorer, you must:
- Run [dbt source freshness](/reference/commands/source#dbt-source-freshness) within a job in the environment to view source freshness data.
- Run [dbt snapshot](/reference/commands/snapshot) or [dbt build](/reference/commands/build) within a job in the environment to view snapshot details.

Richer and more timely metadata will become available as dbt, the Discovery API, and the underlying dbt Cloud platform evolves.
Richer and more timely metadata will become available as dbt Core, the Discovery API, and the underlying dbt Cloud platform evolve.

## Explore your project's lineage graph {#project-lineage}

@@ -46,6 +46,8 @@ If you don't see the project lineage graph immediately, click **Render Lineage**

The nodes in the lineage graph represent the project’s resources and the edges represent the relationships between the nodes. Nodes are color-coded and include iconography according to their resource type.

By default, dbt Explorer shows the project's [applied state](/docs/dbt-cloud-apis/project-state#definition-logical-vs-applied-state-of-dbt-nodes) lineage. That is, it shows models that have been successfully built and are available to query, not just the models defined in the project.

To explore the lineage graphs of tests and macros, view [their resource details pages](#view-resource-details). By default, dbt Explorer excludes these resources from the full lineage graph unless a search query returns them as results.

To interact with the full lineage graph, you can:
41 changes: 41 additions & 0 deletions website/docs/docs/collaborate/model-performance.md
@@ -0,0 +1,41 @@
---
title: "Model performance"
sidebar_label: "Model performance"
description: "Learn about model performance in dbt Explorer and how run metadata can help you fine-tune your projects and deployments."
---

dbt Explorer provides metadata on dbt Cloud runs for in-depth model performance and quality analysis. This feature assists in reducing infrastructure costs and saving time for data teams by highlighting where to fine-tune projects and deployments &mdash; such as model refactoring or job configuration adjustments.

<LoomVideo id='98f33b3b7a374df0b7c04747eae6ef44' />

:::tip Beta

The model performance beta feature is now available in dbt Explorer! Check it out!
:::

## The Performance overview page

You can pinpoint areas for performance enhancement by using the Performance overview page. This page presents a comprehensive analysis across all project models and displays the longest-running models, those most frequently executed, and the ones with the highest failure rates during runs/tests. Data can be segmented by environment and job type, which can offer insights into:

- Most executed models (total count).
- Models with the longest execution time (average duration).
- Models with the most failures, detailing run failures (percentage and count) and test failures (percentage and count).

Each data point links to individual models in Explorer.

<Lightbox src="/img/docs/collaborate/dbt-explorer/example-performance-overview-page.png" width="80%" title="Example of Performance overview page"/>

You can view historical metadata for up to the past three months. Select the time horizon using the filter, which defaults to a two-week lookback.

<Lightbox src="/img/docs/collaborate/dbt-explorer/ex-2-week-default.png" title="Example of dropdown"/>

## The Model performance tab

You can view trends in execution times, counts, and failures by using the Model performance tab for historical performance analysis. Daily execution data includes:

- Average model execution time.
- Model execution counts, including failures/errors (total sum).

Clicking on a data point reveals a table listing all job runs for that day, with each row providing a direct link to the details of a specific run.

<Lightbox src="/img/docs/collaborate/dbt-explorer/example-model-performance-tab.png" title="Example of the Model performance tab"/>
