Merge branch 'current' into nfiann-updated-at
nataliefiann authored Sep 26, 2024
2 parents 895c3fb + dc237d3 commit e8abadf
Showing 58 changed files with 321 additions and 232 deletions.
23 changes: 20 additions & 3 deletions website/docs/docs/build/environment-variables.md
@@ -139,10 +139,14 @@ _The following variables are currently only available for GitHub, GitLab, and Az

Environment variables can be used in many ways, and they give you the power and flexibility to accomplish tasks more easily in dbt Cloud.

#### Clone private packages
<Expandable alt_header="Clone private packages">

Now that you can set secrets as environment variables, you can pass git tokens into your package HTTPS URLs to allow for on-the-fly cloning of private repositories. Read more about enabling [private package cloning](/docs/build/packages#private-packages).
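For reference, a sketch of how this can look in `packages.yml` — the repository URL and tag are illustrative, and the `DBT_ENV_SECRET_` prefix marks the variable as a secret in dbt Cloud:

```yaml
# packages.yml — sketch; the repository URL and revision are illustrative
packages:
  - git: "https://{{ env_var('DBT_ENV_SECRET_GIT_CREDENTIAL') }}@github.com/dbt-labs/awesome_repo.git"
    revision: 1.0.0
```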

#### Dynamically set your warehouse in your Snowflake connection
</Expandable>

<Expandable alt_header="Dynamically set your warehouse in your Snowflake connection">

Environment variables make it possible to dynamically change the Snowflake virtual warehouse size depending on the job. Instead of calling the warehouse name directly in your project connection, you can reference an environment variable which will get set to a specific virtual warehouse at runtime.

For example, suppose you'd like to run a full-refresh job in an XL warehouse, but your incremental job only needs to run in a medium-sized warehouse. Both jobs are configured in the same dbt Cloud environment. In your connection configuration, you can use an environment variable to set the warehouse name to `{{env_var('DBT_WAREHOUSE')}}`. Then in the job settings, you can set a different value for the `DBT_WAREHOUSE` environment variable depending on the job's workload.
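As a sketch, the connection (or extended attributes) references the variable, which dbt Cloud renders at runtime:

```yaml
# Sketch — the warehouse name is resolved from the job's environment variables at runtime
warehouse: "{{ env_var('DBT_WAREHOUSE') }}"
```

Each job then supplies its own value — for example, `transforming_xl_wh` for the full-refresh job and `transforming_m_wh` for the incremental job (warehouse names are illustrative).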
@@ -163,7 +167,10 @@ However, there are some limitations when using env vars with Snowflake OAuth Con
Note that if you supply an environment variable in the account/host field, the Snowflake OAuth connection will **fail** to connect. This happens because the field doesn't pass through Jinja rendering, so dbt Cloud passes the literal `env_var` code into a URL string like `{{ env_var("DBT_ACCOUNT_HOST_NAME") }}.snowflakecomputing.com`, which is an invalid hostname. Use [extended attributes](/docs/deploy/deploy-environments#deployment-credentials) instead.
:::
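A minimal extended-attributes sketch for that case — the account identifier is illustrative, and the point is to supply the literal value rather than an `env_var` call:

```yaml
# Extended attributes — use the literal value, since this field isn't Jinja-rendered
account: abc12345.us-east-1
```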

#### Audit your run metadata
</Expandable>

<Expandable alt_header="Audit your run metadata">

Here's another motivating example that uses the dbt Cloud run ID, which is set automatically at each run. This additional data field can be used for auditing and debugging:

```sql
@@ -189,3 +196,13 @@ select *,

from users_aggregated
```
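The middle of that model is collapsed in the hunk above; a minimal sketch of the full pattern follows — the column alias and the `'manual'` fallback (for runs outside dbt Cloud) are illustrative:

```sql
select *,
    -- tag each row with the dbt Cloud run ID for auditing;
    -- 'manual' is an illustrative fallback for local runs
    '{{ env_var("DBT_CLOUD_RUN_ID", "manual") }}' as _audit_run_id

from users_aggregated
```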

</Expandable>

<Expandable alt_header="Configure Semantic Layer credentials">

import SLEnvVars from '/snippets/_sl-env-vars.md';

<SLEnvVars/>

</Expandable>
10 changes: 7 additions & 3 deletions website/docs/docs/build/metricflow-time-spine.md
@@ -14,9 +14,10 @@ MetricFlow requires you to define a time-spine table as a model-level configurat
- [Conversion metrics](/docs/build/conversion)
- [Slowly Changing Dimensions](/docs/build/dimensions#scd-type-ii)
- [Metrics](/docs/build/metrics-overview) with the `join_to_timespine` configuration set to true

To see the generated SQL for the metric and dimension types that use time-spine joins, refer to the respective documentation or add the `compile=True` flag when querying the Semantic Layer to return the compiled SQL.
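For example, over the Semantic Layer JDBC API, a query of this shape returns the compiled SQL rather than executing the query — the metric name is illustrative:

```sql
-- returns compiled SQL instead of query results; metric name is illustrative
select * from {{
    semantic_layer.query(metrics=['revenue'],
                         group_by=[Dimension('metric_time')],
                         compile=True)
}}
```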

#### Configuring time-spine
## Configuring time-spine in YAML
- You only need to configure time-spine models that the Semantic Layer should recognize.
- At a minimum, define a time-spine table for a daily grain.
- You can optionally define a time-spine table for a different granularity, like hourly.
@@ -66,6 +67,9 @@ The example creates a time spine at a daily grain and an hourly grain. A few thi
* You can add a time spine for each granularity you intend to use if query efficiency matters more to you than configuration time or storage constraints. For most engines, the query performance difference should be minimal, and transforming your time spine to a coarser grain at query time shouldn't add significant overhead to your queries.
* We recommend having a time spine at the finest grain used in any of your dimensions to avoid unexpected errors. For example, if you have dimensions at an hourly grain, you should also have a time spine at an hourly grain.
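As a sketch, the model-level configuration might look like this — the model and column names are illustrative:

```yaml
# models/_models.yml — sketch; model and column names are illustrative
models:
  - name: metricflow_time_spine
    time_spine:
      standard_granularity_column: date_day  # column the Semantic Layer joins to
    columns:
      - name: date_day
        granularity: day
```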

## Example time-spine tables

### Daily
<File name="metricflow_time_spine.sql">

<VersionBlock lastVersion="1.6">
@@ -134,7 +138,7 @@ and date_hour < dateadd(day, 30, current_timestamp())
```
</VersionBlock>


### Daily (BigQuery)
Use this model if you're using BigQuery, which uses `DATE()` instead of `TO_DATE()`:
<VersionBlock lastVersion="1.6">

@@ -197,7 +201,7 @@

</File>

## Hourly time spine
### Hourly
<File name='time_spine_hourly.sql'>

```sql
```
4 changes: 1 addition & 3 deletions website/docs/docs/cloud-integrations/semantic-layer/excel.md
@@ -6,8 +6,6 @@ tags: [Semantic Layer]
sidebar_label: "Microsoft Excel"
---

# Microsoft Excel <Lifecycle status='preview'/>

The dbt Semantic Layer offers a seamless integration with Excel Online and Desktop through a custom menu. This add-on allows you to build dbt Semantic Layer queries and return data on your metrics directly within Excel.

## Prerequisites
@@ -25,7 +23,7 @@ import SLCourses from '/snippets/_sl-course.md';

## Installing the add-on

The dbt Semantic Layer Microsoft Excel integration is available to download directly on [Microsoft AppSource](https://appsource.microsoft.com/en-us/marketplace/apps?product=office). You can choose to download this add in for both [Excel Desktop](https://pages.store.office.com/addinsinstallpage.aspx?assetid=WA200007100&rs=en-US&correlationId=4132ecd1-425d-982d-efb4-de94ebc83f26) and [Excel Online](https://pages.store.office.com/addinsinstallpage.aspx?assetid=WA200007100&rs=en-US&correlationid=4132ecd1-425d-982d-efb4-de94ebc83f26&isWac=True)
The dbt Semantic Layer Microsoft Excel integration is available to download directly on [Microsoft AppSource](https://appsource.microsoft.com/en-us/product/office/WA200007100?tab=Overview). You can choose to download this add-on for both [Excel Desktop](https://pages.store.office.com/addinsinstallpage.aspx?assetid=WA200007100&rs=en-US&correlationId=4132ecd1-425d-982d-efb4-de94ebc83f26) and [Excel Online](https://pages.store.office.com/addinsinstallpage.aspx?assetid=WA200007100&rs=en-US&correlationid=4132ecd1-425d-982d-efb4-de94ebc83f26&isWac=True).

1. In Excel, authenticate with your host, dbt Cloud environment ID, and service token.
- Access your Environment ID, Host, and URLs in your dbt Cloud Semantic Layer settings. Generate a service token in the Semantic Layer settings or the API tokens settings.
41 changes: 24 additions & 17 deletions website/docs/docs/cloud/configure-cloud-cli.md
@@ -52,21 +52,29 @@ Once you install the dbt Cloud CLI, you need to configure it to connect to a dbt

The config file looks like this:

```yaml
version: "1"
context:
  active-project: "<project id from the list below>"
  active-host: "<active host from the list>"
  defer-env-id: "<optional defer environment id>"
projects:
- project-id: "<project-id>"
  account-host: "<account-host>"
  api-key: "<user-api-key>"

- project-id: "<project-id>"
  account-host: "<account-host>"
  api-key: "<user-api-key>"
```
```yaml
version: "1"
context:
  active-project: "<project id from the list below>"
  active-host: "<active host from the list>"
  defer-env-id: "<optional defer environment id>"
projects:
- project-name: "<project-name>"
  project-id: "<project-id>"
  account-name: "<account-name>"
  account-id: "<account-id>"
  account-host: "<account-host>" # for example, "cloud.getdbt.com"
  token-name: "<pat-or-service-token-name>"
  token-value: "<pat-or-service-token-value>"

- project-name: "<project-name>"
  project-id: "<project-id>"
  account-name: "<account-name>"
  account-id: "<account-id>"
  account-host: "<account-host>" # for example, "cloud.getdbt.com"
  token-name: "<pat-or-service-token-name>"
  token-value: "<pat-or-service-token-value>"
```
3. After downloading the config file and creating your directory, navigate to a dbt project in your terminal:
@@ -100,7 +108,6 @@ To set environment variables in the dbt Cloud CLI for your dbt project:
2. Then select **Profile Settings**, then **Credentials**.
3. Click on your project and scroll to the **Environment Variables** section.
4. Click **Edit** on the lower right and then set the user-level environment variables.
- Note, when setting up the [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl), using [environment variables](/docs/build/environment-variables) like `{{env_var('DBT_WAREHOUSE')}}` is not supported. You should use the actual credentials instead.

## Use the dbt Cloud CLI

@@ -195,4 +202,4 @@ This command moves the `dbt_cloud.yml` from the `Downloads` folder to the `.dbt`
By default, [all artifacts](/reference/artifacts/dbt-artifacts) are downloaded when you execute dbt commands from the dbt Cloud CLI. To skip downloading these files, add `--download-artifacts=false` to the command you want to run. This can improve run-time performance but might break workflows that depend on assets like the [manifest](/reference/artifacts/manifest-json).
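For example (per the text above, the flag can be appended to any dbt command you run from the Cloud CLI):

```shell
dbt run --download-artifacts=false
```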


</Expandable>
@@ -33,7 +33,8 @@ Starting July 2024, connection management has moved from the project level to th

<Lightbox src="/img/docs/dbt-cloud/cloud-configuring-dbt-cloud/connections-legacy-model.png" width="55%" title="Previous connection model"/>

The following connection management section describes these changes.
Connections created with APIs before this change cannot be accessed with the [latest APIs](https://docs.getdbt.com/dbt-cloud/api-v3#/operations/List%20Account%20Connections). dbt Labs recommends [recreating the connections](https://docs.getdbt.com/dbt-cloud/api-v3#/operations/Create%20Account%20Connection) with the latest APIs.


:::

@@ -101,6 +101,7 @@ Nice job, you're ready to start developing and building models 🎉!
### Considerations
- To improve your experience using dbt Cloud, we suggest that you turn off ad blockers. This is because some project file names, such as `google_adwords.sql`, might resemble ad traffic and trigger ad blockers.
- To preserve performance, there's a file size limitation for repositories over 6 GB. If you have a repo over 6 GB, please contact [dbt Support](mailto:[email protected]) before running dbt Cloud.
- The IDE's idle session timeout is one hour.
- <Expandable alt_header="About the start up process and work retention">

### Start-up process