diff --git a/README.md b/README.md index c749fedf95a..d306651f545 100644 --- a/README.md +++ b/README.md @@ -62,18 +62,3 @@ You can click a link available in a Vercel bot PR comment to see and review your Advisory: - If you run into an `fatal error: 'vips/vips8' file not found` error when you run `npm install`, you may need to run `brew install vips`. Warning: this one will take a while -- go ahead and grab some coffee! - -## Running the Cypress tests locally - -Method 1: Utilizing the Cypress GUI -1. `cd` into the repo: `cd docs.getdbt.com` -2. `cd` into the `website` subdirectory: `cd website` -3. Install the required node packages: `npm install` -4. Run `npx cypress open` to open the Cypress GUI, and choose `E2E Testing` as the Testing Type, before finally selecting your browser and clicking `Start E2E testing in {broswer}` -5. Click on a test and watch it run! - -Method 2: Running the Cypress E2E tests headlessly -1. `cd` into the repo: `cd docs.getdbt.com` -2. `cd` into the `website` subdirectory: `cd website` -3. Install the required node packages: `npm install` -4. Run `npx cypress run` diff --git a/contributing/developer-blog.md b/contributing/developer-blog.md deleted file mode 100644 index 0d9b3becba2..00000000000 --- a/contributing/developer-blog.md +++ /dev/null @@ -1,67 +0,0 @@ - -* [Contributing](#contributing) -* [Core Principles](#core-principles) - -## Contributing - -The dbt Developer Blog is a place where analytics practitioners can go to share their knowledge with the community. Analytics Engineering is a discipline we’re all building together. The developer blog exists to cultivate the collective knowledge that exists on how to build and scale effective data teams. - -We currently have editorial capacity for a few Community contributed developer blogs per quarter - if we are oversubscribed we suggest you post on another platform or hold off until the editorial team is ready to take on more posts. - -### What makes a good developer blog post? - -- The short answer: Practical, hands on analytics engineering tutorials and stories - - [Slim CI/CD with Bitbucket](https://docs.getdbt.com/blog/slim-ci-cd-with-bitbucket-pipelines) - - [So You Want to Build a dbt Package](https://docs.getdbt.com/blog/so-you-want-to-build-a-package) - - [Founding an Analytics Engineering Team](https://docs.getdbt.com/blog/founding-an-analytics-engineering-team-smartsheet) -- See the [Developer Blog Core Principles](#core-principles) - -### How do I submit a proposed post? - -To submit a proposed post, open a `Contribute to the dbt Developer Blog` issue on the [Developer Hub repo](https://github.com/dbt-labs/docs.getdbt.com/issues/new/choose). You will be asked for: - -- A short (one paragraph) summary of the post you’d like to publish -- An outline of the post - -You’ll hear back from a member of the dbt Labs teams within 7 days with one of three responses: - -- The post looks good to go as is! We’ll ask you to start creating a draft based off of the initial outline you submitted -- Proposed changes to the outline. This could be additional focus on a topic you mention that’s of high community interest or a tweak to the structure to help with narrative flow -- Not a fit for the developer blog right now. We hugely appreciate *any* interest in submitting to the Developer Blog - right now our biggest backlog is capacity to help folks get these published. See below on how we are thinking about and evaluating potential posts. - -### What is the process once my blog is accepted? 
- -Once a blog is accepted, we’ll ask you for a date when we can expect the draft by. Typically we’ll ask that you can commit to having this ready within a month of submitting the issue. - -Once you submit a draft, we’ll return a first set of edits within 5 business days. - -The typical turnaround time from issue creation to going live on the developer blog is ~4 to 6 weeks. - -### What happens after my blog is published? - -We’ll share the blog on the dbt Labs social media channels! We also encourage you to share on the dbt Slack in #i-made-this. - -### What if my post doesn’t get approved? - -We want to publish as many community contributors as possible, but not every post will be a fit for the Developer Blog. That’s ok! There are many different reasons why we might not be able to publish a post right now and none of them reflect on the quality of the proposed post. - -- **dbt Labs capacity**: We’re committed to providing hands-on feedback and coaching throughout the process. Our goal is not just to generate great developer blogs - it’s to help build a community of great writers / practitioners who can share their knowledge with the community for years to come. This necessarily means we will be able to take on a lower absolute number of posts in the short term, but will hopefully be helpful for the community long term. -- **Focus on narrative / problem solving - not industry trends**: The developer blog exists, primarily, to tell the stories of analytics engineering practitioners and how they solve problems. The idea is that reading the developer blog gives a feel for what it is like to be a data practitioner on the ground today. This is not a hard and fast rule, but a good way to approach this is “How I/we solved X problem” rather than “How everyone should solve X problem”. - -We are very interested in stacks, new tools and integrations and will happily publish posts about this - with the caveat that the *focus* of the post should be solving real world problems. Hopefully if you are writing about these, this is something that you have used yourself in a hands on, production implementation. - -- **Right sized scope**: We want to be able to cover a topic in-depth and dig into the nuances. Big topics like “How should you structure your data team” or “How to ensure data quality in your organization” will be tough to cover in the scope of a single post. If you have a big idea - try subdividing it! “How should you structure your data team” could become “How we successfully partnered with our RevOps team on improving lead tracking” and “How to ensure data quality in your organization” might be “How we cleaned up our utm tracking”. - -### What if I need help / have questions: - -- Feel free to post any questions in #community-writers on the dbt Slack. - -## Core Principles - -- 🧑🏻‍🤝‍🧑🏾 The dbt Developer blog is written by humans **- individual analytics professionals sharing their insight with the world. To the extent feasible, a community member posting on the developer blog is not staking an official organizational stance, but something that *they* have learned or believe based on their work. This is true for dbt Labs employees as well. -- 💍 Developer blog content is knowledge rich - these are posts that readers share, bookmark and come back to time and time again. -- ⛹🏼‍♂️ Developer blog content is written by and for *practitioners* - end users of analytics tools (and sometimes people that work with practitioners). 
-- ⭐ Developer blog content is best when it is *the story which the author is uniquely positioned to tell.* Authors are encouraged to consider what insight they have that is specific to them and the work they have done. -- 🏎️ Developer blog content is actionable - readers walk away with a clear sense of how they can use this information to be a more effective practitioner. Posts include code snippets, Loom walkthroughs and hands-on, practical information that can be integrated into daily workflows. -- 🤏 Nothing is too small to share - what you think is simple has the potential to change someone's week. -- 🔮 Developer blog content is present focused —posts tell a story of a thing that you've already done or are actively doing, not something that you may do in the future. diff --git a/website/dbt-versions.js b/website/dbt-versions.js index 42ad3377508..825af8ac6ee 100644 --- a/website/dbt-versions.js +++ b/website/dbt-versions.js @@ -74,12 +74,5 @@ exports.versionedPages = [ * @property {string} firstVersion The first version the category is visible in the sidebar */ exports.versionedCategories = [ - { - category: "Model governance", - firstVersion: "1.5", - }, - { - category: "Build your metrics", - firstVersion: "1.6", - }, + ]; diff --git a/website/docs/docs/build/dimensions.md b/website/docs/docs/build/dimensions.md index 170626ee7cc..5026f4c45cd 100644 --- a/website/docs/docs/build/dimensions.md +++ b/website/docs/docs/build/dimensions.md @@ -67,7 +67,7 @@ semantic_models: type: categorical ``` -Dimensions are bound to the primary entity of the semantic model they are defined in. For example the dimensoin `type` is defined in a model that has `transaction` as a primary entity. `type` is scoped to the `transaction` entity, and to reference this dimension you would use the fully qualified dimension name i.e `transaction__type`. +Dimensions are bound to the primary entity of the semantic model they are defined in. For example the dimension `type` is defined in a model that has `transaction` as a primary entity. `type` is scoped to the `transaction` entity, and to reference this dimension you would use the fully qualified dimension name i.e `transaction__type`. MetricFlow requires that all semantic models have a primary entity. This is to guarantee unique dimension names. If your data source doesn't have a primary entity, you need to assign the entity a name using the `primary_entity` key. It doesn't necessarily have to map to a column in that table and assigning the name doesn't affect query generation. We recommend making these "virtual primary entities" unique across your semantic model. An example of defining a primary entity for a data source that doesn't have a primary entity column is below: diff --git a/website/docs/docs/build/incremental-microbatch.md b/website/docs/docs/build/incremental-microbatch.md index 6d80007e2d8..e1c39e6ae47 100644 --- a/website/docs/docs/build/incremental-microbatch.md +++ b/website/docs/docs/build/incremental-microbatch.md @@ -8,7 +8,7 @@ id: "incremental-microbatch" :::info Microbatch -The `microbatch` strategy is available in beta for [dbt Cloud Versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) and dbt Core v1.9. We have been developing it behind a flag to prevent unintended interactions with existing custom incremental strategies. To enable this feature, set the environment variable `DBT_EXPERIMENTAL_MICROBATCH` to `True` in your dbt Cloud environments or wherever you're running dbt Core. 
+The `microbatch` strategy is available in beta for [dbt Cloud Versionless](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless) and dbt Core v1.9. We have been developing it behind a flag to prevent unintended interactions with existing custom incremental strategies. To enable this feature, [set the environment variable](/docs/build/environment-variables#setting-and-overriding-environment-variables) `DBT_EXPERIMENTAL_MICROBATCH` to `True` in your dbt Cloud environments or wherever you're running dbt Core. Read and participate in the discussion: [dbt-core#10672](https://github.com/dbt-labs/dbt-core/discussions/10672) diff --git a/website/docs/docs/build/incremental-models.md b/website/docs/docs/build/incremental-models.md index 2968496290a..a56246addf3 100644 --- a/website/docs/docs/build/incremental-models.md +++ b/website/docs/docs/build/incremental-models.md @@ -212,11 +212,11 @@ Currently, `on_schema_change` only tracks top-level column changes. It does not ### Default behavior -This is the behavior if `on_schema_change: ignore`, which is set by default, and on older versions of dbt. +This is the behavior of `on_schema_change: ignore`, which is set by default. If you add a column to your incremental model, and execute a `dbt run`, this column will _not_ appear in your target table. -Similarly, if you remove a column from your incremental model, and execute a `dbt run`, this column will _not_ be removed from your target table. +If you remove a column from your incremental model and execute a `dbt run`, `dbt run` will fail. Instead, whenever the logic of your incremental changes, execute a full-refresh run of both your incremental model and any downstream models. diff --git a/website/docs/docs/build/incremental-strategy.md b/website/docs/docs/build/incremental-strategy.md index 30de135b09b..1fb35ba637c 100644 --- a/website/docs/docs/build/incremental-strategy.md +++ b/website/docs/docs/build/incremental-strategy.md @@ -27,7 +27,7 @@ Click the name of the adapter in the below table for more information about supp | Data platform adapter | `append` | `merge` | `delete+insert` | `insert_overwrite` | `microbatch` | |-----------------------|:--------:|:-------:|:---------------:|:------------------:|:-------------------:| | [dbt-postgres](/reference/resource-configs/postgres-configs#incremental-materialization-strategies) | ✅ | ✅ | ✅ | | ✅ | -| [dbt-redshift](/reference/resource-configs/redshift-configs#incremental-materialization-strategies) | ✅ | ✅ | ✅ | | | +| [dbt-redshift](/reference/resource-configs/redshift-configs#incremental-materialization-strategies) | ✅ | ✅ | ✅ | | ✅ | | [dbt-bigquery](/reference/resource-configs/bigquery-configs#merge-behavior-incremental-models) | | ✅ | | ✅ | ✅ | | [dbt-spark](/reference/resource-configs/spark-configs#incremental-models) | ✅ | ✅ | | ✅ | ✅ | | [dbt-databricks](/reference/resource-configs/databricks-configs#incremental-models) | ✅ | ✅ | | ✅ | | diff --git a/website/docs/docs/build/measures.md b/website/docs/docs/build/measures.md index 977b630fada..d60aa3f7e21 100644 --- a/website/docs/docs/build/measures.md +++ b/website/docs/docs/build/measures.md @@ -200,7 +200,7 @@ Parameters under the `non_additive_dimension` will specify dimensions that the m ```yaml semantic_models: - - name: subscription_id + - name: subscriptions description: A subscription table with one row per date for each active user and their subscription plans. 
model: ref('your_schema.subscription_table') defaults: @@ -209,7 +209,7 @@ semantic_models: entities: - name: user_id type: foreign - primary_entity: subscription_table + primary_entity: subscription dimensions: - name: subscription_date @@ -224,21 +224,21 @@ semantic_models: expr: user_id agg: count_distinct non_additive_dimension: - name: metric_time + name: subscription_date window_choice: max - name: mrr description: Aggregate by summing all users' active subscription plans expr: subscription_value agg: sum non_additive_dimension: - name: metric_time + name: subscription_date window_choice: max - name: user_mrr description: Group by user_id to achieve each user's MRR expr: subscription_value agg: sum non_additive_dimension: - name: metric_time + name: subscription_date window_choice: max window_groupings: - user_id @@ -255,15 +255,15 @@ We can query the semi-additive metrics using the following syntax: For dbt Cloud: ```bash -dbt sl query --metrics mrr_by_end_of_month --group-by metric_time__month --order metric_time__month -dbt sl query --metrics mrr_by_end_of_month --group-by metric_time__week --order metric_time__week +dbt sl query --metrics mrr_by_end_of_month --group-by subscription__subscription_date__month --order subscription__subscription_date__month +dbt sl query --metrics mrr_by_end_of_month --group-by subscription__subscription_date__week --order subscription__subscription_date__week ``` For dbt Core: ```bash -mf query --metrics mrr_by_end_of_month --group-by metric_time__month --order metric_time__month -mf query --metrics mrr_by_end_of_month --group-by metric_time__week --order metric_time__week +mf query --metrics mrr_by_end_of_month --group-by subscription__subscription_date__month --order subscription__subscription_date__month +mf query --metrics mrr_by_end_of_month --group-by subscription__subscription_date__week --order subscription__subscription_date__week ``` import SetUpPages from '/snippets/_metrics-dependencies.md'; diff --git a/website/docs/docs/build/metricflow-time-spine.md b/website/docs/docs/build/metricflow-time-spine.md index 50d1d68d0bd..5f16af38023 100644 --- a/website/docs/docs/build/metricflow-time-spine.md +++ b/website/docs/docs/build/metricflow-time-spine.md @@ -150,7 +150,7 @@ final as ( select * from final where date_day > dateadd(year, -4, current_timestamp()) -and date_hour < dateadd(day, 30, current_timestamp()) +and date_day < dateadd(day, 30, current_timestamp()) ``` ### Daily (BigQuery) @@ -180,7 +180,7 @@ select * from final -- filter the time spine to a specific range where date_day > dateadd(year, -4, current_timestamp()) -and date_hour < dateadd(day, 30, current_timestamp()) +and date_day < dateadd(day, 30, current_timestamp()) ``` @@ -265,7 +265,7 @@ final as ( select * from final where date_day > dateadd(year, -4, current_timestamp()) -and date_hour < dateadd(day, 30, current_timestamp()) +and date_day < dateadd(day, 30, current_timestamp()) ``` @@ -296,7 +296,7 @@ select * from final -- filter the time spine to a specific range where date_day > dateadd(year, -4, current_timestamp()) -and date_hour < dateadd(day, 30, current_timestamp()) +and date_day < dateadd(day, 30, current_timestamp()) ``` diff --git a/website/docs/docs/build/snapshots.md b/website/docs/docs/build/snapshots.md index f5321aa626a..dd7a44fd48c 100644 --- a/website/docs/docs/build/snapshots.md +++ b/website/docs/docs/build/snapshots.md @@ -390,29 +390,6 @@ snapshots: -## Snapshot query best practices - -This section outlines some best practices for writing snapshot 
queries: - -- #### Snapshot source data - Your models should then select from these snapshots, treating them like regular data sources. As much as possible, snapshot your source data in its raw form and use downstream models to clean up the data - -- #### Use the `source` function in your query - This helps when understanding data lineage in your project. - -- #### Include as many columns as possible - In fact, go for `select *` if performance permits! Even if a column doesn't feel useful at the moment, it might be better to snapshot it in case it becomes useful – after all, you won't be able to recreate the column later. - -- #### Avoid joins in your snapshot query - Joins can make it difficult to build a reliable `updated_at` timestamp. Instead, snapshot the two tables separately, and join them in downstream models. - -- #### Limit the amount of transformation in your query - If you apply business logic in a snapshot query, and this logic changes in the future, it can be impossible (or, at least, very difficult) to apply the change in logic to your snapshots. - -Basically – keep your query as simple as possible! Some reasonable exceptions to these recommendations include: -* Selecting specific columns if the table is wide. -* Doing light transformation to get data into a reasonable shape, for example, unpacking a blob to flatten your source data into columns. - ## Snapshot meta-fields Snapshot tables will be created as a clone of your source dataset, plus some additional meta-fields*. @@ -498,7 +475,9 @@ Snapshot results: -This section is for users on dbt versions 1.8 and earlier. To configure snapshots in versions 1.9 and later, refer to [Configuring snapshots](#configuring-snapshots). The latest versions use an updated snapshot configuration syntax that optimizes performance. +For information about configuring snapshots in dbt versions 1.8 and earlier, select **1.8** from the documentation version picker, and it will appear in this section. + +To configure snapshots in versions 1.9 and later, refer to [Configuring snapshots](#configuring-snapshots). The latest versions use an updated snapshot configuration syntax that optimizes performance. diff --git a/website/docs/docs/cloud/connect-data-platform/about-connections.md b/website/docs/docs/cloud/connect-data-platform/about-connections.md index 89dd13808ec..6497e86de89 100644 --- a/website/docs/docs/cloud/connect-data-platform/about-connections.md +++ b/website/docs/docs/cloud/connect-data-platform/about-connections.md @@ -88,7 +88,7 @@ Please consider the following actions, as the steps you take will depend on the - Normalization - - Undertsand how new connections should be created to avoid local overrides. If you currently use extended attributes to override the warehouse instance in your production environment - you should instead create a new connection for that instance, and wire your production environment to it, removing the need for the local overrides + - Understand how new connections should be created to avoid local overrides. If you currently use extended attributes to override the warehouse instance in your production environment - you should instead create a new connection for that instance, and wire your production environment to it, removing the need for the local overrides - Create new connections, update relevant environments to target these connections, removing now unecessary local overrides (which may not be all of them!) 
- Test the new wiring by triggering jobs or starting IDE sessions diff --git a/website/docs/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb.md b/website/docs/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb.md index 4719095b87f..5be802cae77 100644 --- a/website/docs/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb.md +++ b/website/docs/docs/cloud/connect-data-platform/connect-redshift-postgresql-alloydb.md @@ -118,7 +118,7 @@ Once the connection is saved, a public key will be generated and displayed for t To configure the SSH tunnel in dbt Cloud, you'll need to provide the hostname/IP of your bastion server, username, and port, of your choosing, that dbt Cloud will connect to. Review the following steps: - Verify the bastion server has its network security rules set up to accept connections from the [dbt Cloud IP addresses](/docs/cloud/about-cloud/access-regions-ip-addresses) on whatever port you configured. -- Set up the user account by using the bastion servers instance's CLI, The following example uses the username `dbtcloud:` +- Set up the user account by using the bastion servers instance's CLI, The following example uses the username `dbtcloud`: ```shell sudo groupadd dbtcloud diff --git a/website/docs/docs/cloud/manage-access/audit-log.md b/website/docs/docs/cloud/manage-access/audit-log.md index 4d07afe2cde..a7be86a7f99 100644 --- a/website/docs/docs/cloud/manage-access/audit-log.md +++ b/website/docs/docs/cloud/manage-access/audit-log.md @@ -62,7 +62,7 @@ The audit log supports various events for different objects in dbt Cloud. You wi | Auth Provider Changed | auth_provider.Changed | Authentication provider settings changed | | Credential Login Succeeded | auth.CredentialsLoginSucceeded | User successfully logged in with username and password | | SSO Login Failed | auth.SsoLoginFailed | User login via SSO failed | -| SSO Login Succeeded | auth.SsoLoginSucceeded | User successfully logged in via SSO +| SSO Login Succeeded | auth.SsoLoginSucceeded | User successfully logged in via SSO | ### Environment @@ -93,7 +93,7 @@ The audit log supports various events for different objects in dbt Cloud. You wi | ------------- | ----------------------------- | ------------------------------ | | Group Added | user_group.Added | New Group successfully created | | Group Changed | user_group.Changed | Group settings changed | -| Group Removed | user_group.Changed | Group successfully removed | +| Group Removed | user_group.Removed | Group successfully removed | ### User @@ -149,12 +149,65 @@ The audit log supports various events for different objects in dbt Cloud. 
You wi ### Credentials -| Event Name | Event Type | Description | -| -------------------------------- | ----------------------------- | -------------------------------- | +| Event Name | Event Type | Description | +| -------------------------------- | ----------------------------- | -----------------------| | Credentials Added to Project | credentials.Added | Project credentials added | | Credentials Changed in Project | credentials.Changed | Credentials changed in project | | Credentials Removed from Project | credentials.Removed | Credentials removed from project | + +### Git integration + +| Event Name | Event Type | Description | +| -------------------------------- | ----------------------------- | -----------------------| +| GitLab Application Changed | gitlab_application.changed | GitLab configuration in dbt Cloud changed | + +### Webhooks + +| Event Name | Event Type | Description | +| -------------------------------- | ----------------------------- | -----------------------| +| Webhook Subscriptions Added | webhook_subscription.added | New webhook configured in settings | +| Webhook Subscriptions Changed | webhook_subscription.changed | Existing webhook configuration altered | +| Webhook Subscriptions Removed | webhook_subscription.removed | Existing webhook deleted | + + +### Semantic Layer + +| Event Name | Event Type | Description | +| -------------------------------- | ----------------------------- | -----------------------| +| Semantic Layer Config Added | semantic_layer_config.added | Semantic Layer config added | +| Semantic Layer Config Changed | semantic_layer_config.changed | Semantic Layer config (not related to credentials) changed | +| Semantic Layer Config Removed | semantic_layer_config.removed | Semantic Layer config removed | +| Semantic Layer Credentials Added | semantic_layer_credentials.added | Semantic Layer credentials added | +| Semantic Layer Credentials Changed| semantic_layer_credentials.changed | Semantic Layer credentials changed. 
Does not trigger semantic_layer_config.changed| +| Semantic Layer Credentials Removed| semantic_layer_credentials.removed | Semantic Layer credentials removed | + +### Extended attributes + +| Event Name | Event Type | Description | +| -------------------------------- | ----------------------------- | -----------------------| +| Extended Attribute Added | extended_attributes.added | Extended attribute added to a project | +| Extended Attribute Changed | extended_attributes.changed | Extended attribute changed or removed | + + +### Account-scoped personal access token + +| Event Name | Event Type | Description | +| -------------------------------- | ----------------------------- | -----------------------| +| Account Scoped Personal Access Token Created | account_scoped_pat.created | An account-scoped PAT was created | +| Account Scoped Personal Access Token Deleted | account_scoped_pat.deleted | An account-scoped PAT was deleted | + +### IP restrictions + +| Event Name | Event Type | Description | +| -------------------------------- | ----------------------------- | -----------------------| +| IP Restrictions Toggled | ip_restrictions.toggled | IP restrictions feature enabled or disabled | +| IP Restrictions Rule Added | ip_restrictions.rule.added | IP restriction rule created | +| IP Restrictions Rule Changed | ip_restrictions.rule.changed | IP restriction rule edited | +| IP Restrictions Rule Removed | ip_restrictions.rule.removed | IP restriction rule deleted | + + + ## Searching the audit log You can search the audit log to find a specific event or actor, which is limited to the ones listed in [Events in audit log](#events-in-audit-log). The audit log successfully lists historical events spanning the last 90 days. You can search for an actor or event using the search bar, and then narrow your results using the time window. diff --git a/website/docs/docs/core/connect-data-platform/snowflake-setup.md b/website/docs/docs/core/connect-data-platform/snowflake-setup.md index 266840cafae..b692ba5c0d6 100644 --- a/website/docs/docs/core/connect-data-platform/snowflake-setup.md +++ b/website/docs/docs/core/connect-data-platform/snowflake-setup.md @@ -211,7 +211,7 @@ my-snowflake-db: -### SSO Authentication +### SSO authentication To use SSO authentication for Snowflake, omit a `password` and instead supply an `authenticator` config to your target. `authenticator` can be one of 'externalbrowser' or a valid Okta URL. @@ -332,7 +332,7 @@ my-snowflake-db: -### SSO Authentication +### SSO authentication To use SSO authentication for Snowflake, omit a `password` and instead supply an `authenticator` config to your target. `authenticator` can be one of 'externalbrowser' or a valid Okta URL. @@ -421,6 +421,30 @@ my-snowflake-db: Refer to the [Snowflake docs](https://docs.snowflake.com/en/sql-reference/parameters.html#label-allow-id-token) for info on how to enable this feature in your account. +### OAuth authorization + +To learn how to configure OAuth in Snowflake, refer to their [documentation](https://docs.snowflake.com/en/user-guide/oauth-snowflake-overview). Your Snowflake admin needs to generate an [OAuth token](https://community.snowflake.com/s/article/HOW-TO-OAUTH-TOKEN-GENERATION-USING-SNOWFLAKE-CUSTOM-OAUTH) for your configuration to work. + +Provide the OAUTH_REDIRECT_URI in Snowflake:`http://localhost:PORT_NUMBER`. For example, `http://localhost:8080`. 
+ +Once your Snowflake admin has configured OAuth, add the following to your `profiles.yml` file: + +```yaml + +my-snowflake-db: + target: dev + outputs: + dev: + type: snowflake + account: [account id] + + # The following fields are retrieved from the Snowflake configuration + authenticator: oauth + oauth_client_id: [OAuth client id] + oauth_client_secret: [OAuth client secret] + token: [OAuth refresh token] +``` + ## Configurations The "base" configs for Snowflake targets are shown below. Note that you should also specify auth-related configs specific to the authentication method you are using as described above. diff --git a/website/docs/docs/core/connect-data-platform/teradata-setup.md b/website/docs/docs/core/connect-data-platform/teradata-setup.md index 7b964b23b3d..f4ffbe37f35 100644 --- a/website/docs/docs/core/connect-data-platform/teradata-setup.md +++ b/website/docs/docs/core/connect-data-platform/teradata-setup.md @@ -8,7 +8,7 @@ meta: github_repo: 'Teradata/dbt-teradata' pypi_package: 'dbt-teradata' min_core_version: 'v0.21.0' - cloud_support: Not Supported + cloud_support: Supported min_supported_version: 'n/a' slack_channel_name: '#db-teradata' slack_channel_link: 'https://getdbt.slack.com/archives/C027B6BHMT3' @@ -18,6 +18,7 @@ meta: Some core functionality may be limited. If you're interested in contributing, check out the source code in the repository listed in the next section. + import SetUpPages from '/snippets/_setup-pages-intro.md'; @@ -26,17 +27,17 @@ import SetUpPages from '/snippets/_setup-pages-intro.md'; ## Python compatibility -| Plugin version | Python 3.9 | Python 3.10 | Python 3.11 | -| -------------- | ----------- | ----------- | ------------ | -|1.0.0.x | ✅ | ❌ | ❌ -|1.1.x.x | ✅ | ✅ | ❌ -|1.2.x.x | ✅ | ✅ | ❌ -|1.3.x.x | ✅ | ✅ | ❌ -|1.4.x.x | ✅ | ✅ | ✅ -|1.5.x | ✅ | ✅ | ✅ -|1.6.x | ✅ | ✅ | ✅ -|1.7.x | ✅ | ✅ | ✅ -|1.8.x | ✅ | ✅ | ✅ +| Plugin version | Python 3.9 | Python 3.10 | Python 3.11 | Python 3.12 | +|----------------|------------|-------------|-------------|-------------| +| 1.0.0.x | ✅ | ❌ | ❌ | ❌ | +| 1.1.x.x | ✅ | ✅ | ❌ | ❌ | +| 1.2.x.x | ✅ | ✅ | ❌ | ❌ | +| 1.3.x.x | ✅ | ✅ | ❌ | ❌ | +| 1.4.x.x | ✅ | ✅ | ✅ | ❌ | +| 1.5.x | ✅ | ✅ | ✅ | ❌ | +| 1.6.x | ✅ | ✅ | ✅ | ❌ | +| 1.7.x | ✅ | ✅ | ✅ | ❌ | +| 1.8.x | ✅ | ✅ | ✅ | ✅ | ## dbt dependent packages version compatibility @@ -46,6 +47,8 @@ import SetUpPages from '/snippets/_setup-pages-intro.md'; | 1.6.7 | 1.6.7 | 1.1.1 | 1.1.1 | | 1.7.x | 1.7.x | 1.1.1 | 1.1.1 | | 1.8.x | 1.8.x | 1.1.1 | 1.1.1 | +| 1.8.x | 1.8.x | 1.2.0 | 1.2.0 | +| 1.8.x | 1.8.x | 1.3.0 | 1.3.0 | ### Connecting to Teradata diff --git a/website/docs/docs/dbt-cloud-apis/discovery-use-cases-and-examples.md b/website/docs/docs/dbt-cloud-apis/discovery-use-cases-and-examples.md index b99853cd547..e095374343f 100644 --- a/website/docs/docs/dbt-cloud-apis/discovery-use-cases-and-examples.md +++ b/website/docs/docs/dbt-cloud-apis/discovery-use-cases-and-examples.md @@ -25,7 +25,7 @@ For performance use cases, people typically query the historical or latest appli It’s helpful to understand how long it takes to build models (tables) and tests to execute during a dbt run. Longer model build times result in higher infrastructure costs and fresh data arriving later to stakeholders. Analyses like these can be in observability tools or ad-hoc queries, like in a notebook. - +
Example query with code diff --git a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md index 9178d1e6592..d9ce3bf4fd1 100644 --- a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md +++ b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md @@ -519,7 +519,7 @@ select * from {{ semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'], group_by=[Dimension('metric_time')], limit=10, - order_by=[-'order_gross_profit']) + order_by=['-order_gross_profit']) }} ``` diff --git a/website/docs/docs/dbt-cloud-environments.md b/website/docs/docs/dbt-cloud-environments.md index 6efbd0e36f0..3aa54b4aaed 100644 --- a/website/docs/docs/dbt-cloud-environments.md +++ b/website/docs/docs/dbt-cloud-environments.md @@ -40,7 +40,7 @@ To create a new dbt Cloud development environment: To use the dbt Cloud IDE or dbt Cloud CLI, each developer will need to set up [personal development credentials](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud#get-started-with-the-cloud-ide) to your warehouse connection in their **Profile Settings**. This allows you to set separate target information and maintain individual credentials to connect to your warehouse. - + ## Deployment environment diff --git a/website/docs/docs/dbt-versions/release-notes.md b/website/docs/docs/dbt-versions/release-notes.md index 010042ea49f..c3f0bbfbe06 100644 --- a/website/docs/docs/dbt-versions/release-notes.md +++ b/website/docs/docs/dbt-versions/release-notes.md @@ -18,6 +18,13 @@ Release notes are grouped by month for both multi-tenant and virtual private clo \* The official release date for this new format of release notes is May 15th, 2024. Historical release notes for prior dates may not reflect all available features released earlier this year or their tenancy availability. +## November 2024 +- **Fix**: This update improves [dbt Semantic Layer Tableau integration](/docs/cloud-integrations/semantic-layer/tableau) making query parsing more reliable. Some key fixes include: + - Error messages for unsupported joins between saved queries and ALL tables. + - Improved handling of queries when multiple tables are selected in a data source. + - Fixed a bug when an IN filter contained a lot of values. + - Better error messaging for queries that can't be parsed correctly. + ## October 2024 diff --git a/website/docs/docs/dbt-versions/upgrade-dbt-version-in-cloud.md b/website/docs/docs/dbt-versions/upgrade-dbt-version-in-cloud.md index 35758d46afd..875dbba0161 100644 --- a/website/docs/docs/dbt-versions/upgrade-dbt-version-in-cloud.md +++ b/website/docs/docs/dbt-versions/upgrade-dbt-version-in-cloud.md @@ -41,7 +41,7 @@ Configure your project to use a different dbt Core version than what's configure Each job in dbt Cloud can be configured to inherit parameters from the environment it belongs to. - + The example job seen in the screenshot above belongs to the environment "Prod". It inherits the dbt version of its environment as shown by the **Inherited from ENVIRONMENT_NAME (DBT_VERSION)** selection. You may also manually override the dbt version of a specific job to be any of the current Core releases supported by Cloud by selecting another option from the dropdown. 
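A note on the `sl-jdbc.md` hunk earlier in this diff: the fix moves the minus sign inside the quoted name, because descending order in the Semantic Layer JDBC interface is expressed as part of the order key string itself. A minimal sketch contrasting the two orderings, reusing the metric and dimension names from that hunk (ascending is the assumed default when no sign is given):

```sql
-- ascending (default): bare metric name
select * from {{
semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
    group_by=[Dimension('metric_time')],
    limit=10,
    order_by=['order_gross_profit'])
}}

-- descending: the minus sign sits inside the string, not outside the quotes
select * from {{
semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
    group_by=[Dimension('metric_time')],
    limit=10,
    order_by=['-order_gross_profit'])
}}
```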
diff --git a/website/docs/docs/deploy/ci-jobs.md b/website/docs/docs/deploy/ci-jobs.md index 12d880d1543..7ab7f65796d 100644 --- a/website/docs/docs/deploy/ci-jobs.md +++ b/website/docs/docs/deploy/ci-jobs.md @@ -95,11 +95,15 @@ Automatically test your semantic nodes (metrics, semantic models, and saved quer To do this, add the command `dbt sl validate --select state:modified+` in the CI job. This ensures the validation of modified semantic nodes and their downstream dependencies. + + +#### Benefits - Testing semantic nodes in a CI job supports deferral and selection of semantic nodes. - It allows you to catch issues early in the development process and deliver high-quality data to your end users. - Semantic validation executes an explain query in the data warehouse for semantic nodes to ensure the generated SQL will execute. - For semantic nodes and models that aren't downstream of modified models, dbt Cloud defers to the production models +### Set up semantic validations in your CI job To learn how to set this up, refer to the following steps: 1. Navigate to the **Job setting** page and click **Edit**. diff --git a/website/docs/docs/deploy/deploy-environments.md b/website/docs/docs/deploy/deploy-environments.md index 088ecb0d841..dd9d066d545 100644 --- a/website/docs/docs/deploy/deploy-environments.md +++ b/website/docs/docs/deploy/deploy-environments.md @@ -29,7 +29,7 @@ We highly recommend using the `Production` environment type for the final, sourc To create a new dbt Cloud deployment environment, navigate to **Deploy** -> **Environments** and then click **Create Environment**. Select **Deployment** as the environment type. The option will be greyed out if you already have a development environment. - + ### Set as production environment diff --git a/website/docs/docs/deploy/job-commands.md b/website/docs/docs/deploy/job-commands.md index abea687f4db..09517262e93 100644 --- a/website/docs/docs/deploy/job-commands.md +++ b/website/docs/docs/deploy/job-commands.md @@ -28,7 +28,7 @@ Every job invocation automatically includes the [`dbt deps`](/reference/commands **Job outcome** — During a job run, the built-in commands are "chained" together. This means if one of the run steps in the chain fails, then the next commands aren't executed, and the entire job fails with an "Error" job status. - + ### Checkbox commands diff --git a/website/docs/docs/deploy/run-visibility.md b/website/docs/docs/deploy/run-visibility.md index 255882d066f..77db0e65fbb 100644 --- a/website/docs/docs/deploy/run-visibility.md +++ b/website/docs/docs/deploy/run-visibility.md @@ -33,7 +33,7 @@ An example of a completed run with a configuration for a [job completion trigger You can view or download in-progress and historical logs for your dbt runs. This makes it easier for the team to debug errors more efficiently. - + ### Lineage tab diff --git a/website/docs/reference/commands/version.md b/website/docs/reference/commands/version.md index 2ed14117828..3847b3cd593 100644 --- a/website/docs/reference/commands/version.md +++ b/website/docs/reference/commands/version.md @@ -13,7 +13,7 @@ The `--version` command-line flag returns information about the currently instal ## Versioning To learn more about release versioning for dbt Core, refer to [How dbt Core uses semantic versioning](/docs/dbt-versions/core#how-dbt-core-uses-semantic-versioning). -If using [versionless dbt Cloud](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless), then `dbt_version` uses the latest (continuous) release version. 
This also follows semantic versioning guidelines, using the `YYYY.xx.yy` format, where the year is the major version (for example, `2024.04.1234`) +If using [versionless dbt Cloud](/docs/dbt-versions/upgrade-dbt-version-in-cloud#versionless), then `dbt_version` uses the latest (continuous) release version. This also follows semantic versioning guidelines, using the `YYYY.MM.DD+` format. The year, month, and day represent the date the version was built (for example, `2024.10.28+996c6a8`). The suffix provides an additional unique identification for each build. ## Example usages diff --git a/website/docs/reference/global-configs/cache.md b/website/docs/reference/global-configs/cache.md index 1a74fef8d30..03f33286aa4 100644 --- a/website/docs/reference/global-configs/cache.md +++ b/website/docs/reference/global-configs/cache.md @@ -6,7 +6,7 @@ sidebar: "Cache" ### Cache population -At the start of runs, dbt caches metadata about all the objects in all the schemas where it might materialize resources (such as models). By default, dbt populates the cache with information on all schemas related to the project. +At the start of runs, dbt caches metadata about all the objects in all the schemas where it might materialize resources (such as models). By default, dbt populates the relational cache with information on all schemas related to the project. There are two ways to optionally modify this behavior: - `POPULATE_CACHE` (default: `True`): Whether to populate the cache at all. To skip cache population entirely, use the `--no-populate-cache` flag or `DBT_POPULATE_CACHE: False`. Note that this does not _disable_ the cache; missed cache lookups will run queries, and update the cache afterward. @@ -26,3 +26,11 @@ Or, to improve speed and performance while focused on developing Salesforce mode dbt --cache-selected-only run --select salesforce ``` + +### Logging relational cache events + +import LogLevel from '/snippets/_log-relational-cache.md'; + + diff --git a/website/docs/reference/global-configs/logs.md b/website/docs/reference/global-configs/logs.md index 972a731854d..682b9fc8393 100644 --- a/website/docs/reference/global-configs/logs.md +++ b/website/docs/reference/global-configs/logs.md @@ -137,11 +137,11 @@ You can use either of these parameters to ensure clean output that's compatible ### Logging relational cache events -The `LOG_CACHE_EVENTS` config allows detailed logging for [relational cache](/reference/global-configs/cache) events, which are disabled by default. +import LogLevel from '/snippets/_log-relational-cache.md'; -```text -dbt --log-cache-events compile -``` +relational cache} +/> ### Color diff --git a/website/docs/reference/macro-properties.md b/website/docs/reference/macro-properties.md index 91a616ded0d..69a66f308d9 100644 --- a/website/docs/reference/macro-properties.md +++ b/website/docs/reference/macro-properties.md @@ -19,6 +19,7 @@ macros: [description](/reference/resource-properties/description): [docs](/reference/resource-configs/docs): show: true | false + [meta](/reference/resource-configs/meta): {} arguments: - name: [type](/reference/resource-properties/argument-type): diff --git a/website/docs/reference/resource-configs/meta.md b/website/docs/reference/resource-configs/meta.md index 53a4f77184e..e1542bdbc82 100644 --- a/website/docs/reference/resource-configs/meta.md +++ b/website/docs/reference/resource-configs/meta.md @@ -56,7 +56,7 @@ See [configs and properties](/reference/configs-and-properties) for details. 
```yml version: 2 -sources: +[sources](/reference/source-properties): - name: model_name config: meta: {} @@ -110,7 +110,7 @@ version: 2 snapshots: - name: snapshot_name config: - meta: {} + [meta](/reference/snapshot-properties): {} columns: - name: column_name @@ -147,7 +147,7 @@ The `meta` config is not currently supported for analyses. ```yml version: 2 -macros: +[macros](/reference/macro-properties): - name: macro_name meta: {} @@ -287,7 +287,7 @@ models: ```yml version: 2 -sources: +[sources](/reference/source-properties): - name: salesforce tables: diff --git a/website/docs/reference/resource-configs/snowflake-configs.md b/website/docs/reference/resource-configs/snowflake-configs.md index b95b79241ba..7bef180e3d3 100644 --- a/website/docs/reference/resource-configs/snowflake-configs.md +++ b/website/docs/reference/resource-configs/snowflake-configs.md @@ -678,3 +678,27 @@ Per the [Snowflake documentation](https://docs.snowflake.com/en/sql-reference/in >- DDL operations. >- DML operations (for tables only). >- Background maintenance operations on metadata performed by Snowflake. + + + +## Pagination for object results + +By default, when dbt encounters a schema with up to 100,000 objects, it will paginate the results from `show objects` at 10,000 per page for up to 10 pages. + +Environments with more than 100,000 objects in a schema can customize the number of results per page and the page limit using the following [flags](/reference/global-configs/about-global-configs) in the `dbt_project.yml`: + +- `list_relations_per_page` — The number of relations on each page (Max 10k as this is the most Snowflake allows). +- `list_relations_page_limit` — The maximum number of pages to include in the results. + +For example, if you wanted to include 10,000 objects per page and include up to 100 pages (1 million objects), configure the flags as follows: + + +```yml + +flags: + list_relations_per_page: 10000 + list_relations_page_limit: 100 + +``` + + \ No newline at end of file diff --git a/website/docs/reference/resource-properties/constraints.md b/website/docs/reference/resource-properties/constraints.md index 63582974040..6ba20db090f 100644 --- a/website/docs/reference/resource-properties/constraints.md +++ b/website/docs/reference/resource-properties/constraints.md @@ -65,7 +65,7 @@ models: - type: unique - type: foreign_key to: ref('other_model_name') - to_columns: other_model_column + to_columns: [other_model_column] - type: ... ``` diff --git a/website/docs/reference/resource-properties/schema.md b/website/docs/reference/resource-properties/schema.md index 017d93e3235..6b5ba66ff8f 100644 --- a/website/docs/reference/resource-properties/schema.md +++ b/website/docs/reference/resource-properties/schema.md @@ -10,7 +10,7 @@ datatype: schema_name ```yml version: 2 -sources: +[sources](/reference/source-properties): - name: database: schema: @@ -25,7 +25,7 @@ sources: ## Definition The schema name as stored in the database. -This parameter is useful if you want to use a source name that differs from the schema name. +This parameter is useful if you want to use a [source](/reference/source-properties) name that differs from the schema name. :::info BigQuery terminology diff --git a/website/snippets/_log-relational-cache.md b/website/snippets/_log-relational-cache.md new file mode 100644 index 00000000000..4249030f94e --- /dev/null +++ b/website/snippets/_log-relational-cache.md @@ -0,0 +1,5 @@ +
The `LOG_CACHE_EVENTS` config allows detailed logging for {props.event}, which are disabled by default.
+ +```text +dbt --log-cache-events compile +``` diff --git a/website/snippets/_sl-partner-links.md b/website/snippets/_sl-partner-links.md index aaefcc77747..7d08323239b 100644 --- a/website/snippets/_sl-partner-links.md +++ b/website/snippets/_sl-partner-links.md @@ -22,6 +22,20 @@ The following tools integrate with the dbt Semantic Layer: body="Connect to Microsoft Excel to query metrics and collaborate with your team. Available for Excel Desktop or Excel Online." icon="excel"/> +
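For context on the new `_log-relational-cache.md` snippet: in the `cache.md` and `logs.md` hunks above, the MDX markup that renders it was garbled in this diff — only the `import LogLevel from '/snippets/_log-relational-cache.md';` line and the closing fragment `relational cache} />` survive. A sketch of the likely usage, inferred from the snippet's `{props.event}` placeholder; the exact prop value is an assumption:

```jsx
import LogLevel from '/snippets/_log-relational-cache.md';

{/* assumed prop value — the snippet interpolates `event` into its first sentence */}
<LogLevel event={<>relational cache events</>} />
```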
diff --git a/website/static/img/icons/white/dot-ai.svg b/website/static/img/icons/white/dot-ai.svg new file mode 100644 index 00000000000..d0223968caa --- /dev/null +++ b/website/static/img/icons/white/dot-ai.svg @@ -0,0 +1,33441 @@