From 3c32c92c0b55c6fbfdcb1af06f151ee54ed180c6 Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Wed, 22 Nov 2023 07:32:41 -0500
Subject: [PATCH 001/204] adjust docs config page

---
 .../docs/reference/resource-configs/docs.md | 50 +++++++++++++++++--
 1 file changed, 47 insertions(+), 3 deletions(-)

diff --git a/website/docs/reference/resource-configs/docs.md b/website/docs/reference/resource-configs/docs.md
index c5e35dd64f4..b188753010f 100644
--- a/website/docs/reference/resource-configs/docs.md
+++ b/website/docs/reference/resource-configs/docs.md
@@ -20,6 +20,9 @@ default_value: {show: true}

+You can configure `docs` behavior for many resources at once by setting it in `dbt_project.yml`. You can also use the `docs` config in `properties.yaml` files to set or override documentation behaviors for specific resources:
+
+
@@ -35,6 +38,18 @@ models:
+
+
```yml
models:
  [<resource-path>](/reference/resource-configs/resource-path):
    +docs:
      show: true | false

```
+
+
@@ -45,6 +60,20 @@ This property is not implemented for sources.

+You can use the `docs` property in YAML files, including `dbt_project.yml`:
+
+
```yml
seeds:
  [<resource-path>](/reference/resource-configs/resource-path):
    +docs:
      show: true | false

```
+
+
@@ -61,6 +90,20 @@ seeds:

+You can use the `docs` property in YAML files, including `dbt_project.yml`:
+
+
```yml
snapshots:
  [<resource-path>](/reference/resource-configs/resource-path):
    +docs:
      show: true | false

```
+
+
@@ -77,6 +120,9 @@ snapshots:

+You can use the `docs` property in YAML files, _except_ in `dbt_project.yml`. Refer to [Analysis properties](/reference/analysis-properties) for more info.
+
@@ -93,9 +139,7 @@ analyses:

-
+You can use the `docs` property in YAML files, _except_ in `dbt_project.yml`. Refer to [Macro properties](/reference/macro-properties) for more info.

From 68626d13b1706f017fab73d024c5434ab4dc6899 Mon Sep 17 00:00:00 2001
From: Victor Rgez <52705438+victorrgez@users.noreply.github.com>
Date: Sun, 26 Nov 2023 12:16:57 +0100
Subject: [PATCH 002/204] Update init.md

Clarifications about running dbt init --profile

---
 website/docs/reference/commands/init.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/website/docs/reference/commands/init.md b/website/docs/reference/commands/init.md
index e9cc2ccba4e..7c5448f7482 100644
--- a/website/docs/reference/commands/init.md
+++ b/website/docs/reference/commands/init.md
@@ -19,11 +19,11 @@ Then, it will:

-When using `dbt init` to initialize your project, include the `--profile` flag to specify an existing `profiles.yml` as the `profile:` key to use instead of creating a new one. For example, `dbt init --profile`.
+When using `dbt init` to initialize your project, include the `--profile` flag to specify an existing profile in `profiles.yml` to use as the `profile:` key instead of creating a new one. For example, `dbt init --profile profile_name`.

-If the profile does not exist in `profiles.yml` or the command is run inside an existing project, the command raises an error.
+If the profile does not exist in `profiles.yml` or the command is run inside an existing project (that is, if `dbt_project.yml` already exists), the command raises an error.
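To make the `--profile` flag concrete, here is a minimal usage sketch; the project and profile names are illustrative placeholders rather than values from the patch:

```shell
# Reuse the existing "snowflake_prod" profile from ~/.dbt/profiles.yml
# instead of answering the interactive profile-setup prompts:
dbt init my_new_project --profile snowflake_prod

# Raises an error: "no_such_profile" is not defined in profiles.yml.
# Running dbt init from inside an existing dbt project errors as well.
dbt init my_new_project --profile no_such_profile
```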
From d71d7171a63eb5c87af4b2c74d150daa255871fa Mon Sep 17 00:00:00 2001
From: Victor Rgez <52705438+victorrgez@users.noreply.github.com>
Date: Sun, 26 Nov 2023 12:27:03 +0100
Subject: [PATCH 003/204] change version versionblock

---
 website/docs/reference/commands/init.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/reference/commands/init.md b/website/docs/reference/commands/init.md
index 7c5448f7482..2f7be339e39 100644
--- a/website/docs/reference/commands/init.md
+++ b/website/docs/reference/commands/init.md
@@ -17,7 +17,7 @@ Then, it will:
 - Create a new folder with your project name and sample files, enough to get you started with dbt
 - Create a connection profile on your local machine. The default location is `~/.dbt/profiles.yml`. Read more in [configuring your profile](/docs/core/connect-data-platform/connection-profiles).

-
+

 When using `dbt init` to initialize your project, include the `--profile` flag to specify an existing profile in `profiles.yml` to use as the `profile:` key instead of creating a new one. For example, `dbt init --profile profile_name`.
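Patch 003 above changes the version bound on the `VersionBlock` that wraps the `--profile` paragraph. For orientation, a hedged sketch of how this MDX component gates docs content by product version; the `firstVersion` value shown is illustrative, not necessarily the one this commit set:

```jsx
<VersionBlock firstVersion="1.6">

This paragraph renders only for readers who have selected dbt v1.6 or newer;
a `lastVersion` prop similarly caps the range from above.

</VersionBlock>
```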
From 2d1bcf55ce6a6572a8932fe03b5f7fb1d386a81f Mon Sep 17 00:00:00 2001
From: Victor Rgez <52705438+victorrgez@users.noreply.github.com>
Date: Mon, 27 Nov 2023 20:23:37 +0100
Subject: [PATCH 004/204] Undo extra changes init.md

Resolved comments on PR

---
 website/docs/reference/commands/init.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/website/docs/reference/commands/init.md b/website/docs/reference/commands/init.md
index 2f7be339e39..7211fc6fb15 100644
--- a/website/docs/reference/commands/init.md
+++ b/website/docs/reference/commands/init.md
@@ -17,13 +17,13 @@ Then, it will:
 - Create a new folder with your project name and sample files, enough to get you started with dbt
 - Create a connection profile on your local machine. The default location is `~/.dbt/profiles.yml`. Read more in [configuring your profile](/docs/core/connect-data-platform/connection-profiles).

-
+

 When using `dbt init` to initialize your project, include the `--profile` flag to specify an existing profile in `profiles.yml` to use as the `profile:` key instead of creating a new one. For example, `dbt init --profile profile_name`.

-If the profile does not exist in `profiles.yml` or the command is run inside an existing project (that is, if `dbt_project.yml` already exists), the command raises an error.
+If the profile does not exist in `profiles.yml` or the command is run inside an existing project, the command raises an error.

From 54b430a9f32f21139de64d9b1cb2a5da10abda4c Mon Sep 17 00:00:00 2001
From: Talla
Date: Tue, 5 Dec 2023 07:27:32 +0530
Subject: [PATCH 005/204] Updated as per dbt-teradata 1.7.0

---
 .../docs/core/connect-data-platform/teradata-setup.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/website/docs/docs/core/connect-data-platform/teradata-setup.md b/website/docs/docs/core/connect-data-platform/teradata-setup.md
index 1a30a1a4a54..4f467968716 100644
--- a/website/docs/docs/core/connect-data-platform/teradata-setup.md
+++ b/website/docs/docs/core/connect-data-platform/teradata-setup.md
@@ -38,6 +38,7 @@ import SetUpPages from '/snippets/_setup-pages-intro.md';
 |1.4.x.x | ❌ | ✅ | ✅ | ✅ | ✅ | ✅
 |1.5.x | ❌ | ✅ | ✅ | ✅ | ✅ | ✅
 |1.6.x | ❌ | ❌ | ✅ | ✅ | ✅ | ✅
+|1.7.x | ❌ | ❌ | ✅ | ✅ | ✅ | ✅

 ## dbt dependent packages version compatibility

@@ -45,6 +46,7 @@
 | dbt-teradata | dbt-core | dbt-teradata-util | dbt-util |
 |--------------|------------|-------------------|----------------|
 | 1.2.x | 1.2.x | 0.1.0 | 0.9.x or below |
 | 1.6.7 | 1.6.7 | 1.1.1 | 1.1.1 |
+| 1.7.0 | 1.7.3 | 1.1.1 | 1.1.1 |

 ### Connecting to Teradata

@@ -172,6 +174,8 @@ For using cross DB macros, teradata-utils as a macro namespace will not be used,
 | Cross-database macros | type_string | :white_check_mark: | custom macro provided |
 | Cross-database macros | last_day | :white_check_mark: | no customization needed, see [compatibility note](#last_day) |
 | Cross-database macros | width_bucket | :white_check_mark: | no customization
+| Cross-database macros | generate_series | :white_check_mark: | custom macro provided
+| Cross-database macros | date_spine | :white_check_mark: | no customization

 #### examples for cross DB macros

From 4899176188f93c5cd9f6aad0b5a846513ca06403 Mon Sep 17 00:00:00 2001
From: Przemek Denkiewicz
Date: Tue, 5 Dec 2023 11:25:52 +0100
Subject: [PATCH 006/204] Add oauth_console authentication to Starburst/Trino

---
 .../core/connect-data-platform/trino-setup.md | 33 +++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/website/docs/docs/core/connect-data-platform/trino-setup.md b/website/docs/docs/core/connect-data-platform/trino-setup.md
index a7dc658358f..354e95ef03d 100644
--- a/website/docs/docs/core/connect-data-platform/trino-setup.md
+++ b/website/docs/docs/core/connect-data-platform/trino-setup.md
@@ -30,7 +30,7 @@ The parameters for setting up a connection are for Starburst Enterprise, Starbur

 ## Host parameters

-The following profile fields are always required except for `user`, which is also required unless you're using the `oauth`, `cert`, or `jwt` authentication methods.
+The following profile fields are always required except for `user`, which is also required unless you're using the `oauth`, `oauth_console`, `cert`, or `jwt` authentication methods.

 | Field | Example | Description |
 | --------- | ------- | ----------- |

@@ -71,6 +71,7 @@ The authentication methods that dbt Core supports are:

 - `jwt` — JSON Web Token (JWT)
 - `certificate` — Certificate-based authentication
 - `oauth` — Open Authentication (OAuth)
+- `oauth_console` — Open Authentication (OAuth) with the authentication URL printed to the console
 - `none` — None, no authentication

 Set the `method` field to the authentication method you intend to use for the connection. For a high-level introduction to authentication in Trino, see [Trino Security: Authentication types](https://trino.io/docs/current/security/authentication-types.html).
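As context for the `method` field before the per-method tabs below, here is a minimal `profiles.yml` sketch. The profile name, host, and credentials are placeholders, and the field layout mirrors the OAuth examples added later in this patch; `ldap` stands in for whichever method the connection uses:

```yaml
my_trino_project:
  target: dev
  outputs:
    dev:
      type: trino
      method: ldap               # the authentication method for this connection
      user: dbt_user
      password: "<ldap_password>"
      host: trino.example.com
      catalog: dbt_target
      schema: dbt_dev
      port: 443
```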
@@ -85,6 +86,7 @@ Click on one of these authentication methods for further details on how to confi
   {label: 'JWT', value: 'jwt'},
   {label: 'Certificate', value: 'certificate'},
   {label: 'OAuth', value: 'oauth'},
+  {label: 'OAuth (console)', value: 'oauth_console'},
   {label: 'None', value: 'none'},
 ]}
 >
@@ -269,7 +271,34 @@ sandbox-galaxy:
       host: bunbundersders.trino.galaxy-dev.io
       catalog: dbt_target
       schema: dataders
-      port: 433
+      port: 443
 ```
+
+
+
+The only authentication parameter to set for OAuth 2.0 is `method: oauth_console`. If you're using Starburst Enterprise or Starburst Galaxy, you must enable OAuth 2.0 in Starburst before you can use this authentication method.
+
+For more information, refer to both [OAuth 2.0 authentication](https://trino.io/docs/current/security/oauth2.html) in the Trino docs and the [README](https://github.com/trinodb/trino-python-client#oauth2-authentication) for the Trino Python client.
+
+The only difference between `oauth_console` and `oauth` is that with `oauth` a browser is automatically opened with the authentication URL, while with `oauth_console` the URL is printed to the console.
+
+It's recommended that you install `keyring` to cache the OAuth 2.0 token over multiple dbt invocations by running `python -m pip install 'trino[external-authentication-token-cache]'`. The `keyring` package is not installed by default.
+
+#### Example profiles.yml for OAuth (console)
+
+```yaml
+sandbox-galaxy:
+  target: oauth_console
+  outputs:
+    oauth_console:
+      type: trino
+      method: oauth_console
+      host: bunbundersders.trino.galaxy-dev.io
+      catalog: dbt_target
+      schema: dataders
+      port: 443
+```

From 3096821d42ab367d1bcb1e3ddfd5c10973257fb1 Mon Sep 17 00:00:00 2001
From: Ly Nguyen
Date: Wed, 13 Dec 2023 16:29:39 -0800
Subject: [PATCH 007/204] Partial parsing in dbt Cloud

---
 .../74-Dec-2023/partial-parsing.md            | 15 ++++
 .../release-notes/75-Nov-2023/repo-caching.md |  2 +-
 website/docs/reference/parsing.md             |  6 ++-
 website/snippets/_cloud-environments-info.md  | 50 +++++++++++-------
 .../docs/deploy/example-account-settings.png  | Bin 0 -> 61502 bytes
 .../img/docs/deploy/example-repo-caching.png  | Bin 47561 -> 0 bytes
 6 files changed, 52 insertions(+), 21 deletions(-)
 create mode 100644 website/docs/docs/dbt-versions/release-notes/74-Dec-2023/partial-parsing.md
 create mode 100644 website/static/img/docs/deploy/example-account-settings.png
 delete mode 100644 website/static/img/docs/deploy/example-repo-caching.png

diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/partial-parsing.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/partial-parsing.md
new file mode 100644
index 00000000000..eb224d5b845
--- /dev/null
+++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/partial-parsing.md
@@ -0,0 +1,15 @@
+---
+title: "New: Native support for partial parsing"
+description: "December 2023: For faster run times with your dbt invocations, configure dbt Cloud to parse only the changed files in your project."
+sidebar_label: "New: Native support for partial parsing"
+sidebar_position: 09
+tags: [Dec-2023]
+date: 2023-12-14
+---
+
+By default, dbt parses all the files in your project at the beginning of every dbt invocation. Depending on the size of your project, this operation can take a long time to complete. With the new partial parsing feature in dbt Cloud, you can reduce the time it takes for dbt to parse your project. When enabled, dbt Cloud parses only the changed files in your project instead of parsing all the project files. As a result, your dbt invocations will take significantly less time to run.
+
+To learn more, refer to [Partial parsing](/docs/deploy/deploy-environments#partial-parsing).
+
+

diff --git a/website/docs/docs/dbt-versions/release-notes/75-Nov-2023/repo-caching.md b/website/docs/docs/dbt-versions/release-notes/75-Nov-2023/repo-caching.md
index 7c35991e961..eff15e96cfd 100644
--- a/website/docs/docs/dbt-versions/release-notes/75-Nov-2023/repo-caching.md
+++ b/website/docs/docs/dbt-versions/release-notes/75-Nov-2023/repo-caching.md
@@ -11,4 +11,4 @@ Now available for dbt Cloud Enterprise plans is a new option to enable Git repos

 To learn more, refer to [Repo caching](/docs/deploy/deploy-environments#git-repository-caching).

-
\ No newline at end of file
+
\ No newline at end of file

diff --git a/website/docs/reference/parsing.md b/website/docs/reference/parsing.md
index 1a68ba0d476..8205f93d013 100644
--- a/website/docs/reference/parsing.md
+++ b/website/docs/reference/parsing.md
@@ -41,7 +41,7 @@ The [`PARTIAL_PARSE` global config](/reference/global-configs/parsing) can be en

 Parse-time attributes (dependencies, configs, and resource properties) are resolved using the parse-time context. When partial parsing is enabled, and certain context variables change, those attributes will _not_ be re-resolved, and are likely to become stale.

-In particular, you may see **incorrect results** if these attributes depend on "volatile" context variables, such as [`run_started_at`](/reference/dbt-jinja-functions/run_started_at), [`invocation_id`](/reference/dbt-jinja-functions/invocation_id), or [flags](/reference/dbt-jinja-functions/flags). These variables are likely (or even guaranteed!) to change in each invocation. We _highly discourage_ you from using these variables to set parse-time attributes (dependencies, configs, and resource properties).
+In particular, you may see **incorrect results** if these attributes depend on "volatile" context variables, such as [`run_started_at`](/reference/dbt-jinja-functions/run_started_at), [`invocation_id`](/reference/dbt-jinja-functions/invocation_id), or [flags](/reference/dbt-jinja-functions/flags). These variables are likely (or even guaranteed!) to change in each invocation. dbt Labs _strongly discourages_ you from using these variables to set parse-time attributes (dependencies, configs, and resource properties).

 Starting in v1.0, dbt _will_ detect changes in environment variables. It will selectively re-parse only the files that depend on that [`env_var`](/reference/dbt-jinja-functions/env_var) value. (If the env var is used in `profiles.yml` or `dbt_project.yml`, a full re-parse is needed.) However, dbt will _not_ re-render **descriptions** that include env vars. If your descriptions include frequently changing env vars (this is highly uncommon), we recommend that you fully re-parse when generating documentation: `dbt --no-partial-parse docs generate`.

@@ -51,7 +51,9 @@ If certain inputs change between runs, dbt will trigger a full re-parse. The res

 - `dbt_project.yml` content (or `env_var` values used within)
 - installed packages
 - dbt version
-- certain widely-used macros, e.g. [builtins](/reference/dbt-jinja-functions/builtins) overrides or `generate_x_name` for `database`/`schema`/`alias`
+- certain widely-used macros (for example, [builtins](/reference/dbt-jinja-functions/builtins), overrides, or `generate_x_name` for `database`/`schema`/`alias`)
+
+If you're triggering [CI](/docs/deploy/continuous-integration) job runs, the partial parsing benefits are not available on a new pull request (PR) or new branch. However, they are available on subsequent commits to that new PR or branch.

 If you ever get into a bad state, you can disable partial parsing and trigger a full re-parse by setting the `PARTIAL_PARSE` global config to false, or by deleting `target/partial_parse.msgpack` (e.g. by running `dbt clean`).

diff --git a/website/snippets/_cloud-environments-info.md b/website/snippets/_cloud-environments-info.md
index 6e096b83750..e89a5e0abc2 100644
--- a/website/snippets/_cloud-environments-info.md
+++ b/website/snippets/_cloud-environments-info.md
@@ -34,24 +34,6 @@ Both development and deployment environments have a section called **General Set
 - If you select a current version with `(latest)` in the name, your environment will automatically install the latest stable version of the minor version selected.
 :::

-### Git repository caching
-
-At the start of every job run, dbt Cloud clones the project's Git repository so it has the latest versions of your project's code and runs `dbt deps` to install your dependencies.
-
-For improved reliability and performance on your job runs, you can enable dbt Cloud to keep a cache of the project's Git repository. So, if there's a third-party outage that causes the cloning operation to fail, dbt Cloud will instead use the cached copy of the repo so your jobs can continue running as scheduled.
-
-dbt Cloud caches your project's Git repo after each successful run and retains it for 8 days if there are no repo updates. It caches all packages regardless of installation method and does not fetch code outside of the job runs.
-
-To enable Git repository caching, select **Account settings** from the gear menu and enable the **Repository caching** option.
-
-
-
-:::note
-
-This feature is only available on the dbt Cloud Enterprise plan.
-
-:::
-
 ### Custom branch behavior

 By default, all environments will use the default branch in your repository (usually the `main` branch) when accessing your dbt code. This is overridable within each dbt Cloud Environment using the **Default to a custom branch** option. This setting will have slightly different behavior depending on the environment type:

@@ -92,3 +74,35 @@ schema: dbt_alice
 threads: 4
 ```

+### Git repository caching
+
+At the start of every job run, dbt Cloud clones the project's Git repository so it has the latest versions of your project's code and runs `dbt deps` to install your dependencies.
+
+For improved reliability and performance on your job runs, you can enable dbt Cloud to keep a cache of the project's Git repository. So, if there's a third-party outage that causes the cloning operation to fail, dbt Cloud will instead use the cached copy of the repo so your jobs can continue running as scheduled.
+
+dbt Cloud caches your project's Git repo after each successful run and retains it for 8 days if there are no repo updates. It caches all packages regardless of installation method and does not fetch code outside of the job runs.
+
+To enable Git repository caching, select **Account settings** from the gear menu and enable the **Repository caching** option.
+
+
+:::note
+
+This feature is only available on the dbt Cloud Enterprise plan.
+
+:::
+
+### Partial parsing
+
+At the start of every dbt invocation, dbt reads all the files in your project, extracts information, and constructs an internal manifest containing every object (model, source, macro, and so on). Among other things, it uses the `ref()`, `source()`, and `config()` macro calls within models to set properties, infer dependencies, and construct your project's DAG. When dbt finishes parsing your project, it stores the internal manifest in a file called `partial_parse.msgpack`.
+
+Parsing projects can be time-consuming, especially for large projects (for example, a project with hundreds of models and thousands of files). To reduce the time it takes dbt to parse your project, use the partial parsing feature in dbt Cloud for your environment. When enabled, dbt Cloud uses the `partial_parse.msgpack` file to determine which files have changed (if any) since the project was last parsed. Then, instead of parsing all project files, it _only_ parses the changed files or the files related to those changes.
+
+The partial parsing feature does have some known limitations. Refer to [Known limitations](/reference/parsing#known-limitations) to learn more about them.
+
+To enable partial parsing, select **Account settings** from the gear menu and enable the **Partial parsing** option.
+
+

diff --git a/website/static/img/docs/deploy/example-account-settings.png b/website/static/img/docs/deploy/example-account-settings.png
new file mode 100644
index 0000000000000000000000000000000000000000..12b8d9bc49f15bbe2e766d064b26617bd11c7177
GIT binary patch
literal 61502
[binary image data omitted]
diff --git a/website/static/img/docs/deploy/example-repo-caching.png b/website/static/img/docs/deploy/example-repo-caching.png
deleted file mode 100644
index 805d845dccb2f9d979a12cbbd43cb79858090685..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

literal 47561
[binary image data omitted]
z{{6<5lmcaA!@JzNaZTShWquf`t?!e%%$lyxvr?dJlcljiw;AL-Qzuqp9%fp)bG||a{t(Cxq^QF8hWj<`jv0fe&5FZe?I^Hb*}F&oj*)n z+Og^PxE*>YHXe$l%G|qnS}J8112s^fo1>McIGl3^&7APu2=c(fc7xS{>%qaL`s2($ zfBe8^{iitO=Nn&-LeV?B0@w0Bh|+IAUi$7vr~i1xR`Yv<_Rp7=58>y2zASkse&ol? z2S(P=9YKNXo$&vE+y7;SAYOyT6ICyCD(d`m$`8i!+xzM+Q9hIAUbE;aFVg<#mJSr)eVgQ;9$%c?sMo2kER=Ps7FrthWm zK=`!}2*`{Kz+_^9hOW2ucj@0NDwvah{CRor)K?CbNw!JxNo$Uv58u_B(F<0055eO% z`m$GA#!`knApiYyZ#{4o27O-~=eSjtp}mN+7ubbE+Rne;d5pM7bEvevdcjyvOS+J1 zJJ5akmRaYbivB)Q5+qIMz$F=!xzV|Sg19^D{DB-nFG?=hO)D;}n`QFuAW`C^V<=rc=>xC zw2H+|!o`bGL8!N)pO(9Y8noXXk-rLIWOOEw1=9%w>-I>o&u03fd zk^&j=-hGYhw-L2VT+Cf4eTozk)9$i)^7Gk=yca??+AQFz!L`Tv1v*`a>ocm$_JiL! zaA!8Y6KKckRU&*QyQtJ>33fFtz-=z#D&k6^LO3nvFc46th2&!Sc_J;cNwc?q6rTWl z6LQu^4_?GumlHgK-*}auJ@QyCHs^Zki-B!U$j(EoUqC*KU#Wh30|to;Eh(?G3Rc}t zdy0i9DV&s`PtAJd)u=Uf<>nRDzfASpoLK=v_oDDxiL6D|BkPyoGcO~X2+CWt0mjC$ zzVhb<7To=UvGi0P?&ON(HY=#0oKZ?g{0Lv$J9K)4oxLUMeO~6o;hc_Pe8Lj-fTuDH zqER0`NxrIAcIwO;G3rSS-e-}(=TE}d=E-bO6$bGqW*Ie)?CeX-9M>DP@o_lI)Y8p{ zruDTh1$v~zKx-=2-qM^fdsSwg7H*DcmmWn!Hb>oK8cZylkh4$UR19&;yXGZV(GmjM za{ElpEOjJ(W4a!NC}*Iyre50~OdT$gKebu`W$yZ}nll~a4ce5&&Be8@5gaRMN{r}v z$zo|#+Qp5O{iZ6rD(}{@x}jPvOG^}Yd5;-7U8ZsoT{0{$V?FZiO|u>YddIZDk^iX4 z#Ft)W85O+oy&^-dg@~zcMbtM-X2U}TzE^2L7zR#Ko~)~cQRWY)B~TFQ^E6E%jYv}R&w}S}Q(mOFyR|iqSC1-xDD83KnF~p`@343&>GO=F z;k_g`?|pd7MDhMP9oC|k?NniKmTg?1{F@PUy4p~xFJpw8o4IPY&|@Dnta*UZ3cG2h zr+?@8a{U}F!Y8dv z8m9g=a-MTXU8W`m;kndB-%7vS!leblpz1qu&R_khh4}U?5p`J-NqjwHq0|fP+6+fn zjn=zWCLbM=KCSz?Z(g_DdeA3u)W+{dg|ngQh?gKHtK#~_V&n2#Bx_Q~uCnmq?h@D0 zJ$pWN&#yB+)Gu{n=Mhk16P0s+s10F1R|$kF-fL>GM&(zWAGf^YvoPRV!jGofZM9-) z$h7TRzG7L=318R91q0wmxfj|QC(x;i$g#NW z=MT$|agfD5Uo;;+tra>3d$zN^2aCXX4Pvw@S+{6@4*W6N2&k;wiZMS)zA)dgLA&;C z;XE!!OLbM;)BRlyI9Z;`rQ+hE&e~)c+;KTrJrV40YRHxbpIJb14x!Tnw`G=_{r@g} zHZ&nM+@tyD0!v(9>b{(tduxTOEkQVOFfi<(n0d`-QU)2k!*uBiYO0ct-Z5Sn)TFpl z(t)&b6|#V8-dNk$OLaG6_z!Mv2g8@@SkJ27-oNF`A6c{n&o!=g3B|piEpMR}l$DH< zS8f&L7yDs?h8f#@TZ-uP(0JwfNUgANxlLD=q#Z8drrt@lJGJeCRJvcORoMG3T#HqOT z?Q>iKd5S&qgw+Y0;MNCQyqHOe3-X(1SrzsEhx4P)dL{(N>o#R6y4j{%Go=E)Bf5Uj zdPYOY>d6HSF_J0qg;AwgkgW!*^Zkta=v-Y9~ zV~va{k8xZ38+ujYL~d@Sgp%1#@%05*fbRx#7glk_H=3Eh{Ycf3u~6+tY<<7^dsbWF z969!u`aQgR@b4T7;an2>rtd?t05s_nJBqB;M_nj{!X7S%fT zB<5i`S4)}ysQs1cp9N<&DBLm(i?yW{Twc9(bv@rS9jF=tJ1@;4f@w zri!Q{hNAo^pOU=HsEtCcYJ`~Qqs%cSKW8CLAp)ay&oiM8>a_3Vy-V962X}0(3!{oo zSJ1ppNBPtMzmYw8%iIuW?E5XYaGT$QEBE+J+nK#^`woRb>TM3CK%d3<@X^6HvqfIr zyZiI)JJSez_&w{KP?;P`%J1g8fGsMfX=%y36{nZpG8>(^pO2sZSYl7_G(Wk&=h3m& zm1bC>9&r8m)D8rFCBscj&*qK;$h& z_O-s}Ls!Y>q`5e)dwk+ZzEeS=!5y~8?$VOwFNpMO@lCq&4n%6X^l>WD$Fmx=86A(O zMZBx2iwt&J)0%X>uM*3L@UG^wxc-v!sxmN#(j_BY2Pa=qeqKTW?$f-EQ48(C*WHdL z92bpLU39rzhwH?+BuYlK=U|%?`$?gZ7Y!P;w{og{r$=_-Uq*W?b12zv+31#`(j1{l zzTXN(M%!w0g1$Izo0_^9aOJe8NRPPg`B6?m>nwwyG}#@eNA|=@U&VaOo_toAQsGly zxP96slBhy&18-d*aPb`pTdr(LFfeY?I`COh{EGL+DKGb#$+kK7OIK}ziOwj8C+7M> zUZ{z=lAz@6JdZpV*O(0i~b$2 z@0%$4ZxO0H0kr}`y7_bK6w%Ep$m5-aPj?aAvL!aWit_&rx31_a>gytvAS9DH)i zukmotQ*bIK@x%E-b04y9V-w{wW3OzabDmsZ5(v^ap7&m0L#O*~?WW<=O=bRQ=*lM( zHO%e~*9};X!|ul8Y)64kdTtsnT3|2VSx~9uWsWb6j3U22C7F~C*Lp&$V2amB)^id3 zm}(0>vrz_pE@yX}UwhDa32V0+2(I}Dr-y>!rEct!K5P5dceV3{qAPM&p-Mq` zdCJudQFx>kxWhoX{h0u0es}f@NiW=z)V#L6+r`MI{qnwFR+LG*_|Q^Ur0z9nmkQXG zB!%Q74hW?5Q}5G9c6nXQXCf#MR9J{9tk>e&tKT3>l%*>ykfu9MT5~Em#p+V!6^uoV28cGC1S4hq@HR-QLKDg>I%)GrXg+2@#s~hT1hp1X z+k$tw8|G~7r@QibS8t5ajnrcqs6F4S!9ZUOJU0nRR()L}ySERW?q)I=L_C(UeOko4 z9~MY$BYxRj;;H;Y=&T}=Swk213?wU?pqwn$S=A1+%_q>GJ~moayP1iC@YTCrT2ppG zhEgUgT~62NC6SjVVftKse;Oc_Y70gix*iOs*P$L(bDrgs_tzyQ=jN5?Nv|pT8fRu_ zeGhH{WO|>~$Rmq+=COGovhp%lPguB{c1XIiz%j-H_s_BKD%6jAQE)(sb|k8GbL4Rd 
ztsvit{93x{Ehd0Q3Szv9Z0g?-?;Yta98IPw=<~80ICsOskDCmvcj9T{JrD1>U+wcb zOtFVnk*mYc-!r;GMh8Na&b*e|Nl%(bpK$KuBQ(C~xsRt?c>pKE-+#6N%s4rY{vF zC*{w3o5)<0(&YHNk0Y66?ykd>`8_1vsASr46n&)3GhBAECur(-qiHi@ljfh=RyGw* zqun46Q8!e{GgC%9HyGVtFJNbds4c3(SuWTNPhmh}Y{B3`w;s;egF$`+c5zF_>mtQ& zG0?ASK4NMm<}pZIMgmV>y2J3Uyd?apZLdkb#-Q%!i`5o#58w{-#S9{rU{q`%Oljy+ zP9J0VSu^<2#TJ#yBwc!|pwTqXw3Qc>IN{J=FHu{S9nu*UTOW1+6xE~iaJY`^PVzW| zLXWA#XY5v`kq+?E5|2#B;>`pdzc*dIeCpkSBSmMgQ5t*N=wdl zLM-?W7A=@pZ`j$Z4>2={K3$zAN0@IXo{a@Pf=lsKesnw|wQxo;E}v|Qm>2*FOD*Ro z@_@7)hNj0WmTjL;i3}B~9nO>nM}V5-&qU-gy6~s~cUxFUI&Clu)nVrB_s#qA6ex0u z@gWf>mKI7WQjoaXg%Ka)AoOh1lfy(T2ol}>LGs?OLC&?65TnEbC`kM%9ep%qDcIzQC0P7E1Az#*HWjiQ<^Z~ z72v|Q6ve(H#~AKqaedaQ^v@xrJMwaSd7T@qq4{5U+v9~M6;M4DRrz$IDkIYZV7%B+ zO?#)(thVl<(GBPP%6DQW$K$NDeZ^RC;$(p%?Osv7LC~Gn3x(8ym&w*PJ(1@d$xE*n zG}=4jaZWOo6+PGKP&Bh*EtJ?dZP%J+Gb*-C$&r=Ch&m!G#cYhy{LgQk_D5R($clg; zs4B=SN)WBzlFFpNyCh6k2~w2M$H}Ax=qP~tv4zHtw`U#1*3ypjg81$SEBTxBj9IQ2 zqj!<&eFPL*xR=8K3AtGvdy4b#prjn=+KMmY%Z1LTOjDV^5U#Y!Wmi%u@*V5*q&0mD zv#QjvAfXfzO#_t?^ThY!Ba9+zkGA7VXoiIhLp@(|D20p+PN-)DZpW8IEXuF6CMJ!n zyw3o+<+MfGOwQn10-4=IfqWNAOa@goMfx~+9s|qTtlnTvak;KvQt9PXLdKv32$XT@ zWLCwRl6-YgPJnAe*GLT`gtPB#<=|`ph$%F^%Q?5{hA_cnUX)tYK|BgdjwC*bSCvN{Pna#X&qM9|e?Z%??0twBR*2 zH|uhkhpgGU)rla{>3GrsA=J@V7Nkxss|5{%@FJL%Fzk=~eDdWocQop%aEO@JqcPZn zoy9&-6o>G>IVcD5@J!aO4P^ivM&rqkbKw+}n5$xnl3S~ng4_ips+WYM#=guhav}k6IXQ!CobRt+ zs-x>o(1U|Ect>hkf7c6Be%rk(V?@N&NxV_sbEf6t+MXY<9}JzoOA5dm4^*$nVSE-R zty zc?akkw6X6Fc7F+u-B!J_M4vk%rK4ZX33IE1lUL2XnKwpz zx_}|`Jwt#ffHiyqnD3B`88P3{8SBL{wWXMKO(5j_4ni#R9j+A@0n!!+tVx#yC3QtM zIq;9j8+@_?Q>dqBN%d$ zU3b*))0`7i2=G+^Z~W6wa^N!&rzp_3{M)2<^mtr`-9NCtZ$iJ6QF+f6{idsojDCcE z#0I6lGVvdRN4zaJMu|Vxz_&qOKSC#Bouakh>(W1*5jvw^Dxo~9uY2A+A!U``OgXbe zkNrU$Oh6}ZTCpm$x&9%ItY11JJg^lwNjVGvC1WxMM<$@(Nd{T=bOc}Gld6AH2}a2U z7U4HAqD5ySxgYy4&jl&SLlF18B<0L_+eP2~^#bO+s@kz!@42%xHny<7DM$W1dHM}M zXXu5S=TQyI{6at8etEn>JM09Sb@jUjvAt`9qVG(IZ#V7z@wOdMr!ygAFwGx7 zic6r=p)wGb+jlR?_8^g?QM`g^2Sd{9!~=}cQSJY}@1QnBiPN%u>(;WlTfvvh>$}RH z%aW|bOzPyu^T#q|AyU7rQ0Wqc5*|*U6CeZ~lJ}tu+Ef;Rc$T2~mR7F8FOA>GG+*?e zYb2l}$>LvreK-w(f1}GWU-9J;XX_3g+}bGUcbIcEQwzL1&b(l=(wR^cR-1C(>B5|| z1gF3@C3>>Vy`%ds2X(3}$)6Rt;;!TJmKyM!+by*=kd;Fk?LY=}i*}1S=h| zSA-9rV{W;V&J4(U`z||1h>T)AQz*xyeRbm#F!E;zWhDIGrl30KED=KOM4aL6Fp>}< zh{%1|n`T~${j~g$cTeQMPpkYIKRVyU#vrhvar<`@Z6RkB!JO9BVqp`a-SOSQWp=A0 zyNF4I-H|r!DH%;RgNyf>2F-U)?FSIF*woSH)sZ%5laZK2QNk_!hHr9B!Y^te^|qml zJJ}-8>+{kX>D(T9)@-#`8EU&jp{y{dhlW!(s#wWFg>(BmN}oShNmYQf`wgw|YP9&Q z+4!lORy~I2N?-c7j_us+aHR|DtZ%Wav-!-kv*{y$rdb){=X*^|uDUN|yWr_ynqqqOT9m<@DgTIuQ7AErr8 zVx=k(Pq20A%tq%5m?KZp=7p_;o&R25;(I_EJ^n>;FF_ic|L?hvyDv~TiRKGGd;-)T zFlm+?pP&`k_P)7iW_X=`UUjy$ZC1=LL|KvTPh5ve|=5^p>SI^k5b1EMEaYY#oj4*o#Ka=zPa)IL$ zExOoX4_|g)4e5%B(Y7b_*ms68;XT(9{7ra$G0^R?k4C&F_>W0c_Mv@pl=F^9>H zuF|xol&3$hDtBxpa$;LMl0#|IO`J3M)C6a0j-u3cYR`(rt&U?ELGPaZ{L2|t8H}iC z5yI{``T}^k%4_K5)sS|*r3j;5o{7nv)|oVypASDx(?MW7lKXK3NxS`gq5B|KErpGAc`o5mas2Y=b4v^;=Q=#lxi^gcW*2U)(Fi0xw^ zAK?L)2;vc^|M=q+bo%S`_lm?OF#w^605JK5_ZE5Lf!|%BKbjlqoP4`GOfC#)?l<*a zm_lF~xGxMjZ(eeETJLz?g1! 
zR>d71<4p6pIj)zn0o}_cJhy?w|8+V%b`4zt4CD$M%;#{RGuz~UrtzWcN2^pL|b6I0Y|yXuTVUDH!MB<^|D2!WP( z_ibAB{-kmc&|KU75js^3BD!_XUmob=E#1BprAM1&&kw5LPvV3PD zUm0WXS>*{p?P&*o80DC+K*=j@!#y1)WN#l(Luz0|uLRWk&oMGihmMs?y#z}7FNNMY zY$9obvfj|8u=32kd(iD9FV}ZyMz#dN;=9s1a|jfg!bjelGjHR17;Rn6;V|0oSpdv3 z*>4n)cEV<}kVCNP7Yho10e83tUjfS2ehFD*AW1*K8#9?==3`L>&n51#m;jUXHyaNh zif~%nx6jsg;?tM>q^j*lA1Vk&wv$!uasY5BmrDY*GEfwe(%6>rGy6elDg7U!-w@Xi zEUIxoe};a>Du`masW+|Lrv|Y1i}scq04{evX;W*O=zd9QSZzKWCg`*{30tS^j}1-6 zE$y|cHn9+($1->nogTMb`o%qfdY`72pYKFrFFz;ZK1pnM;Q{5pOJ#nA;at7SEvmL5 zd?COzwUxHonm^$8=&QmnrjC!2g7jU*6g9?|l^*^bpsG^a@87Ba>U_?4L;WfKi$FAA zUmTr`GeE;H4N`QN?(j>qsB zEKg;qs4cn^zi(ftkVdke{_4kN)!%#W{OT~?i+g?>C?BWtFWi0XB<;!`QV0#Jj*xrt z#$)Lq*4+7g46-!V5W;xM(Xjhh6*z=Ww~rCic*!jf^{I7Vn99{CRnh1dpZ`_88ty?I z{d5)Y5mVHU50*^xv2f zs>o#rraL|qaM;Ic>Tg zV;~;bSX6tF&B;L0z1LJ+L-qK@^QnRZNx>;Sq`N+M7Z8ArHKn$IdhE(9R%f5l|{uxt`2V^K?YN#BOI!h`gQ@K74W*s z8|CnEv*ns2)L)F1PCGzDl+r0A5^uW)SBWl(z3qTOPH>fMNP0_NLZ7?u^mi&~#HVDW zCg}Vb(toT1q<{>4OA}k4D^6T<2o7S{WyP`HHax*`&bdFI7!s*2h*Cg$_4Y3!UnEOk z`BEKO36N+qhO$VKro0J^oH%bdT4S?W{^VNYucA$XJMc?2%~R6we}}8_C&(WMNk;mU z9yIJi*so@rVgv!(<^Wb6K?RljfYtmLFYSoIW!2!QkY7@>wde5M(@iF3_7dB0>py-C zUNsIN|CjnD?ESxwm2v4&OTe} zm-&$Qri0s!0xO)r7IGE`g)sT|!SBQEQAS8lU&3PpfsxYuuTGg*~MN6Ykw)kI8j z{#|?+cJ42~{j9rmSUY7?m>)Xbb~29mhq%OCQa5K#{q9aqjFd;J{3c(?Fn`*sh;#K{ z;V_?btc$%Gid2T<@TY~r0ayRU_jNAAoZf8gH-8pcCB2}Mv9qW)xO2YeV!VNIiDjo` z!2E7c`&9VM$YWJ3++pUqqzWtr)kh>|^W7hL{D@EbKAxy2W;@hcOJW5ITD{Gr-Ybpu zpMm>l^Zm!2ftUIul=Atapn%mnNE^rQioKFsgDAmxVYaGCQuf8g`xTv7AMUjVlftLJn zo|r=&?45YQ-j%ki8`hCTtRnd=5xBhu$fsHiRx=Gui)ieE)zM9(jM?ad)>dI#A*>fg zOu@KvS8sHva%21~@d*_kY;@E2)?dDzsX&OnW758jv zP2fD;z@vQaOI|rK>6zXxjAjl|%f!zXoIH(vxxH|=hDLj73;?}nycxQFpT2f`6C0-#%iOMdKfGiZh??1^C0QS8BcNk@Vt)RK!H}l0T z!=lJ)DDJwUWbUtm+n)B? zc=*Sby@$sD3wl%D&5-$6*r~s8RFKPKX=020;)~GHw!K&q@6rBzK>W5e&W>L6H+PQW z&^#+ZpmwJDY`Y#Z!FzO`GBK|7%h~1%7%N+vsNWQ>BTcyDL6HNhZmTuqmIc4Gj;ZB& zu8mJZ5g6#n{R6Trn6^$!3_jGT<&zbSJtxb)H7T|ttmN@AkP5}b#q63Q*r9r+QR|7U z!HMF-rX`m30z3rl>dWdhnC5rOZ>WOHLqtVz%>}Q7O=Pkr`vXHFBxO>{PKsIlDBmrQ zVht>mZ{5BOsPQg?QDwqvhXQ+F zuNG2)KWP1JQbJ%QDzq>FSajkVSMRyyWRv~923(&n5|r)LLzAAg+Bq^p(olIL!GTxD zo|t-^8{6b8xeC#^>RpwEl@wJ_g7L?em3gR8wi458LHgFYLrJ9(XO`8b47_42kl*^a zy*9-Z{$c=)&iL$TFzxQ>BidP1C?Z_aE0rvRjB2+-ZqAfS85z){*pK66^D7Fk$J_4m zn#%U}c8p}7^M0_ko3pPmSlvir&bi}}ZH8saK4Pc+1BY{FWv3n8-vtz4O_}A$G2Nyw zE1*?Tq$Glv6y>F$aHhJVP`6!S@>}=aTdNz9+w6hcu_p)HTxsT8VsCo}%cK=`a5}JZ z{=lhvDgO`=5pyc0a0tJ#43L?!7f7L!+=-UVd*zeTPudEkE9$*$v&@C7WkFY(xq#1E zVbS=q8H5{XFUgI{9hxoGo3@qM5*T|oHePm;dgAsvr};@Ap&%aiZ8ks#Ed|PvWw4^f`S7=g5j9ItX&=BCF4Ku!}s@F8#QzlI#u>c zfrbzbfL_VoJF+Yy0a0?ABU9|nj~zLZA4(ipZA+=WS=8M@(gl#AIp+M<^j@Y5UBveb zHl|^(kfrmGqyZ$ApVm+SQA#K>FSw`}G~(ssaZE(d3A9NlB&Xy{`L4e;F1jTjmD&H6 zL+OXx3Ch2eOR{iE^QgK%dx4QE;5Rby4SAP2PkmF=pWmx~JSK;|V4=^2E&);Sy{cm& zB62~8$6>tt%w8e-7UVAu69nSoR>?y2Q60R`sv@loW^=D;YidpDjExCq>b$@Sm@o-A zkAl!hzJ%t+HBd05U}A&`K$7cH5X=<>PUK7S3HODlj}fpjkQZ0iYFKh8^@SjB0AsPm zGEvM1kTEuC?Twa<3>9_UX;_J`&-)aXdmEH4QR~gJ!nRX<=39GYv1gB08@dq-nllRCm=6*Eyw_8uXCf zR}F`9oB5>nktA<^R;hG+G?WJHqPexT?i35bn2`udA9*+OnaF+O(A3(NCxXXI(9Xas zylV5+EA5NP)bdQsLY66w7^=zP*uK@ z`86)LG&+Y4r=1h4ns8W`_TfQaSar1DscE#2F8>KsT4;+MuQ6KTqw7;ARObYxQKUVF zTyi;-uv)dfveEZ-EhsZ?r8wQd3lRj?BB2oC4yAqGHM`>GUGH=&q5yQa zdk^UCvP-)y%U>%%H3&&3XDB02pgYWt^zXQfX46u;jBO!f#*HC@@C(*zw!K}GP)cm@ zTZZa>zN1;07Uz98jW1pVZopjLn_NSfIMD~(BH%bT7Y*6ioc#f`^*mDvVVgYDjNy{18BbZz@CC+;rgHh~Ge(OBz<|W`-}09s zans(tk)we@Q`@^4;uTGgy{fq=%ElR%0oP|ZF}t#L#1wt7X`O7{U1wo;R{6hSyD!l~ z35KpOf(@d~dZ*nxQmSuXxt_y<{*}zI$_>q#~mtMJqo|f$&KsJeuqhti60gjA$^S?eEOu^V64xs=uNx3cn3_cf^1O72qP0TwzW15vafcMPhWo@3`xC> 
z(V~sKgc%L=Dgx)sbD2*=K5J3AmVdjwQWfTJZfVoRkqrr>Pu9k)i4|;B^yE zB@nVS8}wV!{I(8up}fyl507~tR%LZG#bSC%-0RikOIx9ua>$Cb7DWW(i^TWhhNBvw z_WI$87Yh4aTp#*)q4wk!y;Su7C>cXS{l8-=Poea8ZctoG)jP^2V>^yO}=cN!nUslXtNgdkQ=Zby#}HaT|z zyOC2+-NhFt;xPJm(dUtSmPW4h^ffbLUt@2mQSTn&Iqlle?T}P-x|qU?#mI8Hg}R&% z|BM^xKYqP5LP)zRsg01Jjjtq{4D^=yA9jgkCowED87q5S|j@Zlmyg4 z()Du6`?`lfe@9Kqw%W;hD3npX_h1zA9~yCFXl;6w;FM~Yvm#atdbe&hi|tvRd$=8K zy}oq*#*Lt7Kk32d6nTLK7g;4|jBg|Nj>%X!69XXNk$B3BL zjj4Zq%4xh}=Tq#O2Lw*9ef&v1PGFBS5AA1TdzsQ*U-Z{Ouf|_~PXi;}|LgyrV_sP& z3!7c@TJ0D&H?MM|7MM9p4eXY@Sq-M!cUiBy3g5V4@ehXLhuDSGxe^5b5?fLq7Yr&? z7-LZExw;zNqX4baG%ImFu0JWkWlOEv;G#uGG$u_ZAqh!GAuJx7n&>&j#0kW^dv)8D z6O;)t?hRH3zq5&lK^dfd2Xz39^@9qwB)zjd0=o8-P?O{YtKDylDomYh#|sOjEvs0d zVViY#J*S@8F2}z;D}f)&O{;f1JC<4K?0$8We{K36aDt?HxtAx6S{lT+4FuTO8lL`- zr?Ru#=GR9-8_D4af!ei^CKq9LB(%y+y-?%|F}dMG#>CWsyZhuFJ44Xkf;Z~wn%Shd zjtt`G9*l{d|BVf58OB5eKg|{XM~h2}SYl5^v@T?8rcrv*7~fzjK{Vz|7#Mmy=p-BM5$+*Re!TH(+!+w~E zNaDe(TE(>KN>xJb%ti)>5-&#RNvpD71^FB3B>eU+{l;K25e7P}U>QMXr%scf|D)JW zbJGQ{RFLIK5ACcvHL7E`rd&sRz|f{vinpkckNwbl9azwKSJYvF_|@I&AA%UkJ~rc^ zHJ7OAe*$gU(R?8C%L}C3(&tGLyrK69$;CGCZ|&wMwgI@Hi68sTGsISGQcMOr;j)k ziP@U%M0Gp38ge0d(8Z(BwYkxnao;$&gP(0**gHS>f(Nf&NC(kJ3i0WI^SGMVj$k;= zy-#R-dvQdpYpDqV<>o8B)}K3xabvgbT#DRUdn7Q1u221g&HBqFTIJExuZ#a-`D$>e zx_!D!SL!`B9#(w;$>8?Kdt~*AO6N%A+%V^`_vdkOPx89S&KcJh>BQUr(J=>n^MYZi zeBR3|0$m8!cM5pHm9^*trme5ah7@I>m@3~&CBQ6Si=v0?^l@hNA+`r&jWeT$PoGxZ z`PuTYN~E!_&$N%$O~dX|$4?XboGMY^_yAG7a0yx9MCRm`IYpo!5NxKEPxsj1Kk8Z< zjpi;~bkyW6z29NSr|mb@)!5k3TtC#$@e4#r zRI4>oNFYrW&D zwVTScXpQB~7t3RT5n&D`^T4A z0U$$$rNPJ@*H!yGQtQjNn;2vyn3)eg1<0SM0jgR_Fz}kdssuyI=>|vp$zZI%GOke| zYuJeTmbCgjpRa^mM?$NF%o}26LHi>5!4JPA3CIJI4=;GQ4`3pyUM2>Xe|Z|C6r(3b zr|oY2F!i$b;9RB8=&L69xte`|`dB4Gf&zhCzTNRdJ1*13_4x(?r*+_jOgG~KA!hYf z&y;y=h>J>I$W$`u39^KjsD5Tz1Fg*t*CgRtU0NMJ+gOLDW@ZAhY5`X?9mPl8S3mo@h`5r1x71HFVSsIt$dU*;ja~4WIk4PS2DrLpZCGx zTKK%@Wx<%cB1(2}q2ZR(9HMMTc)fg2dbTozikm>e6t+Xdol;m!8VYdTF8y$a09d+x zPG0x$SPxg^D(#gAXW#kNv+IABS1;r?042CK~7wO$LHFSmSFu-Fz8TTizF?P3mN; zM*3eNUL>WS7hW0?dkH3<3xkC*9sK~-BAaV!fay6N0VoYYLwL+0(llcV`#4cAkHoz6o<>xmH>Wz8O+ND|4+w9Tae0@8;cEsH54KT%t_xVnEI zpeMngwxbhhhtdS|*1_N_M~1Wm*K?e+gHk{$F_{lqNN5Grk3wLUdl+FeND}W|c$hb? z;3t^X%?W5G-?7^(slPoaS%s1CjO!8m18 zL>=m(w8zr4g6|bT-tJ6?`ww2VnN0#sKWCbvZIBEw4FdQWGkek-NzFi!dj=X7x6K1; zi_%-hASvbgwa2Z%h?W5cPIgAfdCQt$;u4cm^Z0FdfM2?3X)MUt91~oi`HPIr*$I{E z@o#Yj?mpN1ok@PLvO5f2o%D0CfRbKVc__LlKLY&uc2Y6dQm~PPD%|Hvr$vn#;;^_}#bRUfkeGSv zHsCM~_k{Q>JExgCuUBl~G-17<5#TU~L;Vv}4x1Z$9 zL9MGSm6nwDB&kHa(cJj$)y3hxFM%K{rSqkUyb7p?(f6;&fp)7Pycm9HrMLFOsb;Es zq>Gl10lz#Z2sFKuqJ2*ig;ofTe8>R5V>QER{A^7Qx*gtvxPO+tV5d1+z$KmUKKJo_> z!C1!s#pX?B5Or}4pQqHx+Dt3!+<33po`D&}w9RkZ1%zM-#jbCiFAseUzWSHgU%&tH z=aXa4@BPMpU844J>E4%jdftDsSd6bW7m#kzfbRNKU$QPK^!$10*n^1i4JXAI8ChA4 zrm{DRGGJkB58`ythwhl3`}uWSg<+Rt|LVDZ9MoLkvbK#)fAaEJ>(6i963&1Ke5}y^ z`6XS($8FD@$_~E##Iq zl38KvkX)YOYAx=NS<@RYmBXUc(W8;{Jt@V-x%GeFdH*l*B=!-yuABM!8H69$;DC$- zzMNcHpH)I4l-x`VGLPH0B?Squ!MmD?{;Ic~&7G>}0;OhK`^?J6k13bBor&l8m#|D~ zAsRy-Gs#&_UYqw)MFpiUt}zb>zI9|!7mx>(lcFxi25kSAhdNa*cD$6ds$3d4lXLgZ zc-Ze(sG-5D&Y8#0E44In{(9e{VyCcY;T~5iTb65{EDbmlvy^QjNoU3vupF0%`o@`V zYm3Lrr-Gz}#yTF0V08Fg2Ju#QQ7v^QRk*yYz)g+g{^unbgp!2z6yBiqUIC1;_q+EK z5~Wrmu~F5iTYT$^Ju9E0VQG~U?qm+$Cr6^}X{u~fCnLC-bXOBQy`>hw}Ls-5^0YnN!VZf50E&4ast~PN? z`xBwONz;li`Kp4_rws}xIXYDBg6DQ*vQH4!p>H|)vVQ;RPHvz3(f0M*{)v2fYT1Z? 
znWpo;*mQbi3T-i@JtPvNzHTL|&#ac1ZT|EzGxKrN`-*M0Ot}kN8vMoFBgM)7V-7GJn5V5SGX9^z>AdGacJ^!8a zuLyhp_FnZ~_gY)DAr?hjhfSPDlxj2 zB@ChB_KKiro7dad#TaH!gxUW4B@mAbtCVi(?epCo-8fmC%>~$&+4Xxa`sF{*eLSSm0P!vPOCz69< z6n6dm-!@i88rS(My=m^!Z7qR-JgWzhZAVKQTOiu3&{2W1AEF zx1HWr0(M*PA3N|ru>bFV80C}rIC#9pmOg3RBQ)eCjoaPb-NecYht0+HxHd0u02JW=k=@5#7j=wG-r$rzb3+p`igNc1vNaFh`u>pr^%EK+7mK)7bi&>$;02nAsb5 z@*{ADUx49y*B-d(uu}6PM#m0V@^8s3CeR;VKhz*2WgGQTvo*oGj}y!WOIMIytHSpo4?Si){wce6dc0lLtD6~)I+~y6r+>f} zIm|zU0lxrK7Qd+C2~=sU`PC8>TiOOm-eT^g{`N1|n~^`&5e%#l*id!fLu!HyQcmm5j96pACbtza0BT7ueMCFJBtlHdB3GrPOab+uKj)Xan0X zl_BqUNxQLHL4jc%Q)|RG;zw3QmSk0!Ba5FLE@cha_VW%(gX<0(lHhOlDhk9H4uCJ?dpMb-yQaNAEVa@Zo!wEhGGJ*T z#Euv!RO8FAtX5W1QnKmaT)mOI^yYSK-Zx=g27_Ixr9J95-{tNY56H<()|gD&<@Lr4 zyAGrW&o#IA(Z1A{Zo@>|2~?V0TA33#&0Y5n{uS5M)9IAHT;KqyS_;QjA2pQKLJK_! zXkdCGzlzB;HvlDNRCIMmr83fh z!8o$F4Z85-A$>gRCA~s9=neDMpDz9n6qmWuuK!WsR^zkSJ;Td=(qH|+nHO&z_|~GX zY+w9inIyRD=&z9Rz5)-#SDfU#zqV_Kas&Lgqi?QnUwo^|J`w)HekXbNuDXBJU+#4D z(Q60JSPLM|UEiP{-?hs7>(EOWV_!AH4xWE;^xDmVX8waolI-;ge+#%c+hOKUn&n2n z-`UlF^{*iFqRWjfu@hjyYZp@NcS?8fvibWL^0N?iXTcBt_{uR?XSC}+>)-2eGN{Jj z2M3CLK5P*V?FvEt72Lin`>?(Bnz_3Zx+OcNzdgP5D+ul}>aDZ-9sJdTD04Tc^MxT@(JTc33&3-PKzobEvOt&cmxzUeRpGLOj3VhmqZrH^teN!(HEg z*e6^`7}~!0aR=2};NTh|2Yg!o9PeDaPG+u|5rH^4Pg!o3sTjsw+JmvC3B5f>A+a80 z9(?5g`h7R8{QP0|)Tcwkr1AMBX_SSlv0XU6Z+L#z*=5P|y|(yVSVoz;hErPFM^w>A zY*8f^l|3Jx%`a=iAtfVge@Yx99Gqr&VyJh_*}=1{)LSk&*(Sg~T%+uZar-hGeHlm@ z_+oy^u5G7%yBhEOl`LqPMPQeG;4GU>FR%{R6yYHH^|I5@%q`9)PCqJk1jxC z2!=*|Mh1P+#zt|5H)luhS`=i{n@(os6=dZ__`(iiPF1H@p)$&f3P!p*2WDr~%#!k5 zBuuKEpF6*z+O8EnS7S-rZhH#^Aztu5T(<`M;W^5(hexGF!Mx1QB~+Lo={Hm<18f0j zX{=|cZ=`2o>MALcD=Lv+!fCv!?^+zQvAq#&5c2U$Dk)eD))?6>YVzztX5ME@J(F># z8&2GgGp87#ZEPA&Pu@`dedBU-8|37ofrMQ_(;|$Wj@l}xCNGDBGD`f#C!@a*K!^4xVf@O+*vb|tJb$Hue$4`%sIH~dFfa$ z_dJ`;Tq6>;oY%lU4eGtM{|(i$`Aq;ojL2#^;lI!VE?F+*Z=rG3(&j3e6E=W0Vq znDe~J%8l_fy`6_QflENr>pJ`xkWiRRY%+x_XL&nzkc* zNv|pHX_dY*CYTFW{by%ryr)d5_8uiNGx0R+knptfq=*{R5QInnM?*kPG!vSZrkCEHiIw^SefCZN)gKETZ^Vli{r~ikWTFPK$SX?f-frbor8qC$j_-9j%Xq+$0x{EfNH{8ymga6n+d}?~dcFN8n zC2SZra`He;=4K)fE{-eCD8|^pm^kC;$TP!}$gAy=kL3~e^z^8z3_1o}4=v)qIGAat zj)CC`MMPbq|8uG%`RLkbjK5YnMJE-+`XE{kn(ncs+7z zMxiOv4IgkhDUqsa8KfN>Wp!_?I=lvbkU_eQ&FQxs{$A&IS5sfp&&)3v*m>Y8=BGWY zH~tcf*#96Fo3O0Rw4ON+&(gB$goGg;UU4iBjv!?)=!6$-T<-*$ipno}U-*Ua`o!w% zE+s^zNWm0)ecwDAf#{Gx&;oky)M`RmE((-{ufx~m6oLY9Y;y1MfHu9mQL_Q|QVBocwa~bWd z6>cQ`o3ZQq-O6i+Sn&@c5m}nv}DU&gMm`7F|rbG8go$0|Z%gU;g3W}agFy?R7 z20Z`oq;z~a234CpjP(;PRRoTh{{0RqMr|v=J|L#H>;nc=?5py=S&E7jZyaj7i@@bf zSZ5>rOQI{N(NBuNIYe(@03nWr_a;g+|gCk<&=|(>>V#7j;@+ z5d&uX#XY6})phFa1pXPC3H>tYOZr3K2fSBp$5Pa!JwuJ%Y=3aZkXJ!)3nKqHo|7T# zCnDL80|@ZHL_dw)FmnBSZ<`0*jV`y6hh2Gl2&!bBsCN^K^c%HnhY|sV)A*&)p}YIC z#}%(aC25;}e~6c-mqvg<3(FyxFz_K15}Fw%K3M!yQEjH72AIbd4?q?Lkb?Dv!Tf64 zLN&jPM(y(J=+IZUFfZ*SE#@fhD;0hOx8Wz7t+0UCP`WPvf7~1eX7BJ)Sy09mIg%x?Cg@N#?5z1ETp=8>Uucqzc+!=1QsUv1cY)65R5dbKkI^bt`U#}Zq_<4FV zw9IRNhJZ$rYUfE8EOet-M^D3|55gH?^013#* z=aRH!gQxA>c=I!44cUx1jS&6wJTLFVb!Pf9(KIlc_1wJtxjkUNP@1(-L4TN4V67I5 z*H&bnOSB4o9S_Dq!jsjxK6<9x0ljIfhy@ha)zvoOkA8N0PvON?K`2Y208R+?qTIAr}8teP@iDm zv$k|)`zFXxM0PDyXX_zHeBW~V?0OhpZOo5*s&ll?r&L8T0$Kv_v8vbapA0a@(q%96 zBik7Rh&xjNzF=5e?R>w-Ig{#r7M4}CrD^0EO>Wt4RFBN&0pjMfICehx4`y&`TaUOR zY}d4>@I6=qXGgyGb(%LaxS-uNU_ka_7KeSR&*kw?R9F@QR|vBo*r;StmIg2nZQ)Z? 
zQfFpKytIUcqqCW2B3eB8CdAQs2WS!xN7Lhp)wN$=_A_O>57SR)<%RS4)J!cTw(C*N zZBN+~d%Gpr?b^d+=cfYO5NsP?@SR?E!K~$F)gD1Q#&%?L9m2pZy2bGPsBD~6=+*=4 z@5ow))EWT89wm3GTK$7$<;0>yjNK^j09NJHvI;OaMmEE2rMUt8&y(es6MP_|Ph5rr zPTL2ep5%`dsx>zI@V-r|^)O83&SXuFTc}l1@|aCxwYsfMIPVNVl!0kHzynPUE%+G{ z;Wc*utm7Ds>bQox(*h(Nel#n^O@$ngG0ucPCxBw~xRIiExCtx%PLLb|pzT>O^xa5T z(kpG_T!@vVim#i~hqD{?O_4HkrbsnhXRa*N^ThBlW{U|9T!Lz&@|7qSe7+rj1Iff* zVYQ~|lmyek$$fi6`A^an5=_uB{M%fAKH>_WxjI zOWDMZltF@eGc=`js5!%JAVW+%wBbZ_WQrV&~4(vSOAco-Q&_8MlQ1e z@b$X0&a+pCFO1zoZIh4b@#K?0@VXh zCQ$*XGf;p@BCwsGF`m$?VAmfxz2dJ}lK(8GB*`mUW}Kp{L`8&U^j98^RrQG!5Q|*Y z(6#9HY*|e$qei_t-_(&o{(_SJs3myWvL;eu99GN25d|s{d$`1k-H4T{vpG?^=5O1g zDkokx@5LmC7cV{U@BUfgagA4R{03|z0~;RlFNgF1X5Toh0kAH^J_dWrMDrDR;}!-- z0p3S9r4BkvJqoi0o4s+Klh;D|vRxCuidVrfF~Un(ja?8_Tujt0t374M=f|1QJE*v* z_xCH>7J$7^amFN7z7uQ!umseSZ(h;=Zhwr~oeKh~KZwYN7?i>Wz#TPla{zk z`VT5kwrv>}pcwPVt58K?q(aOqCEUklNSEG4&VVAB=4#*o=?sv7y0lykr)+@c1D8G2 zM|3%ENSiI{v8i=5g(ENIqY-z^G{vvLjGGs<5dzr|x(Z%WH0H=n1s{L8$B@Cn`R-s$ zV||#kV!3^iA^V1I6yZF3u$USs2*Gewje-Fmp?A4%=0W}uKyDZrfZiZ3dVKf|2nZCQ zpgsqGR{0Qk-qE)im~KlR;GML}+|BNW4~^TqMva1+jf&aOcQG+dvDvFG85iUkz<+vp zR)dP&d3M6O-v9`XP`M}BMAX7ls)UR8hK83=M$2PRz@zQQYPC%E^5~&AZ>Im-;e_@~ zgkG>Lvfk=fzADf#;+@rPsi^4MRbUl4=?UaIS+_D{oDs6&S}t@KQuWwkED0U!xK;JA zr{G{L3S+Hlymeekr8_5>WQ05sXLw@;sb_olUj*M23F=LqiXul?;k<1weV(BW2xn1H zj?oB91){&sQUl6I1koqpb^x%=IfxU>ar_~1Z@aC#wBd|Ph8|J^>t#x$k>#nV_3r1`85W4U{zH@W;Q1n zcqgFk&-y=jeMtK*4Xdkvg;uZ^S9aC?uJ&&tE}@qH_I=N2iwE1^Rb4$K*6oyb zK9KnX%G=Re4S(PH|6#TMgC(*|zM2`WGb7`p*YZ;BV}zK%Xlz#8Up4=eEWa>prVIFg zg%I$+;r6uCpuN)+I(-?@X$Z8pdN?Jk?(VVtzpscGEu!Ycb1F{X?HhKfxsSNzs=4Z{ z^BKfT)Z!z{gGgXM39>Rb67B=FG4q<=kyCte@8Unzz-LwcV>=pp|PMe42T4vz2M!LRHqJ!)Bpyw{eAaP0wkmak&{EPAY#+!k6NYE5&Ea0xE!QJoK(xn{6~In{fFd*k>qs z^-8xAewlKfckLhB1kH^@0f!8#Tky}rABIVRDgC2wjJv@pb`n%U7C{Csz~ozs3Pk>i zS51vla-*Df;AGX8(4k1%+bs+;@2T_ts9_>?gdiqc)WMg^qgPN z;YBuOT^0AMC(er1?yNfWYAXC|5Czid;H{6~sfwmO8-g}9ddL(d#ChNaug3k*s-0I< z>Ujq$k6h@1Lcr*FUmBv1oL@g*!5X71$l*K`aD_{v0Tk9U-0m+d6g`2y$t_l-DD$oz z_%|avg7ZHK3e*&T$0xbPpDj0lXCb%_^KRQf9MLXs9oYDkhlq3Y?kgt%vBcJ?T2q}* z*|#XSDPXJLVmtxdR{(*%tzyIee&>w6YtP|Gbx}3M2*LvdAfV6=Edr@tq1GdL29luY z!BEd|wB)?IUKVX9Z8cTJwf|oH|IoKk#JHkn3Zm?|Qsfyp_ zC6pdA*3hUU%xuERG^&Ff6(tpL>)@A2?p9AVub77+}bPN&G^jdDfB8Z2RZ8JYXBZS8{XZ4*pW4_8&h)*j1b z%~(}(z;f3z`z!cAe@iq51t6xsKk!#+_-6+2e-Y>F|Hs5XCn!#MtyC&}b7?O)s4f_5 zV%uo%rU*X%sjB!dzFWw;(;egB>e5>M?(Tl;>rGmPoF9)dd3OtEewNZQ3?UQ-oZn7) zZk7_ZjNDZiJA%DBcLgsx<4S5%Nqj7}=8??7RW%4?O@Hebayse z>ilWb_&7;zE4im4IQwgEV5P^R(#{jv?1bRK*Z8HwZ_bx>ABxLeV$2Zu$uY05nN17r zY!d)!g5Q;sdLgh&lvwsYf~MP$iHNTSgsH5b?2h}t|6Ke{G>VkBl_|d!}@e9Aikr`78wWOPeflVA;jhQjZ*I z0Z+uWtHYw0;`in~vzmJk#VRhIE2Yp`7of7!^R8}s!8^BK*LJyPccRWte*~W{6})>O zI8e0v9h-SUN^Dt}Sw#53Bdz%d`S~f7N4owz54QaZ(knlT*A@Bza#8B$hLxM`H@_Kk zDq@#|W}<$7jMv=q-ptfqZZ#n`MbJoH2+I=TVNP<^^89%iV$rDTWxmca4E7VsW_Edy zYLri<@*ha(?in&RvSwE(SSqIvDQpy@&b+d;keF=-SYGo&XXle-J${NljU?))ot7p- zAM>EaRDUcTHZuS5*pt#7Qt5H#NN89j<`7XPYUS`#_SP9*M~4~V{H)}g961rMU?-li zvxTlvD-jzHP;3NgIhD1QI#<$@N4-0h(f(tbRy!(If^z<|v(0}{$Ic`ki#q4GEQDB( zBhq^36xlu1<5vf!H8H2+?>$_B*QRabpW{NH@me)oQ_uNy-EJ5%0B=FXXS3H*e`lrg zAVwnIvazI8u|eXhirXZS2eKmLwsXVG)OU3#RLf#IKZJ$W*+t*jvF3*O2F!Q&bxeuJ zzB10&oSmh(OO?A0ZRI*Y=c7CN8{k&^09PuvB5 zz4>g#ElO`?Z2BTF+m|mXZH~nOfK(59SMY7RMfsQ~^VXkw|1?2G*C07|Xw61>zuK-Y z`ilHw<8SG2;_|z6maJ6Pq{y4=`co3YydoTEVxcxtHFPaeLZ5$L)nj&`WvMZGM|uM> zJWpbu#z~e0(A+Lv)r_)*ZRkKtW1{z|Y?OCQ#TgE$&bIHqpTg(u_9S-? 
z6_>ab7&*`ZJz8hq*WBC;+&OHuKB5Ox56eZ+;HjW|yBdp7PRk#zbX{_VSXMdq6zOyM zeVZ@Te2X|upV+GF&>ldnrf=Gy%yL;C;-qMI%uLUliE#iL-D36=W7|;L>v@&tNM8Wv z_k9I)6MhDNbPkWC*l>*vDS1OTBu(V0N&t;AbvgsBUwsTJFitdrm4EYoakW8IANqy^sw@4HcEjJDjWgCGcR-UhDq zopre{nPI;5u9~fW<549%!8FP4uB%QBYYbDWJE93rNN!`ct&MLl_{ulFzTtw_EDT8( zu$dUdg=pQY@}g_oWO=_J6fAuj&r~SVN;~{5`rSEp9^N)C{Q`WwBuluTQtf&)aNdtx zgW7~G1;npb-z3V6rydkPHP`2fiGV(uSoIx$Nxt@sqPO;*58^vakX;ZKGfwF{v?*6i8oDg`Vc=(qlPR&h1mg?3J)b#TX>7cYrvR-+c%$-0t8p1WtRpiNs|9v-FCn>nO`!`K`j66 zsG3H{-V}4m?gG2!2E~9 z9A$B4Pvb|XfqbfZgy|#SaN;`kcJf2P3o*PtK0in*sibe+GL*WbDPi)_{*L$W-{~f) zzGr#OB#{-z~Pfai~VM#U;t$-BStyfwLhSpb_{Dm!whtP#zbjbOxzPK%}s? ztwn6}qT>E~mQfEc#e`%#3Edx3Yj|0qdq^FMlyBM0&A2$F~~@Q3A0#!f)F!bLcj00$nt`O3!9-bYeB>a&d%9B+SK4^wHu zt)*5T4z;6>AU4k|AAHJ=s%{#$_sXzSif78cZXs)O?~&*bwF>~}hwu47KwGKua6@$- z@6aWSQ*Eb0u@0;ZuQ{lVYmm0xDQ-B{t4AW)XSCc&p&G8fH5Kk;ExM!ZL2o7VZ(K+1 z`*=<<^YrYo&#{1Di)PF(6oh?Gv#K4jGtHrJ-UP;F?d4avyaDmCNgkkP>KPSCd_f%A zneKW7v$tq4!`C|=b|BmSwzO}e4Oce6j>T$2Rr z<}>H4?PS>m`^5ahES zx#F8%+L}&&X=!WaJn`_L-W78}@KN_gn5K>i1<}54P@jzLOsA@v6k8|eSL9_jB`51I zIJA=JgUN87^R+>deBRksi8DN>I1O#$rv2#?*(w_g?#o#_X5Q%$QQ~Tn>FF&C#Q)$^@M_Lh)iAM-Y+s$SXn$5aOG}VLw!CV@6wC!bwbpV- z$sZ~jmRyyk;X#2Vr|AU#0Z$J4wmUJu$MPkhg@iK(XtXCnyOhKW#L`NC}-dLKm`%Q}zyVRx}0A59L3Fs*` zV;7J{(W1598y^msu|@%UMOUCnPgisEffeSHQuV2vOq}oQAy}66{riOlsIHA!lk|g@ zwQz7N6w3U;t;QJ!95(VJwya(OI@|oJr(h|VVWx=}Jx`QhQ(WH6shx@`nQ@bGNCz=( zS1(kGzR7F!+iA+CaUY$5@;uVEb^}&741Yv<0H;k$y<%-vgNW$1?B$TZnj*+-HhMb! zGa-u=RTG~CF~FeT2qGF5(kfOs-5iN|E$qs(>j{t&`2G5!{t+VCKG(^1iDkO#NA2g!R0rQuyZT%`eT=?x*ZOZT&TfLfGKC`kdOH zC1BDU<-r4H(aVEw=bnbWTJ{1iVb{C@*>X z`06S^F?dNT=ox)Mm9d=~6qNR>E*@{+x?loc->FxWJQrOv(ORuL;=6onE^9XFLz`k? zpUj!r#_Hi(%o|XiCHea0O;6&DdCKST;=&*Y-h0@;Ll@T-6^9t{Zk=sgba3!27^WBZ z5FOSd-C#=5-hj%VIAPpEe~GZAR>tY1XexV8CsZcVx4}ngQ#feDD#^3bN1#PD*?dav z2Q40cm)s6sl|4WEx@N?VrYUabNhYtcmsBn81I+7Hj-QDl070+x%Ad-owRthtEgIhe zL)MXUq-DQP?I#WP(2$T>t=qA_1ecNO2ihN6hq@^tC2v~*?*R}66_*8dA2C1etXZWN z+Pfjq2u98|MlbWpS&QX|fUhaO51~BvY#Blq=Qh#Lpt(#=n}wqe^k8@nWp7iVx_o)Uj>k%Qd2H4d+%^F!2`;}v}?DkLfl+dINyVd4P21Z zs3Qlyhe|1C3Kt6Skrb&}g*?>7O?mtsJbVR8?+q4JQ$6MDM7%Bfp_cPUVy|Rg3<0*D zNJ&;bBeXqTvNa`OC%GjP0Sg4Xy0bpFq3Zl?>vtom-&=$vQdCk;%;oQTy7JS#0W9t* znc37)E(V^X8!k-@lnFC?Wq_1XFfE%`5Vm?Qdww2WLI_(A_cy6Kf6I+3c-W*ikkGj_ zmA0)f2u_;EXQ&*A@?|bGC^o}T|HLeNfEyp+m&xdyL{!ORqe(fAie{G!SvCv z)$(-V*_sFAWdZVQGA41`&Cd$fg{bj+G}ey5n< z6(gaJA6&7N=8=)Wt*dx;V;Sj%sk5q1@fNXYc0lWCfeVOhc)iTKIkE}8J0eQc@6=X6 ztW{CjUMCM@giVaU&(f;qzjbsI87!i8&oL0FG1=YDda8R%KvF-jQz4_^Q+eASx4`$h zn8VM?n754f5)QXxdTXEVT zuxNAwyk6TVA2cUD`-m?1+~Tsq5MZr}01wFJuT zolAOEmwk551)@`(lC9B%|1hzYQLQO!xc`}JC1m}QV*6*(XwR)>R7gbTrdy!ffYse` zF4C$jEGfLHK2>iZPLRKFy&>`|cO4-}4Db?h^!3Y;#*Q_Hbn*O&(q`31m`_z4 z=U|+DjVVe2Ds4Wv)mMA!qkj>CpOSOjoFmgSH)i~wv&g(Cl# zSdU5M;%hcuWyv8)(o%r7(-jD1ksbP&>Ewr5d#%w=owik^ejhGrBC1Z^ai6D;;W1EYrUtr zQ~$n3%@$5%&m~ZyW%@0qwj)51aK+%w)|>>Uhi2Cs;ncCqykJ#+$Ee7wrvuZNpb&h4 zGXLD!MyUBSXi32AU905LpVzwWB~^;Eg$8Y2fu;8w(i(+xvtqX(_t2ox1^Q^Jc|>zd zTwwOGhjDXZ&Cq19-;Q3G92#j4^>ce#6q(!FcH$B92!3GRRNNUDr0^aqY^Vn{y zwC$%84OauN(7gJJb38jzt39-3X6b8E5U;gr14wK>JSj%fO`h0-+3b?#-yK91wvp6c z;F2K^yZU2a4wrTEzlN8yvkc2)%7wNwUYC@^yj!>@L&eP!;o@qsBB>|WckWLV8zu0_!Y&uSAfy-)TO^!c2NHwi9E`xf{Hx9>sZNwC_ z{GLlpj55RjHZ`K=XuPG$MMn0@&haB~O&+nfnw9I4LuXS(X?dHrAR%f@iT2KJmDstC zYph}H*WY#Dy)abKDhgQ&)JS(47k1layY! 
z!3gYQJi*wPvvFAbnr@VCT>wdrL#7)qyO;0UjE5JcJ;?wfY8A7>Cx;Sfu#9oj9}BD{ zY6eW-apM;~2recQo-!Fd)V z`^@XviT7#SYWk&CQCsM^@V9eu%_ndpJrU|6{BYl<@k}&%^giaURp}IkwffXB=X&5t z2EW(TM%})TF^x0Qjw;Eu8D)A+JMf8+dB@}{hwmu6DkzpW4}{!&5*n5)xD$*o%h--Z zDfaXXWhx4F&yGi<8n2E80h!utY;i(>)b1gH-{Zb@`h<|M4V^ZIVt=>6HGTeaDr=Us;7Y1cv~XWnU;_*S(}aBzhNxNaW5rutCG z5Q$~<+!|R5D*te3>wK~S#Bnp^GgfN2g_gp)apdzoy@8$y(vrRhJ>1RVxqtNZV(&F0 z_~miy&Xnk_1_#=%4256Sk?RqGiB`r$(y_wufbs+C@!nRgEsS1UC=q#yvbKSL_`mS?BTGw${+@aVKyd5uJx zJa2#*gnN3DCG67rCu6j~Q!Ct5E*omlcydm6pz4fJM+S_<^tO|0z=g}fvijA7o(*P! zn;#d7@gEz*He~MMI#$NY%Sc_!k@bj;JK>=jTdy3OtbcA?gJfQQm7Nf?V%RBeVtbHF zzZ>?9*DnfaxGY8U{1b;_{+oe^#1P-k43(|aAIxVW+It*6GyjFH=;+Jb80s zU3Xjh@l;@}v;X&56fA2~f?RDp;^!p0r9`RoP$NHn=9Rs9k5O}<7y??AzjtWb$=)8> z2v2d;I7QzRQ|rFdMU|z6NfGQTpauvvSSl&tZT^;CI@lQM;E9OZZ<{T@+Qm-4E-8(megG{sk!hI{=zCFe+$h#sP~7!D$f^`} zx9--7IHQ(T5DT|kVI@epTnWxLtq+ANuK)Ob znF3FLYgUxhaEbMNU1}`P}!Qz2dWjJwz03e1ah-X62QPoobML!9davLWlPZP0}MSLgFbEKHs%gr(Ko44Xpv3$KJ9F`LX)gjbrhQ)!qq!A8D zwaGbA9(kQ3`8!-!Wv;Ne)>W3FM;^CTXt{h>6wLycC+VCHm6~PVb4UbrZS>-&zb?9> zBBzK1V7r&DHHS7X@l0#1xu)u<1I3AxqH`h6@`sPtNdaPn|15d>07_EE8M2;|`XTLi z+?}{w%|^FxWp4QPJ@URb9s6TYSBDPn(g!T0KL>(*AOq($ZjbGf<>H7&SFoo}aJ&3K zYqXR1{h&{bfoz3OmlQgLJ$&4VIHOK7_tnO-P`Dgw%^%WUy}3FQiCzKT9RyL7d3Su= zNIld5`w(=*;=28f_tTA?_w2EBQP{1E=3|i3ZZ+bK%}xFQCOCFUP7KkDaLf-h>mMgz zp1ii*U3=rz$O(098cud?WMsS5&jcD2IN6^zI$>6zmDa@A8-s#}gg5nwCH@X%WP%4< zQx8~K{$jDo3v;+OxJ7eeF570Lv-4(;Qlj%C%d0Kl->8jN$MDy&_RXiIxAMUs z(8!66j^wqa_5wa)5uh~GoIkOSZWTrt%OwmQa)FsDwOE205U=W!8~8iKi^O0vZ(e7Y zCtxl{37QqyN?j1*Y{aF3aL4(bbBxYueO!luU~ay5^InwfHZ%MUHgyU(;UIriND|+n zV_7=uHLNo}^|6>bkT?qnHC&S$2=;Bi?GP)ZdV-VGHHfLHE8)$Vs^Qx4)`B`JO;Nad zzPaddl&BhGE+NuOGmtbX^noRivhOVH49yJ9mDa0s=Y%j-&G#_KoPqVqYnp$FP227L z`FfzW0MMawaeDBBRyiU&i_Mf4MVr&XjY>qAXT`IaLE4*rpNXm<`cY0bl1DrzhIQ3;_#9TzI`X;)Um%&;?$I3 zmgs)FZ;F=G4l&aH(w7EswR5$%zJgd6niRzk|E`tkUlg55|DryA@d~7KDegQbiPEU}Sa1?N?q^9Cz6tp*H9V1CLf5Vo4xE44l4^B?L3 zzOurl8%k#CNU}<^Mx~|IB@xl$5rizOaR&Kvc&4mS^W|(Hh7$6>k4dR&reI{ z2r=b0M}~!jG;gyiS;=+JHoCQajpk`&=d*6Sm*cRP{RAFdG~w6|7@7}S{gqK1EhKd#WN@+OSD;r}`e zS@SW5@1fIm%l2Gj#HxXVM+w*7l-K;cuQZ&E^AYd##a(b6#P0DM$HwZ=R? zbRP611*Gy?r&kp$-gT_noQ5#sP968H+%9(z^o=5ZpnCYo>OK`FK-5mehEPN>9 z&=CJ}R_&_=pY_ipB=uNTxs(2O>Dxh{IwNw>h|}cNgn7R5B18E1dz}}c(AjR=KGf7W z3vDX^IvS@6g?`9P*0JrgmK{Nu58g=^VYoKKtLLs*ms0|^eFMsUVYAGcsl1Eh)9X0W7GFRb_k;xXDnJ{8n zMlK;V*h<{={F}(O?Pl$y%~O+%_4%oj)(vdDib6bP59a#AYpi$wjr)tEz5R-8f~l7a<`ypPOXfJ@FULQx?=hE= zDw&ZUzB+yW_^CFR9|gO2eQ*V{d;aVnlYQdlPpW?(W>P7k*LZvo*Oa=ro?i_$fj0&O zkeBKCq^haaczjStfoWZB)3BJPiVuAjaPe$7qi)Y~9X{XK(CVjaTjc3BvGS~vv#%$gzEgh4H>#jhSs|Se%cKHePaR$&JQ#mYFjkvTHRW8Z2;(L$YEY} zzcJMdTK}6-NHE`LZ!GF6G2Y{PN^O$YlTFu|e1S>S1);(xNvj`5mWb~c z!9buO71=O$}}rmKtjZxR8aHr~k+-w^9c)q?7;;5!p&rjJbM=G-DIk69WWj$?rzT(HHmEewjT^4`B@Bj8lxQ# zm74zUzD7;43TN9XV)c|NbGB2iduUROewOsuY{d=G|L5FptO*)ysLJr1Vo8%qz0@`_(d2iuz zD?^~g1>;hs>Gn|NLT+CFS)Yv*HHxI$Fo=p4ed3Qs2U-v;%NER0qFn@ihAONV)JoNr9&gsna=jXP zc5DA_u~`LoTwfye6QG|4Z@D;k63O5fyxv#Es%yOQ3M&Ycu)zRAK+8Op=VWhYh?>5% z0w!bROhaSxdyjZHT8JnPt+{mR781RNe&z^xwpG?B7#Yx>J&TwmXLf$2mWWZ8YWC)Z z{MO)xZMCbgRWQv5>aQ}Hz={UFFy3$Yf;4x7p2eSc3p)EUSW5{q_Ls^*Is}?|KsCx` zKj?Xx`4YS(t3|+4hjE>J-e{w@I>yGGvB%>K0ZE$K>7;WJaJ)y|@po+W)Wad5VWcZ+ zzgNbwe0Pv9? 
zHto4#Z>g~O%x(A4564d*;@Z3SYW>sGr<<`<&D|IlqWAIL&Bv~sK8gvRjk3M?*)LLH zE9}ha2d6*z1~UA~Nz*C;53k{JQis8_-c})sXmi4LXS#@0hi;uS1T=?p4mBnWuD8NY zfl0F?23Xs#NYBLbChC}$YB}qX;;-3}BNx5m{Alc%siJD*W-|szWO)+eyZz>eZlB4A z(5ZCv4;|c^#l5@h)ELy#3G{7TC#VmuI(+e)Xb>{jW1OBhGQTUcH1u(?ukgO^UCMQc z=wI*Cb|)?g#-OIdrntX~UK9b(cv$vkj`Q4T@bckb?8aMcuw|qwNe0ekxVWcq)oH6( zZdBS(gI4e6LC2T=c;F4uNsdO{QjCYwq)@G~m%U1xbdt2~%zJ&yE2OqSUlri+Wqh*; zP{5v(G8viXKdcaN$2#Yt%7fOp#imTI!V_3 zam*(Qn#EdpjF9`sQE3E zNm?QsC7b@g?ykeDsdU@V-1kOD0W)KvORfcwjvxXC$aSJJioi&fZU!SAq=%M_g+xF> zkzRr*66q=>lmH3}(o5()gers(2qd8-?+eb{ch~&`-dgV-7HehYmDy!dV9mE-HxnSbxuP@ig6brWcRLO_ug`k4y z+$qZ&9Th`-8UT zy0>4?=2;je!zL9Jix=I$e|JjAJQA`cJMh%qO{LoUFy^vpy@fp}VB$at;ajxPUrsmW z_Qfa{g37V_>y*g0LGKKu${lWkO~eI4DAcK_usCEZ=hB{3etF-oW{%q&IfJ+i=K6HO zT9*Q1QI(lSiAfe}td9v}w@uUXgABO)EC~oeO>b!`tDmgTHI%!ax=w9AlH1NH-nwWk z4$3LZz`5!7g{%!^^ktJm<-0PVFt-7~$rOF;=Lcyc>TBciw}K0|L> zKH^khnk9PrcV}kq%Pz&I!qas$F$ykdCn8ok<-9-grhkei=79@Gt#iOD%n)mEO1RiQ zeUw-KkOFC3D?Y|M?DFH%0nKyg&K=jq+?npHLOv_4a|o(ga3pTrPRgbZN%$w>4m8^R z=<@gh;MpN!tPi%YG6U3y{={qttj?j@Bk{UIR=H*l;UC`!)b&@jD2;^9iW&M!KwNm| z^5xXzw6#5jCv*VnIWwhhVyX@>>$@S~Hjzey_>b|uYY;f$r{)4^UxpG#cPFJLVzR#> z-TY3tSd*QtT`&W-!(32^lhHr8%@3aq9m=k%l65^hNB}4O~)%k?q3oN^%mL6FLE^R{Z>oU_4hI z4S?MtxIrlEOvQ7!Q!8qdP)G<0y|~uDq@@*zqz%w(kav6Di%hE6Z0o#| zCEM9cI@UUwiN;;4(>%9B(pc^uNi!irtt_nY5oURFUjpu+g{w4R<{LU8P@ZHH90+rU(s1efuOBxZ-~q}S8yt}C zxDG7&)bC;S5~5sA9H}Kau^jt5+o(+gP9I*keh5)f*7|WNLmkyA#H$m~e8~};klpQ- zkbUx`emW4Mq^N#~`|mIbA3TJMW+jZ#w>2QcwT!z%Kz^y!9h3Ym4?FRyfT~sBD@NjT z!kVYqrShnSmnf+QG}%*Au{x}tvKW6l@poWVCt`QdKQC3?Z%f_QBMKW&4E83bgDnT^ zm9)@)W4qu;lBRQKuYiqL#D%v>+C^d|W7YoJi~6P+7qqw5C6IGKlyM(h^Y_PPfuak5 zTN<>Q8#KQLBV^amhdW8xNG9`Sce`bzxeMk{%j;Kr9=BG#^|yud-?(+htvy^y0PBhxyqQWDp)Lv8)T7Wi zLhqvQI@T7~ltY2|lNdr_1KQ{09!xTLNT4m14(Q#1^|tQa{iBTAY9()?jh>)fx7Vz^ z9V-KNbBk)`m2Yx6EvO=5r|4;qFe?ekiC?#a==jPozFgCq?gByzI~HwuTdSB)W*p_19@aC6l3B0axR% zF<0MFkRd>stNrL#%#54tRmc)oyvNAqj*hxO*5;G5-$i@5d5LIV-*R^}!--B*9^VAk zfl;}9ZXg+%hm2Gjm@{y|fwh`)j&0mkY64MN_&7UmZvc**yv#zEq3Ib0zydDJ_;`^rNvDT zX#*gTxea!hGv%0n$50k1#27auq`%ND{1F|m9UW^ggV;13msbd~&9T*63 zrey`(_44vcWfSXvZ8!l)G=*=R$tf;A5p-WIMg3}uqJk0()Z1J9toBBj-?Qj9D>5S(gLwT(b_fHPLT+4y5{@ zdOq4BaHy9q5n?rtRzCr7D91$;v!?xnSji?!aqmnH`-;TD@hkrVWM2tKdcP7rCSVdc z7CUa>ej(k!MGMFR2W##x%Jwe56zc!iwB-Y$C|DUO8ARHq&6-~JAu{O@B7nhP`07)f0@$G^c%S1n>DXTiM!jN zOfcI3lvnAa&>9U*(@LjiOZG(!{QD@%g~ze09P3h1Wyd-m81ck$^&R_yC|7AF)qAIc^VGwF2?o{pPEDj07F;}ZP9V$;g{&W?i z6Mj-g3(XmlB1OAOfO(sXqx`nT_0D7VGJ4^jLo#RQ+SbKN9uuCf^FI9iE~z!Oc44Z=oU}5Q$?6{i zZ2I(HlcDqQFgEmRuqAGSYAYI;c$tdlBp{H6jDK5;=5A@i9XO!H{ap)( z2wG%TN#sXVfj0h$MZRn*-Q))Ij}v?;P2bQ$r)Fb zHAAQS=CrwKdyh*rITuMB3IXyT*)|ZkC9Ue9AiO=MrVt72#~| zg+R;vEFzkhlQ| ze1JO1tITJC$W@gS&>S2*M{kdlib*ap4HD8U98O8*#h_eMZi4_SmFVGS+~xHC!s9sc z_1P(^^K+BZHaGd%_FcfnnIUzrB85Eg|j=9H7aTo=v z1klcNpV4xg5Ob>b&unl_MD@?1zAe!rEm70{_-n9O<6!#;iMH-r%(V^bhgJ7_9-_p) z=I8w+#sqY~X{O2-dsCtYtH4HRdJ~M6#sd>^^pJv5NoeqgRgUwJyaFOii%z8x8wt!C z8!!SZJryR9U6iYidZA_=`)B96LZ8)+?2>*)gloR;&J#;3OKU4jP_EV>oU_{V%20oo zr1mE%@y9|~3q6f^h?Jz1QbE3ALH=lZcdXP5|IU##j0kFB^2WvsxkUGnfr9e#jedzZ z+t4}-BM*=VF?q1(zbw`22YA}X3@cO_ zlx4qD*pbZ7hbAOjCnO~%g8zgB!Dh47gNaGtFEQEL!V>-_FVD->^J|jFFRvUOaKD&F zMY#GKc_{Wv^JczENiKbr5~Lo{`rz}ZN4JGPI2Wvy?S?v|XJ{5xL3-JtA~;W$dG~a& zN%EW-t|9A3tl!>OnDL4AZyDJ?3SWF{?(zQqw~7m24t}dB@OP#ES3l_11lx2XoNYp8 zo!~iNwF>D?MK3{UUX}~pobnND)c$TEsN|Kf3OOXKtwMh^m{IRwA#LbKV(boXBY3^1 zsZTMx_@M2PG_#HexQGh=6q7@H!=!An=und~7{A-_R1?wG)@e%xtB#&2f(1J@HiJHw zV0Ir_JL2ZNCr61sd@P?$`5wU&JKqquNKi#|Wz?vHWS}pszxTqOX_m8mCyoxZ`)z{| z+(~E@AD1Pg9dc*gNdaM9I9e^pk44nH(aCtZ;Mn593lzx0cU^0(ZQXT@2%TVRbY;X> 
z&x_QKQNf*o+r(g}9uYyq3e{x_i4T$}$>~9@gX=CGc3b&yQ^B=Uv*Zztve#O>12Dc# zMW{KEnIr6F3?Hjd!cKLcjo8|R>Uw{DtrY2t2f100rJG#QK68^C9z_^H%<#Qtbu!Y+cXeEC>vwfm^>@4A~dg4!^6cs<0#e&<(2-{&Hbu z_w5akDTY4;i4fd#T%!x7rVOOq_B9@&4mCEL^0(`=aLk$b-j`vrG`X(Fq_6nbo1FT+ zGV`2f^sF@wL%JQ9RksVz(pZ|FZMdyoD%|n!h#?0y(g>cs1E8KtC3+S>tD;eMj%I7O zh+QywqZ4~!{GN=^r^F?{MjrJ8m=enI&%Y08l8^2qcG&(DRB4MYEh{{k@s1G&b>whu zGfpp;2J@fcn))g-xyY;|#ID3dYgwgbzq3dYyR{&`-~0+l9yXpR4;0b%IVK{$Ru3>( zLz!j27PVO&Q(vYx-t)X}+^0LXc0YhXLjJgohaW2uDcogljEAhlWjnpH29)k$j$p@}+2x1VC9^O+M4_@Sas_6p` z2GCK zueCfGi#HqR&F@5ZDOSA)E!}|5sg}^()_UUR)ph|ie}IoO1~0wF6R`{9-#6|*bX*gt z#Smm=EbQ-|Wo5aV5D5O$c6Xyu%Q!2__qmJ7ZYXAK;yG4lS@b+ew+eiJd&G1m1DhvI z_d5%Bs+GC3GKh86aYF|V);(AR5f(@uH8h$2kc@%WI~*O%3R!fmrlni12hHK1w0=!H zVD8eqz2y_o|6xzL)aDbDvq2@wmLRf)X66ko#~O^%( zb!@4k)8_3o7&35)?bbEQ2y3q|B2szsPAZa};=p+&WFv7iLm*%`Rm%m_AidUK_yckR zjW)w|?o{|oq`7+^kdmrcc|_yxkHtX?JlwM^aaU%KS!O-XjIa7tFi%rHq2+s_jX(^8 zmi2$Lwen^9QGrYU<#(V@_4P<=d~61KHyMVS zwFF{EgpITSiy?mAeMz#Rz{uLWUGGLR%9Y$0!He$)@s-E;X;$D|rI z0r004_=^?)T6blAj{cIpKlTv~D7EJ`t$F%TvYs*T!?PpEoq^1nmZ$~;C!l*I_A>l% zTX%pVL!t}3Q_$hO$W=9{H!vtsASYm7?DqEE#0~Tf;OTd(XRh#wVx{f~?;T&(!;Ikl z!oZq!S(h}=_Xtq?#33hJt7j{f=Y#qVxL_g>=46I} zs*ZA9Ge1H=&Ifpt$5v-IfI{utvkyi_wr&dT19ah?*9)b^@?o( From 209cb4117be222040cd026602f083bafdb2df3f2 Mon Sep 17 00:00:00 2001 From: Ly Nguyen Date: Thu, 14 Dec 2023 09:37:51 -0800 Subject: [PATCH 008/204] Feedback --- website/snippets/_cloud-environments-info.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/snippets/_cloud-environments-info.md b/website/snippets/_cloud-environments-info.md index e89a5e0abc2..855a8c66ab0 100644 --- a/website/snippets/_cloud-environments-info.md +++ b/website/snippets/_cloud-environments-info.md @@ -98,7 +98,7 @@ At the start of every dbt invocation, dbt reads all the files in your project, e Parsing projects can be time-consuming, especially for large projects (for example, a project with hundreds of models and thousands of files). To reduce the time it takes dbt to parse your project, use the partial parsing feature in dbt Cloud for your environment. When enabled, dbt Cloud uses the `partial_parse.msgpack` file to determine which files have changed (if any) since the project was last parsed. Then, instead of parsing all project files, it _only_ parses the changed files or the files related to those changes. -The partial parsing feature does have some known limitations. Refer to [Known limitations](/reference/parsing#known-limitations) to learn more about them. +Partial parsing in dbt Cloud requires dbt version 1.4 or newer. The feature does have some known limitations. Refer to [Known limitations](/reference/parsing#known-limitations) to learn more about them. To enable, select **Account settings** from the gear menu and enable the **Partial parsing** option. 
From f4693b9813897634602b27fdea54d8089f509633 Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Mon, 18 Dec 2023 13:57:28 -0500
Subject: [PATCH 009/204] add rn

---
 .../74-Dec-2023/dec-sl-updates.md             | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)
 create mode 100644 website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md

diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md
new file mode 100644
index 00000000000..3f43222685a
--- /dev/null
+++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md
@@ -0,0 +1,19 @@
+---
+title: “Updates and fixes: dbt Semantic Layer and MetricFlow updates for the month of December 2023.”
+description: “December 2023: Enhanced Tableau integration, BIGINT support, LookML to MetricFlow conversion, and deprecation of legacy features.”
+sidebar_label: “Updates and fixes: dbt Semantic Layer and MetricFlow.”
+sidebar_position: 08
+date: 2023-12-22
+---
+The dbt Labs team continues to work on adding new features, fixing bugs, and increasing reliability for the dbt Semantic Layer and MetricFlow. Here are the updates and fixes for the month of December 2023.
+
+## Bug fixes
+- The dbt Semantic Layer integration with Tableau now supports using exclude in its user interface. Previously it wasn’t supported.
+- The dbt Semantic Layer can support `BIGINT` with over 18 digits. Previously it would return an error.
+- The [dbt converter tool](https://github.com/dbt-labs/dbt-converter) can now convert data definitions from LookML to MetricFlow and help users upgrade. Previously this wasn’t available. (converts from lookml to metricflow specs). ROXI TO CLARIFY WITH NICK TO DETERMINE IF WE WANT TO TALK ABOUT THIS NOW OR LATER ON WHEN IT HAS MORE FEATURES.
+
+## Improvements
+- dbt Labs deprecated [dbt Metrics and the legacy dbt Semantic Layer](/docs/dbt-versions/release-notes/Dec-2023/legacy-sl), both supported on dbt version 1.5 or lower. This change came into effect on December 15th, 2023.
+
+## New features
+- Test

From 8117a3cdee24c10b139daacc8289ae7204428c41 Mon Sep 17 00:00:00 2001
From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com>
Date: Mon, 18 Dec 2023 14:10:03 -0500
Subject: [PATCH 010/204] Update dec-sl-updates.md

---
 .../release-notes/74-Dec-2023/dec-sl-updates.md          | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md
index 3f43222685a..b2db3ef7adb 100644
--- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md
+++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md
@@ -1,11 +1,11 @@
 ---
-title: “Updates and fixes: dbt Semantic Layer and MetricFlow updates for the month of December 2023.”
-description: “December 2023: Enhanced Tableau integration, BIGINT support, LookML to MetricFlow conversion, and deprecation of legacy features.”
-sidebar_label: “Updates and fixes: dbt Semantic Layer and MetricFlow.”
+title: "Updates and fixes: dbt Semantic Layer and MetricFlow updates for December 2023."
+description: "December 2023: Enhanced Tableau integration, BIGINT support, LookML to MetricFlow conversion, and deprecation of legacy features."
+sidebar_label: "Updates and fixes: dbt Semantic Layer and MetricFlow."
 sidebar_position: 08
 date: 2023-12-22
 ---
-The dbt Labs team continues to work on adding new features, fixing bugs, and increasing reliability for the dbt Semantic Layer and MetricFlow. Here are the updates and fixes for the month of December 2023.
+The dbt Labs team continues to work on adding new features, fixing bugs, and increasing reliability for the dbt Semantic Layer and MetricFlow. Here are the updates and fixes for December 2023.
 
 ## Bug fixes
 - The dbt Semantic Layer integration with Tableau now supports using exclude in its user interface. Previously it wasn’t supported.

From 17dcbc1cc80a08f19e2dbd274d3fd2cce230afb8 Mon Sep 17 00:00:00 2001
From: rpourzand
Date: Mon, 18 Dec 2023 11:50:02 -0800
Subject: [PATCH 011/204] Update dec-sl-updates.md

A couple of edits after talking to the team!
---
 .../release-notes/74-Dec-2023/dec-sl-updates.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md
index b2db3ef7adb..cfdc8dd8b69 100644
--- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md
+++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md
@@ -8,12 +8,13 @@ date: 2023-12-22
 The dbt Labs team continues to work on adding new features, fixing bugs, and increasing reliability for the dbt Semantic Layer and MetricFlow. Here are the updates and fixes for December 2023.
 
 ## Bug fixes
-- The dbt Semantic Layer integration with Tableau now supports using exclude in its user interface. Previously it wasn’t supported.
+- The dbt Semantic Layer integration with Tableau now supports queries that resolve to a "NOT IN" clause (for example: using "exclude" in the filtering user interface). Previously it wasn’t supported.
 - The dbt Semantic Layer can support `BIGINT` with over 18 digits. Previously it would return an error.
+- We fixed a memory leak that would result in intermittent errors when querying our JDBC API.
## New features - Test From 6edcf6509865a31359a6a4590d76adbdf9390f9c Mon Sep 17 00:00:00 2001 From: rpourzand Date: Mon, 18 Dec 2023 12:55:02 -0800 Subject: [PATCH 012/204] Update dec-sl-updates.md Diego recommendation --- .../dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md index cfdc8dd8b69..6757a59b86d 100644 --- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md +++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md @@ -9,7 +9,7 @@ The dbt Labs team continues to work on adding new features, fixing bugs, and inc ## Bug fixes - The dbt Semantic Layer integration with Tableau now supports queries that resolve to a "NOT IN" clause (for example: using "exclude" in the filtering user interface). Previously it wasn’t supported. -- The dbt Semantic Layer can support `BIGINT` with over 18 digits. Previously it would return an error. +- The dbt Semantic Layer can support `BIGINT` with precision greater than 18. Previously it would return an error. - We fixed a memory leak that would amount in intermittent errors when querying our JDBC API. ## Improvements From 2eadd7126b895110bd816849d8f85b36850ae29d Mon Sep 17 00:00:00 2001 From: rpourzand Date: Mon, 18 Dec 2023 12:57:42 -0800 Subject: [PATCH 013/204] Update dec-sl-updates.md more recommendations from diego --- .../dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md | 1 + 1 file changed, 1 insertion(+) diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md index 6757a59b86d..8f0bdd593c7 100644 --- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md +++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md @@ -11,6 +11,7 @@ The dbt Labs team continues to work on adding new features, fixing bugs, and inc - The dbt Semantic Layer integration with Tableau now supports queries that resolve to a "NOT IN" clause (for example: using "exclude" in the filtering user interface). Previously it wasn’t supported. - The dbt Semantic Layer can support `BIGINT` with precision greater than 18. Previously it would return an error. - We fixed a memory leak that would amount in intermittent errors when querying our JDBC API. +- Added support for converting various Redshift and Postgres specific data types. Previously, the driver would throw an error when encountering columns with those types. ## Improvements - dbt Labs deprecated [dbt Metrics and the legacy dbt Semantic Layer](/docs/dbt-versions/release-notes/Dec-2023/legacy-sl), both supported on dbt version 1.5 or lower. This change came into effect on December 15th, 2023. 
From 87caeaaef510a9b4f950638dd1acb8662f284059 Mon Sep 17 00:00:00 2001 From: sachinthakur96 Date: Tue, 19 Dec 2023 14:49:56 +0530 Subject: [PATCH 014/204] Adding --- website/docs/docs/core/connect-data-platform/vertica-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/vertica-setup.md b/website/docs/docs/core/connect-data-platform/vertica-setup.md index 9274c22ebbe..525e1be86fc 100644 --- a/website/docs/docs/core/connect-data-platform/vertica-setup.md +++ b/website/docs/docs/core/connect-data-platform/vertica-setup.md @@ -6,7 +6,7 @@ meta: authors: 'Vertica (Former authors: Matthew Carter, Andy Regan, Andrew Hedengren)' github_repo: 'vertica/dbt-vertica' pypi_package: 'dbt-vertica' - min_core_version: 'v1.6.0 and newer' + min_core_version: 'v1.7.0 and newer' cloud_support: 'Not Supported' min_supported_version: 'Vertica 23.4.0' slack_channel_name: 'n/a' From 5748454b5bfb9f52b65c5bbfbebe1a5f152f4768 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Tue, 19 Dec 2023 08:10:01 -0500 Subject: [PATCH 015/204] Update website/docs/docs/core/connect-data-platform/trino-setup.md --- website/docs/docs/core/connect-data-platform/trino-setup.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/trino-setup.md b/website/docs/docs/core/connect-data-platform/trino-setup.md index 354e95ef03d..28d158758e3 100644 --- a/website/docs/docs/core/connect-data-platform/trino-setup.md +++ b/website/docs/docs/core/connect-data-platform/trino-setup.md @@ -282,7 +282,9 @@ The only authentication parameter to set for OAuth 2.0 is `method: oauth_console For more information, refer to both [OAuth 2.0 authentication](https://trino.io/docs/current/security/oauth2.html) in the Trino docs and the [README](https://github.com/trinodb/trino-python-client#oauth2-authentication) for the Trino Python client. -The only difference between `oauth_console` and `oauth` is that in the latter a browser is automatically opened with authentication URL and in `oauth_console` URL is printed to the console. +The only difference between `oauth_console` and `oauth` is: +- `oauth` — An authentication URL automatically opens in a browser. +- `oauth_console` — A URL is printed to the console. It's recommended that you install `keyring` to cache the OAuth 2.0 token over multiple dbt invocations by running `python -m pip install 'trino[external-authentication-token-cache]'`. The `keyring` package is not installed by default. From be1c0eb7635e06604d838d081da45495e6e41b74 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Tue, 19 Dec 2023 08:22:39 -0500 Subject: [PATCH 016/204] Update docs.md fold in Doug's feedback --- .../docs/reference/resource-configs/docs.md | 28 ++++++++----------- 1 file changed, 12 insertions(+), 16 deletions(-) diff --git a/website/docs/reference/resource-configs/docs.md b/website/docs/reference/resource-configs/docs.md index b188753010f..d5f7b6499d8 100644 --- a/website/docs/reference/resource-configs/docs.md +++ b/website/docs/reference/resource-configs/docs.md @@ -23,31 +23,29 @@ default_value: {show: true} You can configure `docs` behavior for many resources at once by setting in `dbt_project.yml`. 
You can also use the `docs` config in `properties.yaml` files, to set or override documentation behaviors for specific resources: - - + ```yml -version: 2 - models: - - name: model_name - docs: + [](/reference/resource-configs/resource-path): + +docs: show: true | false - node_color: "black" + ``` - + + + ```yml +version: 2 -```yml models: - [](/reference/resource-configs/resource-path): - +docs: + - name: model_name + docs: show: true | false - + node_color: "black" ``` - @@ -153,8 +151,6 @@ macros: ``` -Also refer to [macro properties](/reference/macro-properties). - @@ -162,7 +158,7 @@ Also refer to [macro properties](/reference/macro-properties). ## Definition The docs field can be used to provide documentation-specific configuration to models. It supports the doc attribute `show`, which controls whether or not models are shown in the auto-generated documentation website. It also supports `node_color` for some node types. -**Note:** hidden models will still appear in the dbt DAG visualization but will be identified as "hidden.” +**Note:** Hidden models will still appear in the dbt DAG visualization but will be identified as "hidden.” ## Default The default value for `show` is `true`. From 326af16079cb867dbc9eb9a99f76df8dc9a4da4c Mon Sep 17 00:00:00 2001 From: Amy Chen Date: Tue, 19 Dec 2023 10:01:05 -0500 Subject: [PATCH 017/204] upload the guide --- .../2023-12-20-partner-integration-guide.md | 102 ++++++++++++++++++ 1 file changed, 102 insertions(+) create mode 100644 website/blog/2023-12-20-partner-integration-guide.md diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md new file mode 100644 index 00000000000..0eed3302716 --- /dev/null +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -0,0 +1,102 @@ +--- +title: "How to integrate with dbt" +description: "This guide will cover the ways to integrate with dbt Cloud" +slug: integrating-with-dbtcloud + +authors: [amy_chen] + +tags: [dbt Cloud, Integrations, APIs] +hide_table_of_contents: false + +date: 2023-12-20 +is_featured: false +--- + + +## Overview + +Over the course of my 3 years running the Partner Engineering team at dbt Labs, the most common question I have been asked is “How do we integrate with dbt?”. Because those conversations often start out at the same place, I decided to create this guide so I’m no longer the blocker to fundamental information. This also allows us to skip the intro and get to the fun conversations like what a joint solution for our customers would look like so much faster. + +Now this guide does not include how to integrate with dbt Core. If you’re interested in creating an dbt Adapter, **[please check out this documentation instead.](/guides/dbt-ecosystem/adapter-development/1-what-are-adapters)** + +Instead we are going to focus on integrating with dbt Cloud. Integrating with dbt Cloud is a key requirement to become a dbt Labs technology partner, opening the door to a variety of collaborative commercial opportunities. + +Here I will cover how to get started, potential use cases you want to solve for, and points of integrations to do so. + +## New to dbt Cloud? + +If you're new to dbt and dbt Cloud, we recommend you and your software developers try our [Getting Started Quickstarts](quickstarts) after reading [What is dbt?](/docs/introduction). The documentation will help you familiarize yourself with how our users interact with dbt. By going through this, you will also create a sample dbt project to test your integration. 
+
+If you require a partner dbt Cloud account to test on, we can upgrade an existing account or a trial account. **This account may only be used for development, training, and demonstration purposes.** Please speak to your partner manager if you're interested, and provide the account ID (shown in the URL). Our partner account has all of the enterprise-level functionality and can be provided with a signed partnerships agreement.
+
+## Integration Points
+
+- [Discovery API (formerly referred to as the Metadata API)](/docs/dbt-cloud-apis/discovery-api)
+  - **Overview:** This GraphQL API allows you to query the metadata that dbt Cloud generates every time you run a dbt project. We have two schemas available (environment and job level). By default, we always recommend that you integrate with the environment-level schema because it contains the latest state and historical run results of all the jobs run on the dbt Cloud project. The job-level schema will only provide you the metadata of one job, giving you only a small snapshot of part of the project.
+- [Administrative API (also referred to as the Admin API)](/docs/dbt-cloud-apis/admin-cloud-api)
+  - **Overview:** This REST API allows you to orchestrate dbt Cloud job runs and administer a dbt Cloud account. For metadata retrieval, we recommend integrating with the Discovery API instead.
+- Webhooks
+  - **Overview:** Outbound webhooks can send notifications about your dbt Cloud jobs to other systems. These webhooks allow you to get the latest information on your dbt jobs in real time.
+  - [Link to documentation](/docs/deploy/webhooks)
+- Semantic Layer/Metrics
+  - **Overview:** Our Semantic Layer is made up of two parts: metrics definitions and the ability to interactively query the dbt metrics. For more details, here are a [basic overview](/docs/use-dbt-semantic-layer/dbt-sl) and [our best practices](/guides/dbt-ecosystem/sl-partner-integration-guide).
+  - Metrics definitions can be pulled from the Discovery API (linked above) or the Semantic Layer Driver/GraphQL API. The key difference is that the Discovery API is not able to pull the semantic graph, which provides the list of available dimensions that one can query per metric. That is only available via the SL Driver/APIs. The tradeoff is that the SL Driver/APIs do not have access to the lineage of the entire dbt project (that is, how the dbt metrics depend on dbt models).
+  - [We have three available integration points for the Semantic Layer API.](/docs/dbt-cloud-apis/sl-api-overview)
+
+## dbt Cloud Hosting and Authentication
+
+To use the dbt Cloud APIs, you will need access to the customer's access URLs. Depending on their dbt Cloud setup, they will have a different access URL. To find out more, here is the [documentation](/docs/cloud/about-cloud/regions-ip-addresses) to understand all the possible configurations. My recommendation is to allow the customer to provide their own URL to simplify support.
+
+If the customer is on an Azure Single Tenant instance, they do not currently have access to the Discovery API or the Semantic Layer APIs.
+
+For authentication, we highly recommend that your integration uses account service tokens. You can read more about how to create a service token and what permission sets to provide it [here](/docs/dbt-cloud-apis/service-tokens). Please note that, depending on their plan type, they will have access to different permission sets. We **do not** recommend that users supply their user bearer tokens for authentication. This can cause issues if the user leaves the organization, and it grants your integration access to all the dbt Cloud accounts associated with that user rather than just the account (and related projects) they want to integrate with.
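+
+To make the service-token flow concrete, here is a minimal sketch of hitting the Discovery API's environment-level schema. Treat it as an illustration only: the endpoint shown is the multi-tenant North America one, and the header format and GraphQL field names are assumptions from memory that you should verify against the Discovery API docs.
+
+```python
+# Hypothetical sketch, not an official client. Endpoint, auth header, and
+# GraphQL field names are assumptions to double-check against the docs.
+import requests
+
+DISCOVERY_URL = "https://metadata.cloud.getdbt.com/graphql"  # varies by region and plan
+QUERY = """
+query ($environmentId: BigInt!, $first: Int!) {
+  environment(id: $environmentId) {
+    applied {
+      models(first: $first) {
+        edges { node { name uniqueId executionInfo { lastRunStatus } } }
+      }
+    }
+  }
+}
+"""
+
+def latest_model_states(service_token: str, environment_id: int) -> list[dict]:
+    response = requests.post(
+        DISCOVERY_URL,
+        headers={"Authorization": f"Bearer {service_token}"},
+        json={"query": QUERY, "variables": {"environmentId": environment_id, "first": 100}},
+        timeout=30,
+    )
+    response.raise_for_status()
+    edges = response.json()["data"]["environment"]["applied"]["models"]["edges"]
+    return [edge["node"] for edge in edges]
+```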
This can cause issues if the user leaves the organization and provides you access to all the dbt Cloud accounts associated to the user rather than just the account (and related projects) that they want to integrate with. + +## Potential Use Cases + +- Event-based orchestration + - **Desired Action:** You wish to receive information that a scheduled dbt Cloud Job has been completed or kick off a dbt Cloud job. You can align your product schedule to the dbt Cloud run schedule. + - **Examples:** Kicking off a dbt Job after the ETL job of extracting and loading the data is completed. Or receiving a webhook after the job has been completed to kick off your reverse ETL job. + - **Integration Points:** Webhooks and/or Admin API +- dbt Lineage + - **Desired Action:** You wish to interpolate the dbt lineage metadata into your tool. + - **Example: In your tool, you wish to pull in the dbt DAG into your lineage diagram. [This is what you could pull and how to do this.](/docs/dbt-cloud-apis/discovery-use-cases-and-examples#whats-the-full-data-lineage)** + - **Integration Points:** Discovery API +- dbt Environment/Job metadata + - **Desired Action:** You wish to interpolate dbt Cloud job information into your tool, including the status of the jobs, the status of the tables executed in the run, what tests passed, etc. + - **Example:** In your Business Intelligence tool, stakeholders select from tables that a dbt model created. You show the last time the model passed its tests/last run to show that the tables are current and can be trusted. [This is what you could pull and how to do this.](/docs/dbt-cloud-apis/discovery-use-cases-and-examples#whats-the-latest-state-of-each-model) + - **Integration Points:** Discovery API +- dbt Model Documentation + - **Desired Action:** You wish to interpolate dbt Project Information, including model descriptions, column descriptions, etc. + - **Example:** You want to extract out the dbt model description so that you can display and help the stakeholder understand what they are selecting from. This way, the creators can easily pass on the information without updating another system. [This is what you could pull and how to do this.](/docs/dbt-cloud-apis/discovery-use-cases-and-examples#what-does-this-dataset-and-its-columns-mean) + - **Integration Points:** Discovery API + +**dbt Core only users will have no access to the above integration points.** For dbt metadata, oftentimes our partners will create a dbt core integration by using the [dbt artifacts](/product/semantic-layer/) files generated by each run and provided by the user. With our Discovery API, we are providing a dynamic way to get the latest up to date information, parsed out for you. + +## dbt Cloud Plans & Permissions + +[The dbt Cloud plan type](https://www.getdbt.com/pricing) will change what the user has access to. There are four different types of plans: + +- **Developer**: This is free and available to one user with a limited amount of successful models built. This plan cannot access the APIs, Webhooks, or Semantic Layer. Limited to 1 project. +- **Team:** This plan has access to the APIs, Webhooks, and Semantic Layer. You may have up to 8 users on the account and one dbt Cloud Project. This is limited to 15,000 successful models built. +- **Enterprise** (Multi-tenant/Multi-cell): This plan has access to the APIs, Webhooks, and Semantic Layer. They may have more than one dbt Cloud Project based on how many dbt projects/domains they have using dbt. 
Majority of our enterprise customers are on multi-tenant dbt Cloud instances. +- **Enterprise** (Single-tenant): This plan may have access to the APIs, Webhooks, and Semantic Layer. If you are working with a specific customer, let us know, and we can confirm if their instance has access. + +## Frequently Asked Questions + +- What is a dbt Cloud Project? + - A dbt Cloud project is made up of two connections: one to the git repository and one to the data warehouse/platform. Most customers will have only one dbt Cloud Project in their account but there are enterprise clients who might have more depending on their use cases.The project also encapsulates two types of environments at minimal: a development environment and deployment environment. + - Oftentimes folks refer to the [dbt Project](/docs/build/projects) as the code hosted in their git repository. +- What is a dbt Cloud Environment? + - [For an overview, check out this documentation.](/docs/environments-in-dbt) At minimal an project will have one deployment type environment that they will be executing jobs on. The development environment powers the dbt Cloud IDE and Cloud CLI. +- Can we write back to the dbt project? + - At this moment, we do not have a Write API. A dbt project is hosted in a git repository, so if you have a git provider integration, you can manually open up a Pull Request on the project to maintain the version control process. +- Can you provide column-level information in the lineage? + - Column-level lineage is currently in beta release with more information to come. +- How do I get a Partner Account? + - Contact your Partner Manager with your account id (in your URL) +- Why should I not use the Admin API to pull out the dbt artifacts for metadata? + - We recommend not integrating with the Admin API to extract the dbt artifacts documentation. This is because the Discovery API provides more extensive information, a user-friendly structure and more reliable integration point. +- How do I get access to the dbt Brand assets? + - Check out this [page](https://www.getdbt.com/brand-guidelines/). Please make sure you’re not using our old logo(hint: there should only be one hole in the logo). Please also note that the name dbt and the dbt logo are trademarked by dbt Labs, and that use is governed by our brand guidelines - which are fairly specific for commercial uses. If you have any questions about proper use of our marks, please ask for your partner manager. +- How do I engage with the partnerships team? + - Email partnerships@dbtlabs.com. \ No newline at end of file From 1f14e32df332d5fa66bb59ee2c977f5bbd013fb7 Mon Sep 17 00:00:00 2001 From: Jordan Stein Date: Tue, 19 Dec 2023 09:11:13 -0800 Subject: [PATCH 018/204] update join page with multi-hop limitation --- website/docs/docs/build/join-logic.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/website/docs/docs/build/join-logic.md b/website/docs/docs/build/join-logic.md index 29b9d101a59..e701fd1b012 100644 --- a/website/docs/docs/build/join-logic.md +++ b/website/docs/docs/build/join-logic.md @@ -84,6 +84,10 @@ mf query --metrics average_purchase_price --dimensions metric_time,user_id__type ## Multi-hop joins +:::info +MetricFlow currently supports multi-hop joins with up to two hops. This means we can render joins between three tables at most. +::: + MetricFlow allows users to join measures and dimensions across a graph of entities, which we refer to as a 'multi-hop join.' This is because users can move from one table to another like a 'hop' within a graph. 
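+
+On the command line, a multi-hop request uses the same `mf query` syntax shown earlier; the group-by item simply walks one more entity. As a rough sketch (the entity and dimension names here are hypothetical, separate from the example schema below):
+
+```shell
+# two hops: measure source -> user_id entity -> company_id entity
+mf query --metrics average_purchase_price --dimensions metric_time,user_id__company_id__company_name
+```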
Here's an example schema for reference: From 7eaa763500508ad0caf58db97d679883c6289294 Mon Sep 17 00:00:00 2001 From: Jordan Date: Tue, 19 Dec 2023 09:18:14 -0800 Subject: [PATCH 019/204] Update website/docs/docs/build/join-logic.md Co-authored-by: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> --- website/docs/docs/build/join-logic.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/build/join-logic.md b/website/docs/docs/build/join-logic.md index e701fd1b012..e2656692eee 100644 --- a/website/docs/docs/build/join-logic.md +++ b/website/docs/docs/build/join-logic.md @@ -85,7 +85,7 @@ mf query --metrics average_purchase_price --dimensions metric_time,user_id__type ## Multi-hop joins :::info -MetricFlow currently supports multi-hop joins with up to two hops. This means we can render joins between three tables at most. +MetricFlow can join three tables at most, supporting multi-hop joins with a limit of two hops. ::: MetricFlow allows users to join measures and dimensions across a graph of entities, which we refer to as a 'multi-hop join.' This is because users can move from one table to another like a 'hop' within a graph. From 4b7af55488c0fe2b5d2a710f744b8052028664a4 Mon Sep 17 00:00:00 2001 From: Amy Chen Date: Tue, 19 Dec 2023 12:40:33 -0500 Subject: [PATCH 020/204] fix broken links --- .../2023-12-20-partner-integration-guide.md | 32 +++++++++---------- website/blog/authors.yml | 2 +- 2 files changed, 16 insertions(+), 18 deletions(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 0eed3302716..f51181bf588 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -11,13 +11,11 @@ hide_table_of_contents: false date: 2023-12-20 is_featured: false --- - - ## Overview Over the course of my 3 years running the Partner Engineering team at dbt Labs, the most common question I have been asked is “How do we integrate with dbt?”. Because those conversations often start out at the same place, I decided to create this guide so I’m no longer the blocker to fundamental information. This also allows us to skip the intro and get to the fun conversations like what a joint solution for our customers would look like so much faster. -Now this guide does not include how to integrate with dbt Core. If you’re interested in creating an dbt Adapter, **[please check out this documentation instead.](/guides/dbt-ecosystem/adapter-development/1-what-are-adapters)** +Now this guide does not include how to integrate with dbt Core. If you’re interested in creating an dbt Adapter, **[please check out this documentation instead.](https://docs.getdbt.com/guides/dbt-ecosystem/adapter-development/1-what-are-adapters)** Instead we are going to focus on integrating with dbt Cloud. Integrating with dbt Cloud is a key requirement to become a dbt Labs technology partner, opening the door to a variety of collaborative commercial opportunities. @@ -25,31 +23,31 @@ Here I will cover how to get started, potential use cases you want to solve for, ## New to dbt Cloud? -If you're new to dbt and dbt Cloud, we recommend you and your software developers try our [Getting Started Quickstarts](quickstarts) after reading [What is dbt?](/docs/introduction). The documentation will help you familiarize yourself with how our users interact with dbt. By going through this, you will also create a sample dbt project to test your integration. 
+If you're new to dbt and dbt Cloud, we recommend you and your software developers try our [Getting Started Quickstarts](https://docs.getdbt.com/quickstarts) after reading [What is dbt?](https://docs.getdbt.com/docs/introduction). The documentation will help you familiarize yourself with how our users interact with dbt. By going through this, you will also create a sample dbt project to test your integration. If you require a partner dbt Cloud account to test on, we can upgrade an existing account or a trial account. **This account may only be used for development, training, and demonstration purposes.** Please speak to your partner manager if you're interested and provide the account id (provided in the URL). Our partner account has all of the enterprise level functionality and can be provided with a signed partnerships agreement. ## Integration Points -- [Discovery API (formerly referred to as Metadata API)](/docs/dbt-cloud-apis/discovery-api) +- [Discovery API (formerly referred to as Metadata API)](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-api) - **Overview**: This GraphQL API allows you to query the metadata that dbt Cloud generates every time you run a dbt Project. We have two schemas available (environment and job level). By default, we always recommend that you integrate with the environment level schema because it contains the latest state and historical run results of all the jobs run on the dbt Cloud project. The job level will only provide you the metadata of one job, giving you only a small snapshot of part of the project. -- [Administrative API (also referred to as the Admin API)](/docs/dbt-cloud-apis/admin-cloud-api) +- [Administrative API (also referred to as the Admin API)](https://docs.getdbt.com/docs/dbt-cloud-apis/admin-cloud-api) - **Overview:** This REST API allows you to orchestrate dbt Cloud jobs runs and help you administer a dbt Cloud account. For metadata retrieval, we recommend integrating with the Discovery API instead. - Webhooks - **Overview:** Outbound webhooks can send notifications about your dbt Cloud jobs to other systems. These webhooks allow you to get the latest information on your dbt jobs in real time. - - [Link to documentation](/docs/deploy/webhooks) + - [Link to documentation](https://docs.getdbt.com/docs/deploy/webhooks) - Semantic Layers/Metrics - - **Overview: Our Semantic Layer is made up of two parts: metrics definitions and the ability to interactively query the dbt metrics. For more details, here is a [basic overview](docs/use-dbt-semantic-layer/dbt-sl) and [our best practices](/guides/dbt-ecosystem/sl-partner-integration-guide).** + - **Overview: Our Semantic Layer is made up of two parts: metrics definitions and the ability to interactively query the dbt metrics. For more details, here is a [basic overview](https://docs.getdbt.com/docs/use-dbt-semantic-layer/dbt-sl) and [our best practices](https://docs.getdbt.com/guides/dbt-ecosystem/sl-partner-integration-guide).** - Metrics definitions can be pulled from the Discovery API (linked above) or the Semantic Layer Driver/GraphQL API. The key difference is the Discovery API is not able to pull the semantic graph which provides the list of available dimensions that one can query per metric. That is only available via the SL Driver/APIs. 
The tradeoff is the SL Driver/APIs does not have access to the lineage of the entire dbt project (i.e how the dbt metrics dependencies on dbt models) - - [We have three available integration points for the Semantic Layer API.](/docs/dbt-cloud-apis/sl-api-overview) + - [We have three available integration points for the Semantic Layer API.](https://docs.getdbt.com/docs/dbt-cloud-apis/sl-api-overview) ## dbt Cloud Hosting and Authentication -To use the dbt Cloud APIs, you will need access to the customer’s access urls. Depending on their dbt Cloud setup, they will have a different access url. To find out more, here is the [documentation](/docs/cloud/about-cloud/regions-ip-addresses) to understand all the possible configurations. My recommendation is to allow the customer to provide their own url to simplify support. +To use the dbt Cloud APIs, you will need access to the customer’s access urls. Depending on their dbt Cloud setup, they will have a different access url. To find out more, here is the [documentation](https://docs.getdbt.com/docs/cloud/about-cloud/regions-ip-addresses) to understand all the possible configurations. My recommendation is to allow the customer to provide their own url to simplify support. If the customer is on an Azure Single Tenant instance, they do not currently have access to the Discovery API or the Semantic Layer APIs. -For authentication, we highly recommend that your integration uses account service tokens. You can read more about how to create a service token and what permission sets to provide it [here](/docs/dbt-cloud-apis/service-tokens). Please note depending on their plan type, they will have access to different permission sets. We **do not** recommend that users supply their user bearer tokens for authentication. This can cause issues if the user leaves the organization and provides you access to all the dbt Cloud accounts associated to the user rather than just the account (and related projects) that they want to integrate with. +For authentication, we highly recommend that your integration uses account service tokens. You can read more about how to create a service token and what permission sets to provide it [here](https://docs.getdbt.com/docs/dbt-cloud-apis/service-tokens). Please note depending on their plan type, they will have access to different permission sets. We **do not** recommend that users supply their user bearer tokens for authentication. This can cause issues if the user leaves the organization and provides you access to all the dbt Cloud accounts associated to the user rather than just the account (and related projects) that they want to integrate with. ## Potential Use Cases @@ -59,18 +57,18 @@ For authentication, we highly recommend that your integration uses account servi - **Integration Points:** Webhooks and/or Admin API - dbt Lineage - **Desired Action:** You wish to interpolate the dbt lineage metadata into your tool. - - **Example: In your tool, you wish to pull in the dbt DAG into your lineage diagram. [This is what you could pull and how to do this.](/docs/dbt-cloud-apis/discovery-use-cases-and-examples#whats-the-full-data-lineage)** + - **Example: In your tool, you wish to pull in the dbt DAG into your lineage diagram. 
[This is what you could pull and how to do this.](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-use-cases-and-examples#whats-the-full-data-lineage)** - **Integration Points:** Discovery API - dbt Environment/Job metadata - **Desired Action:** You wish to interpolate dbt Cloud job information into your tool, including the status of the jobs, the status of the tables executed in the run, what tests passed, etc. - - **Example:** In your Business Intelligence tool, stakeholders select from tables that a dbt model created. You show the last time the model passed its tests/last run to show that the tables are current and can be trusted. [This is what you could pull and how to do this.](/docs/dbt-cloud-apis/discovery-use-cases-and-examples#whats-the-latest-state-of-each-model) + - **Example:** In your Business Intelligence tool, stakeholders select from tables that a dbt model created. You show the last time the model passed its tests/last run to show that the tables are current and can be trusted. [This is what you could pull and how to do this.](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-use-cases-and-examples#whats-the-latest-state-of-each-model) - **Integration Points:** Discovery API - dbt Model Documentation - **Desired Action:** You wish to interpolate dbt Project Information, including model descriptions, column descriptions, etc. - - **Example:** You want to extract out the dbt model description so that you can display and help the stakeholder understand what they are selecting from. This way, the creators can easily pass on the information without updating another system. [This is what you could pull and how to do this.](/docs/dbt-cloud-apis/discovery-use-cases-and-examples#what-does-this-dataset-and-its-columns-mean) + - **Example:** You want to extract out the dbt model description so that you can display and help the stakeholder understand what they are selecting from. This way, the creators can easily pass on the information without updating another system. [This is what you could pull and how to do this.](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-use-cases-and-examples#what-does-this-dataset-and-its-columns-mean) - **Integration Points:** Discovery API -**dbt Core only users will have no access to the above integration points.** For dbt metadata, oftentimes our partners will create a dbt core integration by using the [dbt artifacts](/product/semantic-layer/) files generated by each run and provided by the user. With our Discovery API, we are providing a dynamic way to get the latest up to date information, parsed out for you. +**dbt Core only users will have no access to the above integration points.** For dbt metadata, oftentimes our partners will create a dbt core integration by using the [dbt artifacts](https://www.getdbt.com/product/semantic-layer/) files generated by each run and provided by the user. With our Discovery API, we are providing a dynamic way to get the latest up to date information, parsed out for you. ## dbt Cloud Plans & Permissions @@ -85,9 +83,9 @@ For authentication, we highly recommend that your integration uses account servi - What is a dbt Cloud Project? - A dbt Cloud project is made up of two connections: one to the git repository and one to the data warehouse/platform. 
Most customers will have only one dbt Cloud Project in their account but there are enterprise clients who might have more depending on their use cases.The project also encapsulates two types of environments at minimal: a development environment and deployment environment. - - Oftentimes folks refer to the [dbt Project](/docs/build/projects) as the code hosted in their git repository. + - Oftentimes folks refer to the [dbt Project](https://docs.getdbt.com/docs/build/projects) as the code hosted in their git repository. - What is a dbt Cloud Environment? - - [For an overview, check out this documentation.](/docs/environments-in-dbt) At minimal an project will have one deployment type environment that they will be executing jobs on. The development environment powers the dbt Cloud IDE and Cloud CLI. + - [For an overview, check out this documentation.](https://docs.getdbt.com/docs/environments-in-dbt) At minimal an project will have one deployment type environment that they will be executing jobs on. The development environment powers the dbt Cloud IDE and Cloud CLI. - Can we write back to the dbt project? - At this moment, we do not have a Write API. A dbt project is hosted in a git repository, so if you have a git provider integration, you can manually open up a Pull Request on the project to maintain the version control process. - Can you provide column-level information in the lineage? diff --git a/website/blog/authors.yml b/website/blog/authors.yml index cd2bd162935..82cc300bdc8 100644 --- a/website/blog/authors.yml +++ b/website/blog/authors.yml @@ -1,6 +1,6 @@ amy_chen: image_url: /img/blog/authors/achen.png - job_title: Staff Partner Engineer + job_title: Product Partnerships Manager links: - icon: fa-linkedin url: https://www.linkedin.com/in/yuanamychen/ From 7b0efb2a96e769b240915fd0d4bc7c43ce4495cd Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Tue, 19 Dec 2023 13:05:53 -0500 Subject: [PATCH 021/204] add discourse link + simplify language this pr adds a discourse link to incremental strategies discussion for large datasets and simplifies the 'when should i use an incremental model' paragraph/section. --- website/docs/docs/build/incremental-models.md | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-) diff --git a/website/docs/docs/build/incremental-models.md b/website/docs/docs/build/incremental-models.md index 2a247263159..ed0e6b51f02 100644 --- a/website/docs/docs/build/incremental-models.md +++ b/website/docs/docs/build/incremental-models.md @@ -154,17 +154,21 @@ For detailed usage instructions, check out the [dbt run](/reference/commands/run # Understanding incremental models ## When should I use an incremental model? -It's often desirable to build models as tables in your data warehouse since downstream queries are more performant. While the `table` materialization also creates your models as tables, it rebuilds the table on each dbt run. These runs can become problematic in that they use a lot of compute when either: -* source data tables have millions, or even billions, of rows. -* the transformations on the source data are computationally expensive (that is, take a long time to execute), for example, complex Regex functions, or UDFs are being used to transform data. -Like many things in programming, incremental models are a trade-off between complexity and performance. 
While they are not as straightforward as the `view` and `table` materializations, they can lead to significantly better performance of your dbt runs.
+Building models as tables in your data warehouse is often preferred for better query performance. However, using `table` materialization can be computationally intensive, especially when:
+
+- Source data has millions or billions of rows.
+- Data transformations on the source data are computationally expensive (that is, they take a long time to execute) and complex, such as regex functions or UDFs.
+
+Incremental models offer a balance between complexity and improved performance compared to `view` and `table` materializations and offer better performance of your dbt runs.
+
+In addition to these considerations for incremental models, it's important to understand their limits and challenges, particularly with large datasets. For more insights into efficient strategies, performance considerations, and the handling of late-arriving data in incremental models, refer to the [On the Limits of Incrementality](https://discourse.getdbt.com/t/on-the-limits-of-incrementality/303) discourse discussion.
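+
+As a rough sketch of what this looks like in practice (the source, column, and model logic here are hypothetical), an incremental model is an ordinary model plus an incremental materialization and a filter that only applies on incremental runs:
+
+```sql
+{{ config(materialized='incremental') }}
+
+select * from {{ source('app', 'events') }}
+
+{% if is_incremental() %}
+  -- on incremental runs, only process rows that arrived since the last build
+  where event_time > (select max(event_time) from {{ this }})
+{% endif %}
+```
+
+The `is_incremental()` macro used in this filter is explained in the next section.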
## Understanding the is_incremental() macro The `is_incremental()` macro will return `True` if _all_ of the following conditions are met: From eb27064c4d942eb8266aebffcf03586fafbb546e Mon Sep 17 00:00:00 2001 From: Jordan Stein Date: Tue, 19 Dec 2023 14:15:28 -0800 Subject: [PATCH 023/204] add mf bug fixes and ambigous resolution --- .../release-notes/74-Dec-2023/dec-sl-updates.md | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md index 8f0bdd593c7..cc40dd88461 100644 --- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md +++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md @@ -12,10 +12,19 @@ The dbt Labs team continues to work on adding new features, fixing bugs, and inc - The dbt Semantic Layer can support `BIGINT` with precision greater than 18. Previously it would return an error. - We fixed a memory leak that would amount in intermittent errors when querying our JDBC API. - Added support for converting various Redshift and Postgres specific data types. Previously, the driver would throw an error when encountering columns with those types. +- Apply time offset for nested dervied & ratio metrics ([#882](https://github.com/dbt-labs/metricflow/issues/882)) +- Fix Incorrect SQL Column Name Rendering for WhereConstraintNode ([#908](https://github.com/dbt-labs/metricflow/issues/908)) +- `Unable To Satisfy Query Error` with Cumulative Metrics in Saved Queries ([#917](https://github.com/dbt-labs/metricflow/issues/917)) +- Fixes a bug in dimension-only queries where the filter column is removed before the filter has been applied. ([#923](https://github.com/dbt-labs/metricflow/issues/923)) +- Bug fix: Keep where constraint column until used for nested derived offset metric queries. ([#930](https://github.com/dbt-labs/metricflow/issues/930)) ## Improvements - dbt Labs deprecated [dbt Metrics and the legacy dbt Semantic Layer](/docs/dbt-versions/release-notes/Dec-2023/legacy-sl), both supported on dbt version 1.5 or lower. This change came into effect on December 15th, 2023. - The [dbt converter tool](https://github.com/dbt-labs/dbt-converter) can now help automate some of the work in converting from LookML (Looker's modeling language) for those who are migrating. Previously this wasn’t available. ## New features -- Test +- Support for ambiguous group-by-item resolution. Previously, group-by-items were input by the user in a relatively specific form. For example, the group-by-item: +``` +guest__listing__created_at__month +``` +refers to the created_at time dimension at a month grain that is resolved by joining the measure source to the dimension sources by the guest and listing entities. Now we handle this complexity for the user, and allow you to simply request ``listing__created_at__month``. If there is only one possible resolution, we will resolve it for the user. If there are multiple possible resolutions, we will ask for additional user input. 
From 815a94e5d3609191f504c6a473d52bd3bbc8a9eb Mon Sep 17 00:00:00 2001 From: wusanny <141895372+wusanny@users.noreply.github.com> Date: Wed, 20 Dec 2023 10:40:20 +1100 Subject: [PATCH 024/204] Update job-scheduler.md --- website/docs/docs/deploy/job-scheduler.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/deploy/job-scheduler.md b/website/docs/docs/deploy/job-scheduler.md index fba76f677a7..1ace16f5ff5 100644 --- a/website/docs/docs/deploy/job-scheduler.md +++ b/website/docs/docs/deploy/job-scheduler.md @@ -31,7 +31,7 @@ Familiarize yourself with these useful terms to help you understand how the job | Over-scheduled job | A situation when a cron-scheduled job's run duration becomes longer than the frequency of the job’s schedule, resulting in a job queue that will grow faster than the scheduler can process the job’s runs. | | Prep time | The time dbt Cloud takes to create a short-lived environment to execute the job commands in the user's cloud data platform. Prep time varies most significantly at the top of the hour when the dbt Cloud Scheduler experiences a lot of run traffic. | | Run | A single, unique execution of a dbt job. | -| Run slot | Run slots control the number of jobs that can run concurrently. Developer and Team plan accounts have a fixed number of run slots, and Enterprise users have [unlimited run slots](/docs/dbt-versions/release-notes/July-2023/faster-run#unlimited-job-concurrency-for-enterprise-accounts). Each running job occupies a run slot for the duration of the run. If you need more jobs to execute in parallel, consider the [Enterprise plan](https://www.getdbt.com/pricing/) | +| Run slot | Run slots control the number of jobs that can run concurrently. Developer plan has a fixed number of run slots, and Enterprise & Team users have [unlimited run slots](/docs/dbt-versions/release-notes/July-2023/faster-run#unlimited-job-concurrency-for-enterprise-accounts). Each running job occupies a run slot for the duration of the run. If you need more jobs to execute in parallel, consider the [Enterprise plan](https://www.getdbt.com/pricing/) | | Threads | When dbt builds a project's DAG, it tries to parallelize the execution by using threads. The [thread](/docs/running-a-dbt-project/using-threads) count is the maximum number of paths through the DAG that dbt can work on simultaneously. The default thread count in a job is 4. | | Wait time | Amount of time that dbt Cloud waits before running a job, either because there are no available slots or because a previous run of the same job is still in progress. | From 71657295a252d6ab8c97e2b346dde8b519940cc0 Mon Sep 17 00:00:00 2001 From: Pat Kearns Date: Wed, 20 Dec 2023 14:21:48 +1100 Subject: [PATCH 025/204] Update snowflake-setup.md --- .../docs/docs/core/connect-data-platform/snowflake-setup.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/snowflake-setup.md b/website/docs/docs/core/connect-data-platform/snowflake-setup.md index 2b426ef667b..d9d4aa6f3cb 100644 --- a/website/docs/docs/core/connect-data-platform/snowflake-setup.md +++ b/website/docs/docs/core/connect-data-platform/snowflake-setup.md @@ -98,7 +98,8 @@ Along with adding the `authenticator` parameter, be sure to run `alter account s ### Key Pair Authentication -To use key pair authentication, omit a `password` and instead provide a `private_key_path` and, optionally, a `private_key_passphrase` in your target. 
**Note:** Versions of dbt before 0.16.0 required that private keys were encrypted and a `private_key_passphrase` was provided. This behavior was changed in dbt v0.16.0. +To use key pair authentication, omit a `password` and instead provide a `private_key_path` and, optionally, a `private_key_passphrase`. +**Note:** Versions of dbt before 0.16.0 required that private keys were encrypted and a `private_key_passphrase` was provided. Since dbt 0.16.0, unencrypted private keys are allowed. Only add the passphrase if necessary. Starting from [dbt v1.5.0](/docs/dbt-versions/core), you have the option to use a `private_key` string instead of a `private_key_path`. The `private_key` string should be in either Base64-encoded DER format, representing the key bytes, or a plain-text PEM format. Refer to [Snowflake documentation](https://docs.snowflake.com/developer-guide/python-connector/python-connector-example#using-key-pair-authentication-key-pair-rotation) for more info on how they generate the key. From 3f3e4378a55c5a364f9ae10001769ea152a285ae Mon Sep 17 00:00:00 2001 From: Pat Kearns Date: Wed, 20 Dec 2023 14:29:41 +1100 Subject: [PATCH 026/204] Update connect-snowflake.md --- .../docs/cloud/connect-data-platform/connect-snowflake.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md b/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md index 5f1c4cae725..0de67e17d9d 100644 --- a/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md +++ b/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md @@ -42,10 +42,10 @@ alter user jsmith set rsa_public_key='MIIBIjANBgkqh...'; ``` 2. Finally, set the **Private Key** and **Private Key Passphrase** fields in the **Credentials** page to finish configuring dbt Cloud to authenticate with Snowflake using a key pair. - - **Note:** At this time ONLY Encrypted Private Keys are supported by dbt Cloud, and the keys must be of size 4096 or smaller. + **Note:** Since dbt 0.16.0, unencrypted private keys are allowed. Only add the passphrase if necessary. + Starting from dbt v1.5.0, you have the option to use a private_key string instead of a private_key_path. The private_key string should be in either Base64-encoded DER format, representing the key bytes, or a plain-text PEM format. Refer to Snowflake documentation for more info on how they generate the key. -3. To successfully fill in the Private Key field, you **must** include commented lines when you add the passphrase. Leaving the **Private Key Passphrase** field empty will return an error. If you're receiving a `Could not deserialize key data` or `JWT token` error, refer to [Troubleshooting](#troubleshooting) for more info. +4. To successfully fill in the Private Key field, you **must** include commented lines. If you're receiving a `Could not deserialize key data` or `JWT token` error, refer to [Troubleshooting](#troubleshooting) for more info. 
**Example:** From 71e2cd7fe375ab4593e732ecb026317200bc3410 Mon Sep 17 00:00:00 2001 From: Pat Kearns Date: Wed, 20 Dec 2023 14:31:29 +1100 Subject: [PATCH 027/204] Update connect-snowflake.md remove v from version --- .../docs/docs/cloud/connect-data-platform/connect-snowflake.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md b/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md index 0de67e17d9d..34b69f56c27 100644 --- a/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md +++ b/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md @@ -43,7 +43,7 @@ alter user jsmith set rsa_public_key='MIIBIjANBgkqh...'; 2. Finally, set the **Private Key** and **Private Key Passphrase** fields in the **Credentials** page to finish configuring dbt Cloud to authenticate with Snowflake using a key pair. **Note:** Since dbt 0.16.0, unencrypted private keys are allowed. Only add the passphrase if necessary. - Starting from dbt v1.5.0, you have the option to use a private_key string instead of a private_key_path. The private_key string should be in either Base64-encoded DER format, representing the key bytes, or a plain-text PEM format. Refer to Snowflake documentation for more info on how they generate the key. + Starting from dbt 1.5.0, you have the option to use a private_key string instead of a private_key_path. The private_key string should be in either Base64-encoded DER format, representing the key bytes, or a plain-text PEM format. Refer to Snowflake documentation for more info on how they generate the key. 4. To successfully fill in the Private Key field, you **must** include commented lines. If you're receiving a `Could not deserialize key data` or `JWT token` error, refer to [Troubleshooting](#troubleshooting) for more info. From 73195850c5e8ce5e76893d1bd6defc50ad163b09 Mon Sep 17 00:00:00 2001 From: Benoit Perigaud <8754100+b-per@users.noreply.github.com> Date: Wed, 20 Dec 2023 11:08:48 +0100 Subject: [PATCH 028/204] Update spark-setup.md Fix incorrect rendering of heading --- website/docs/docs/core/connect-data-platform/spark-setup.md | 1 + 1 file changed, 1 insertion(+) diff --git a/website/docs/docs/core/connect-data-platform/spark-setup.md b/website/docs/docs/core/connect-data-platform/spark-setup.md index 93595cea3f6..992dc182b75 100644 --- a/website/docs/docs/core/connect-data-platform/spark-setup.md +++ b/website/docs/docs/core/connect-data-platform/spark-setup.md @@ -204,6 +204,7 @@ connect_retries: 3 + ### Server side configuration Spark can be customized using [Application Properties](https://spark.apache.org/docs/latest/configuration.html). Using these properties the execution can be customized, for example, to allocate more memory to the driver process. Also, the Spark SQL runtime can be set through these properties. For example, this allows the user to [set a Spark catalogs](https://spark.apache.org/docs/latest/configuration.html#spark-sql). 
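+
+For illustration, such properties can also be supplied from the dbt side, for example through the profile's server-side parameters. A sketch (the target layout is abbreviated and the property value is a placeholder):
+
+```yml
+your_profile_name:
+  outputs:
+    dev:
+      type: spark
+      method: thrift
+      # ...host, port, schema, and so on
+      server_side_parameters:
+        "spark.driver.memory": "4g"
+```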
From 458a79e0e85e7b581c55b2f3101f2ae01dcce1bf Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Wed, 20 Dec 2023 07:26:58 -0500 Subject: [PATCH 029/204] Update warehouse-setups-cloud-callout.md --- website/snippets/warehouse-setups-cloud-callout.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/website/snippets/warehouse-setups-cloud-callout.md b/website/snippets/warehouse-setups-cloud-callout.md index 3bc1147a637..56edd3a96ea 100644 --- a/website/snippets/warehouse-setups-cloud-callout.md +++ b/website/snippets/warehouse-setups-cloud-callout.md @@ -1,3 +1,3 @@ -:::info `profiles.yml` file is for CLI users only -If you're using dbt Cloud, you don't need to create a `profiles.yml` file. This file is only for CLI users. To connect your data platform to dbt Cloud, refer to [About data platforms](/docs/cloud/connect-data-platform/about-connections). +:::info `profiles.yml` file is for dbt Core users only +If you're using dbt Cloud, you don't need to create a `profiles.yml` file. This file is only for dbt Core users. To connect your data platform to dbt Cloud, refer to [About data platforms](/docs/cloud/connect-data-platform/about-connections). ::: From 49f6f1e388a333be4834b459113dbfb216b60656 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Wed, 20 Dec 2023 07:27:32 -0500 Subject: [PATCH 030/204] Update spark-setup.md --- website/docs/docs/core/connect-data-platform/spark-setup.md | 4 ---- 1 file changed, 4 deletions(-) diff --git a/website/docs/docs/core/connect-data-platform/spark-setup.md b/website/docs/docs/core/connect-data-platform/spark-setup.md index 992dc182b75..9d9e0c9d5fb 100644 --- a/website/docs/docs/core/connect-data-platform/spark-setup.md +++ b/website/docs/docs/core/connect-data-platform/spark-setup.md @@ -20,10 +20,6 @@ meta: -:::note -See [Databricks setup](#databricks-setup) for the Databricks version of this page. -::: - import SetUpPages from '/snippets/_setup-pages-intro.md'; From be3ddf8d32e754d2c6a2478d126d5646fd4f21e3 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Wed, 20 Dec 2023 07:32:37 -0500 Subject: [PATCH 031/204] Update dbt-databricks-for-databricks.md --- website/snippets/dbt-databricks-for-databricks.md | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/website/snippets/dbt-databricks-for-databricks.md b/website/snippets/dbt-databricks-for-databricks.md index f1c5ec84af1..acb0b111aaf 100644 --- a/website/snippets/dbt-databricks-for-databricks.md +++ b/website/snippets/dbt-databricks-for-databricks.md @@ -1,4 +1,5 @@ -:::info If you're using Databricks, use `dbt-databricks` -If you're using Databricks, the `dbt-databricks` adapter is recommended over `dbt-spark`. -If you're still using dbt-spark with Databricks consider [migrating from the dbt-spark adapter to the dbt-databricks adapter](/guides/migrate-from-spark-to-databricks). +:::tip If you're using Databricks, use `dbt-databricks` +If you're using Databricks, the `dbt-databricks` adapter is recommended over `dbt-spark`. If you're still using dbt-spark with Databricks consider [migrating from the dbt-spark adapter to the dbt-databricks adapter](/guides/migrate-from-spark-to-databricks). + +For the Databricks version of this page, refer to [Databricks setup](#databricks-setup). 
::: From 9d2094267cbaf1601ed218865b748ea1c4eca5a4 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Wed, 20 Dec 2023 07:33:15 -0500 Subject: [PATCH 032/204] Update website/snippets/dbt-databricks-for-databricks.md --- website/snippets/dbt-databricks-for-databricks.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/snippets/dbt-databricks-for-databricks.md b/website/snippets/dbt-databricks-for-databricks.md index acb0b111aaf..1e18da33d42 100644 --- a/website/snippets/dbt-databricks-for-databricks.md +++ b/website/snippets/dbt-databricks-for-databricks.md @@ -1,4 +1,4 @@ -:::tip If you're using Databricks, use `dbt-databricks` +:::info If you're using Databricks, use `dbt-databricks` If you're using Databricks, the `dbt-databricks` adapter is recommended over `dbt-spark`. If you're still using dbt-spark with Databricks consider [migrating from the dbt-spark adapter to the dbt-databricks adapter](/guides/migrate-from-spark-to-databricks). For the Databricks version of this page, refer to [Databricks setup](#databricks-setup). From b595d2cef420b5381991d70de9eb3ba2f2f765df Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Wed, 20 Dec 2023 07:51:38 -0500 Subject: [PATCH 033/204] remove dup --- website/sidebars.js | 1 - 1 file changed, 1 deletion(-) diff --git a/website/sidebars.js b/website/sidebars.js index a82b2e06ec2..23a58360bbc 100644 --- a/website/sidebars.js +++ b/website/sidebars.js @@ -135,7 +135,6 @@ const sidebarSettings = { "docs/cloud/secure/redshift-privatelink", "docs/cloud/secure/postgres-privatelink", "docs/cloud/secure/vcs-privatelink", - "docs/cloud/secure/ip-restrictions", ], }, // PrivateLink "docs/cloud/billing", From f412756ea467c99cef528af63cfde64b1927c016 Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Wed, 20 Dec 2023 08:24:04 -0500 Subject: [PATCH 034/204] add files to sidebar --- website/sidebars.js | 2 ++ 1 file changed, 2 insertions(+) diff --git a/website/sidebars.js b/website/sidebars.js index 23a58360bbc..6bb630037c1 100644 --- a/website/sidebars.js +++ b/website/sidebars.js @@ -1027,6 +1027,8 @@ const sidebarSettings = { id: "best-practices/how-we-build-our-metrics/semantic-layer-1-intro", }, items: [ + "best-practices/how-we-build-our-metrics/semantic-layer-1-intro", + "best-practices/how-we-build-our-metrics/semantic-layer-2-setup", "best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models", "best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics", "best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart", From 6125f3f686e986a76fd3922b33e6fcfad3ca68e9 Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Wed, 20 Dec 2023 08:45:13 -0500 Subject: [PATCH 035/204] add missing pages to sidebar --- .../semantic-layer-2-setup.md | 25 ++++++++++++++++--- .../docs/docs/build/metricflow-commands.md | 11 ++++---- 2 files changed, 27 insertions(+), 9 deletions(-) diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md index 6e9153a3780..275395f6b18 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md @@ -13,9 +13,23 @@ git clone git@github.com:dbt-labs/jaffle-sl-template.git cd path/to/project ``` -Next, before you start writing code, you need to install MetricFlow as an extension of a dbt adapter from PyPI (dbt Core 
users only). The MetricFlow is compatible with Python versions 3.8 through 3.11. +Next, before you start writing code, you need to install MetricFlow: -We'll use pip to install MetricFlow and our dbt adapter: + + + + +- [dbt Cloud CLI](/docs/cloud/cloud-cli-installation) — MetricFlow commands are embedded in the dbt Cloud CLI. This means you can immediately run them once you install the dbt Cloud CLI. Using dbt Cloud means you won't need to manage versioning — your dbt Cloud account will automatically manage the versioning. + +- [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud) — You can create metrics using MetricFlow in the dbt Cloud IDE. However, support for running MetricFlow commands in the IDE will be available soon. + + + + + +- Download MetricFlow as an extension of a dbt adapter from PyPI (dbt Core users only). The MetricFlow is compatible with Python versions 3.8 through 3.11. + - **Note**, you'll need to manage versioning between dbt Core, your adapter, and MetricFlow. +- We'll use pip to install MetricFlow and our dbt adapter: ```shell # activate a virtual environment for your project, @@ -27,13 +41,16 @@ python -m pip install "dbt-metricflow[adapter name]" # e.g. python -m pip install "dbt-metricflow[snowflake]" ``` -Lastly, to get to the pre-Semantic Layer starting state, checkout the `start-here` branch. + + + +- Now that you're ready to use MetricFlow, get to the pre-Semantic Layer starting state by checking out the `start-here` branch: ```shell git checkout start-here ``` -For more information, refer to the [MetricFlow commands](/docs/build/metricflow-commands) or a [quickstart](/guides) to get more familiar with setting up a dbt project. +For more information, refer to the [MetricFlow commands](/docs/build/metricflow-commands) or a [quickstart guides](/guides) to get more familiar with setting up a dbt project. ## Basic commands diff --git a/website/docs/docs/build/metricflow-commands.md b/website/docs/docs/build/metricflow-commands.md index e3bb93da964..a0964269e68 100644 --- a/website/docs/docs/build/metricflow-commands.md +++ b/website/docs/docs/build/metricflow-commands.md @@ -17,15 +17,16 @@ MetricFlow is compatible with Python versions 3.8, 3.9, 3.10, and 3.11. MetricFlow is a dbt package that allows you to define and query metrics in your dbt project. You can use MetricFlow to query metrics in your dbt project in the dbt Cloud CLI, dbt Cloud IDE, or dbt Core. -**Note** — MetricFlow commands aren't supported in dbt Cloud jobs yet. However, you can add MetricFlow validations with your git provider (such as GitHub Actions) by installing MetricFlow (`python -m pip install metricflow`). This allows you to run MetricFlow commands as part of your continuous integration checks on PRs. +Using MetricFlow with dbt Cloud means you won't need to manage versioning — your dbt Cloud account will automatically manage the versioning. + +**dbt Cloud jobs** — MetricFlow commands aren't supported in dbt Cloud jobs yet. However, you can add MetricFlow validations with your git provider (such as GitHub Actions) by installing MetricFlow (`python -m pip install metricflow`). This allows you to run MetricFlow commands as part of your continuous integration checks on PRs. -MetricFlow commands are embedded in the dbt Cloud CLI, which means you can immediately run them once you install the dbt Cloud CLI. - -A benefit to using the dbt Cloud is that you won't need to manage versioning — your dbt Cloud account will automatically manage the versioning. 
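+For example, once the dbt Cloud CLI is installed, MetricFlow functionality is available through `dbt sl` subcommands. A quick sketch (the metric name is a placeholder):
+
+```shell
+dbt sl list metrics
+dbt sl query --metrics revenue --group-by metric_time__month
+```
+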
+- MetricFlow commands are embedded in the dbt Cloud CLI. This means you can immediately run them once you install the dbt Cloud CLI and don't need to install MetricFlow separately. +- You don't need to manage versioning — your dbt Cloud account will automatically manage the versioning for you. @@ -35,7 +36,7 @@ A benefit to using the dbt Cloud is that you won't need to manage versioning &md You can create metrics using MetricFlow in the dbt Cloud IDE. However, support for running MetricFlow commands in the IDE will be available soon. ::: -A benefit to using the dbt Cloud is that you won't need to manage versioning — your dbt Cloud account will automatically manage the versioning. + From 9dccece8d3a0009f8af6d64b6bdc29e9b8bb8764 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Wed, 20 Dec 2023 10:16:30 -0500 Subject: [PATCH 036/204] Update connect-snowflake.md From 47b0043dee4eca006013bd080603cc6e0fef4ccd Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Wed, 20 Dec 2023 14:52:21 -0500 Subject: [PATCH 037/204] tweaks --- .../74-Dec-2023/dec-sl-updates.md | 52 +++++++++++-------- 1 file changed, 31 insertions(+), 21 deletions(-) diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md index cc40dd88461..605683ed4c4 100644 --- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md +++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md @@ -1,30 +1,40 @@ --- -title: "Updates and fixes: dbt Semantic Layer and MetricFlow updates for December 2023." +title: "dbt Semantic Layer and MetricFlow updates for December 2023" description: "December 2023: Enhanced Tableau integration, BIGINT support, LookML to MetricFlow conversion, and deprecation of legacy features." -sidebar_label: "Update ad fixes: dbt Semantic Layer and MetricFlow." +sidebar_label: "Update ad fixes: dbt Semantic Layer and MetricFlow" sidebar_position: 08 date: 2023-12-22 --- -The dbt Labs team continues to work on adding new features, fixing bugs, and increasing reliability for the dbt Semantic Layer and MetricFlow. Here are the updates and fixes for December 2023. - -## Bug fixes -- The dbt Semantic Layer integration with Tableau now supports queries that resolve to a "NOT IN" clause (for example: using "exclude" in the filtering user interface). Previously it wasn’t supported. -- The dbt Semantic Layer can support `BIGINT` with precision greater than 18. Previously it would return an error. -- We fixed a memory leak that would amount in intermittent errors when querying our JDBC API. -- Added support for converting various Redshift and Postgres specific data types. Previously, the driver would throw an error when encountering columns with those types. -- Apply time offset for nested dervied & ratio metrics ([#882](https://github.com/dbt-labs/metricflow/issues/882)) -- Fix Incorrect SQL Column Name Rendering for WhereConstraintNode ([#908](https://github.com/dbt-labs/metricflow/issues/908)) -- `Unable To Satisfy Query Error` with Cumulative Metrics in Saved Queries ([#917](https://github.com/dbt-labs/metricflow/issues/917)) -- Fixes a bug in dimension-only queries where the filter column is removed before the filter has been applied. ([#923](https://github.com/dbt-labs/metricflow/issues/923)) -- Bug fix: Keep where constraint column until used for nested derived offset metric queries. 
([#930](https://github.com/dbt-labs/metricflow/issues/930)) +The dbt Labs team continues to work on adding new features, fixing bugs, and increasing reliability for the dbt Semantic Layer and MetricFlow. + +Refer to the following updates and fixes for December 2023: + +## gBug fixes + +The following are fixes for the dbt Semantic Layer and MetricFlow: + +**dbt Semantic Layer** + +- Tableau integration — The dbt Semantic Layer integration with Tableau now supports queries that resolve to a "NOT IN" clause. This applies to using "exclude" in the filtering user interface. Previously it wasn’t supported. +- `BIGINT` support — The dbt Semantic Layer can now support `BIGINT` values with precision greater than 18. Previously it would return an error. +- Memory leak — We fixed a memory leak in the JDBC API that would previously lead to intermittent errors when querying it. +- Data conversion support — Added support for converting various Redshift and Postgres-specific data types. Previously, the driver would throw an error when encountering columns with those types. + +**MetricFlow** + +- Time offset for nested metrics — Implemented time offset for nested derived and ratio metrics. ([MetricFlow Issue #882](https://github.com/dbt-labs/metricflow/issues/882)) +- SQL column name rendering: — Fixed incorrect SQL column name rendering in `WhereConstraintNode`. ([MetricFlow Issue #908](https://github.com/dbt-labs/metricflow/issues/908)) +- Cumulative metrics query error — Fixed the `Unable To Satisfy Query` error with cumulative metrics in Saved Queries. ([MetricFlow Issue #917](https://github.com/dbt-labs/metricflow/issues/917)) +- Dimension-only query — Fixes a bug in dimension-only queries where the filter column is removed before the filter has been applied. ([MetricFlow Issue #923](https://github.com/dbt-labs/metricflow/issues/923)) +- Where constraint column — Ensured retention of the where constraint column until used for nested derived offset metric queries. ([MetricFlow Issue #930](https://github.com/dbt-labs/metricflow/issues/930)) ## Improvements -- dbt Labs deprecated [dbt Metrics and the legacy dbt Semantic Layer](/docs/dbt-versions/release-notes/Dec-2023/legacy-sl), both supported on dbt version 1.5 or lower. This change came into effect on December 15th, 2023. -- The [dbt converter tool](https://github.com/dbt-labs/dbt-converter) can now help automate some of the work in converting from LookML (Looker's modeling language) for those who are migrating. Previously this wasn’t available. + +- Deprecation — We deprecated [dbt Metrics and the legacy dbt Semantic Layer](/docs/dbt-versions/release-notes/Dec-2023/legacy-sl), both supported on dbt version 1.5 or lower. This change came into effect on December 15th, 2023. +- Improved dbt converter tool — The [dbt converter tool](https://github.com/dbt-labs/dbt-converter) can now help automate some of the work in converting from LookML (Looker's modeling language) for those who are migrating. Previously this wasn’t available. ## New features -- Support for ambiguous group-by-item resolution. Previously, group-by-items were input by the user in a relatively specific form. For example, the group-by-item: -``` -guest__listing__created_at__month -``` -refers to the created_at time dimension at a month grain that is resolved by joining the measure source to the dimension sources by the guest and listing entities. Now we handle this complexity for the user, and allow you to simply request ``listing__created_at__month``. 
If there is only one possible resolution, we will resolve it for the user. If there are multiple possible resolutions, we will ask for additional user input.
+
+- Simplified group-by-item requests &mdash; Improved support for ambiguous group-by-item resolution. Previously, you needed to specify them in detail, like `guest__listing__created_at__month`. This indicates a monthly `created_at` time dimension, linked by `guest` and `listing` entities.
+
+  Now you can use a shorter form, like `listing__created_at__month`. If there's only one way to interpret this, dbt will resolve it automatically. If multiple interpretations are possible, dbt will ask for more details from the user.

From 92ac11846dbadb4642bf5e510a8f14339b55fea7 Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Wed, 20 Dec 2023 14:53:34 -0500
Subject: [PATCH 038/204] consistent language

---
 .../release-notes/74-Dec-2023/dec-sl-updates.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md
index 605683ed4c4..6895dce1f1a 100644
--- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md
+++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md
@@ -9,7 +9,7 @@ The dbt Labs team continues to work on adding new features, fixing bugs, and inc
 
 Refer to the following updates and fixes for December 2023:
 
-## gBug fixes
+## Bug fixes
 
@@ -17,7 +17,7 @@ The following are fixes for the dbt Semantic Layer and MetricFlow:
 
 - Tableau integration &mdash; The dbt Semantic Layer integration with Tableau now supports queries that resolve to a "NOT IN" clause. This applies to using "exclude" in the filtering user interface. Previously it wasn’t supported.
 - `BIGINT` support &mdash; The dbt Semantic Layer can now support `BIGINT` values with precision greater than 18. Previously it would return an error.
-- Memory leak — We fixed a memory leak in the JDBC API that would previously lead to intermittent errors when querying it.
+- Memory leak — Fixed a memory leak in the JDBC API that would previously lead to intermittent errors when querying it.
 - Data conversion support &mdash; Added support for converting various Redshift and Postgres-specific data types. Previously, the driver would throw an error when encountering columns with those types.
 
 **MetricFlow**
 
@@ -25,7 +25,7 @@ The following are fixes for the dbt Semantic Layer and MetricFlow:
 
 - Time offset for nested metrics — Implemented time offset for nested derived and ratio metrics. ([MetricFlow Issue #882](https://github.com/dbt-labs/metricflow/issues/882))
 - SQL column name rendering: — Fixed incorrect SQL column name rendering in `WhereConstraintNode`. ([MetricFlow Issue #908](https://github.com/dbt-labs/metricflow/issues/908))
 - Cumulative metrics query error — Fixed the `Unable To Satisfy Query` error with cumulative metrics in Saved Queries. ([MetricFlow Issue #917](https://github.com/dbt-labs/metricflow/issues/917))
-- Dimension-only query — Fixes a bug in dimension-only queries where the filter column is removed before the filter has been applied. ([MetricFlow Issue #923](https://github.com/dbt-labs/metricflow/issues/923))
+- Dimension-only query — Fixed a bug in dimension-only queries where the filter column is removed before the filter has been applied. 
([MetricFlow Issue #923](https://github.com/dbt-labs/metricflow/issues/923)) - Where constraint column — Ensured retention of the where constraint column until used for nested derived offset metric queries. ([MetricFlow Issue #930](https://github.com/dbt-labs/metricflow/issues/930)) ## Improvements From 89aeae466e9b3cad71ce85cbc1ec4672663c4be8 Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Wed, 20 Dec 2023 14:54:31 -0500 Subject: [PATCH 039/204] typo --- .../dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md index 6895dce1f1a..8fce5b837c7 100644 --- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md +++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md @@ -1,7 +1,7 @@ --- title: "dbt Semantic Layer and MetricFlow updates for December 2023" description: "December 2023: Enhanced Tableau integration, BIGINT support, LookML to MetricFlow conversion, and deprecation of legacy features." -sidebar_label: "Update ad fixes: dbt Semantic Layer and MetricFlow" +sidebar_label: "Update and fixes: dbt Semantic Layer and MetricFlow" sidebar_position: 08 date: 2023-12-22 --- From 27cd55515fa34e88666ce3e1290fd89cd7708818 Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Wed, 20 Dec 2023 14:56:13 -0500 Subject: [PATCH 040/204] tweak --- .../dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md index 8fce5b837c7..96a1e20fc6b 100644 --- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md +++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md @@ -11,7 +11,7 @@ Refer to the following updates and fixes for December 2023: ## Bug fixes -The following are fixes for the dbt Semantic Layer and MetricFlow: +The following are updates for the dbt Semantic Layer and MetricFlow: **dbt Semantic Layer** From 0d4ec7716f917b12f54102b40e08a68dbda468cb Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:45:35 -0500 Subject: [PATCH 041/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index f51181bf588..df42765825d 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -13,7 +13,7 @@ is_featured: false --- ## Overview -Over the course of my 3 years running the Partner Engineering team at dbt Labs, the most common question I have been asked is “How do we integrate with dbt?”. Because those conversations often start out at the same place, I decided to create this guide so I’m no longer the blocker to fundamental information. This also allows us to skip the intro and get to the fun conversations like what a joint solution for our customers would look like so much faster. 
+Over the course of my three years running the Partner Engineering team at dbt Labs, the most common question I've been asked is, How do we integrate with dbt? Because those conversations often start out at the same place, I decided to create this guide so I’m no longer the blocker to fundamental information. This also allows us to skip the intro and get to the fun conversations so much faster, like what a joint solution for our customers would look like. Now this guide does not include how to integrate with dbt Core. If you’re interested in creating an dbt Adapter, **[please check out this documentation instead.](https://docs.getdbt.com/guides/dbt-ecosystem/adapter-development/1-what-are-adapters)** From 44bf2cf36d86912636a47af580551b5f2165c711 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:45:53 -0500 Subject: [PATCH 042/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index df42765825d..d589bf76dd4 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -17,7 +17,7 @@ Over the course of my three years running the Partner Engineering team at dbt La Now this guide does not include how to integrate with dbt Core. If you’re interested in creating an dbt Adapter, **[please check out this documentation instead.](https://docs.getdbt.com/guides/dbt-ecosystem/adapter-development/1-what-are-adapters)** -Instead we are going to focus on integrating with dbt Cloud. Integrating with dbt Cloud is a key requirement to become a dbt Labs technology partner, opening the door to a variety of collaborative commercial opportunities. +Instead, we're going to focus on integrating with dbt Cloud. Integrating with dbt Cloud is a key requirement to become a dbt Labs technology partner, opening the door to a variety of collaborative commercial opportunities. Here I will cover how to get started, potential use cases you want to solve for, and points of integrations to do so. From c28ecb48016971a6e28e61c476ae12bfae85a550 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:46:10 -0500 Subject: [PATCH 043/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index d589bf76dd4..5353ca996fd 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -19,7 +19,7 @@ Now this guide does not include how to integrate with dbt Core. If you’re inte Instead, we're going to focus on integrating with dbt Cloud. Integrating with dbt Cloud is a key requirement to become a dbt Labs technology partner, opening the door to a variety of collaborative commercial opportunities. -Here I will cover how to get started, potential use cases you want to solve for, and points of integrations to do so. 
+Here I'll cover how to get started, potential use cases you want to solve for, and points of integrations to do so. ## New to dbt Cloud? From 402aee56cf622a0ef9d8d12c53afd3a1db2e249e Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:46:31 -0500 Subject: [PATCH 044/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 5353ca996fd..a48f18ef7cc 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -23,7 +23,7 @@ Here I'll cover how to get started, potential use cases you want to solve for, a ## New to dbt Cloud? -If you're new to dbt and dbt Cloud, we recommend you and your software developers try our [Getting Started Quickstarts](https://docs.getdbt.com/quickstarts) after reading [What is dbt?](https://docs.getdbt.com/docs/introduction). The documentation will help you familiarize yourself with how our users interact with dbt. By going through this, you will also create a sample dbt project to test your integration. +If you're new to dbt and dbt Cloud, we recommend you and your software developers try our [Getting Started Quickstarts](/guides) after reading [What is dbt?](/docs/introduction). The documentation will help you familiarize yourself with how our users interact with dbt. By going through this, you will also create a sample dbt project to test your integration. If you require a partner dbt Cloud account to test on, we can upgrade an existing account or a trial account. **This account may only be used for development, training, and demonstration purposes.** Please speak to your partner manager if you're interested and provide the account id (provided in the URL). Our partner account has all of the enterprise level functionality and can be provided with a signed partnerships agreement. From 434e7c0ce23345b6d73146fd3ba7875df6751e4f Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:46:48 -0500 Subject: [PATCH 045/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index a48f18ef7cc..0db135cf9bf 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -27,7 +27,7 @@ If you're new to dbt and dbt Cloud, we recommend you and your software developer If you require a partner dbt Cloud account to test on, we can upgrade an existing account or a trial account. **This account may only be used for development, training, and demonstration purposes.** Please speak to your partner manager if you're interested and provide the account id (provided in the URL). Our partner account has all of the enterprise level functionality and can be provided with a signed partnerships agreement. 
-## Integration Points +## Integration points - [Discovery API (formerly referred to as Metadata API)](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-api) - **Overview**: This GraphQL API allows you to query the metadata that dbt Cloud generates every time you run a dbt Project. We have two schemas available (environment and job level). By default, we always recommend that you integrate with the environment level schema because it contains the latest state and historical run results of all the jobs run on the dbt Cloud project. The job level will only provide you the metadata of one job, giving you only a small snapshot of part of the project. From 74e6fbbd25ad0943ad354c0a406833e7738d5fb7 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:47:13 -0500 Subject: [PATCH 046/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 0db135cf9bf..85424a1219a 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -29,7 +29,7 @@ If you require a partner dbt Cloud account to test on, we can upgrade an existin ## Integration points -- [Discovery API (formerly referred to as Metadata API)](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-api) +- [Discovery API (formerly referred to as Metadata API)](/docs/dbt-cloud-apis/discovery-api) - **Overview**: This GraphQL API allows you to query the metadata that dbt Cloud generates every time you run a dbt Project. We have two schemas available (environment and job level). By default, we always recommend that you integrate with the environment level schema because it contains the latest state and historical run results of all the jobs run on the dbt Cloud project. The job level will only provide you the metadata of one job, giving you only a small snapshot of part of the project. - [Administrative API (also referred to as the Admin API)](https://docs.getdbt.com/docs/dbt-cloud-apis/admin-cloud-api) - **Overview:** This REST API allows you to orchestrate dbt Cloud jobs runs and help you administer a dbt Cloud account. For metadata retrieval, we recommend integrating with the Discovery API instead. From 6d215a8d793ba675cb53242de01eda58e7c4559c Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:47:28 -0500 Subject: [PATCH 047/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 85424a1219a..ec70e770431 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -25,7 +25,7 @@ Here I'll cover how to get started, potential use cases you want to solve for, a If you're new to dbt and dbt Cloud, we recommend you and your software developers try our [Getting Started Quickstarts](/guides) after reading [What is dbt?](/docs/introduction). 
The documentation will help you familiarize yourself with how our users interact with dbt. By going through this, you will also create a sample dbt project to test your integration. -If you require a partner dbt Cloud account to test on, we can upgrade an existing account or a trial account. **This account may only be used for development, training, and demonstration purposes.** Please speak to your partner manager if you're interested and provide the account id (provided in the URL). Our partner account has all of the enterprise level functionality and can be provided with a signed partnerships agreement. +If you require a partner dbt Cloud account to test on, we can upgrade an existing account or a trial account. This account may only be used for development, training, and demonstration purposes. Please contact your partner manager if you're interested and provide the account ID (provided in the URL). Our partner account includes all of the enterprise level functionality and can be provided with a signed partnerships agreement. ## Integration points From 74a53e22e95c1bc38c493e6d1cb01b65c2d3aa77 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:49:55 -0500 Subject: [PATCH 048/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index ec70e770431..af93fbdae34 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -30,7 +30,7 @@ If you require a partner dbt Cloud account to test on, we can upgrade an existin ## Integration points - [Discovery API (formerly referred to as Metadata API)](/docs/dbt-cloud-apis/discovery-api) - - **Overview**: This GraphQL API allows you to query the metadata that dbt Cloud generates every time you run a dbt Project. We have two schemas available (environment and job level). By default, we always recommend that you integrate with the environment level schema because it contains the latest state and historical run results of all the jobs run on the dbt Cloud project. The job level will only provide you the metadata of one job, giving you only a small snapshot of part of the project. + - **Overview** — This GraphQL API allows you to query the metadata that dbt Cloud generates every time you run a dbt project. We have two schemas available (environment and job level). By default, we always recommend that you integrate with the environment level schema because it contains the latest state and historical run results of all the jobs run on the dbt Cloud project. The job level will only provide you the metadata of one job, giving you only a small snapshot of part of the project. - [Administrative API (also referred to as the Admin API)](https://docs.getdbt.com/docs/dbt-cloud-apis/admin-cloud-api) - **Overview:** This REST API allows you to orchestrate dbt Cloud jobs runs and help you administer a dbt Cloud account. For metadata retrieval, we recommend integrating with the Discovery API instead. 
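
As a concrete illustration of the Admin API item above, here is a minimal sketch of triggering a dbt Cloud job run. The endpoint path and response shape follow the v2 Admin API documentation; the account ID, job ID, token, and access URL below are placeholders to swap for the customer's real values, so treat this as a sketch rather than a drop-in implementation.

```python
# A minimal sketch: trigger a dbt Cloud job run through the Admin API (v2).
# ACCESS_URL, ACCOUNT_ID, JOB_ID, and SERVICE_TOKEN are placeholders; confirm
# the exact endpoint for the customer's dbt Cloud region in the Admin API docs.
import requests

ACCESS_URL = "https://cloud.getdbt.com"  # let the customer supply their own URL
ACCOUNT_ID = 123                          # placeholder account ID
JOB_ID = 456                              # placeholder job ID
SERVICE_TOKEN = "dbtc_xxx"                # a service token, not a user bearer token

response = requests.post(
    f"{ACCESS_URL}/api/v2/accounts/{ACCOUNT_ID}/jobs/{JOB_ID}/run/",
    headers={"Authorization": f"Token {SERVICE_TOKEN}"},
    json={"cause": "Triggered by partner integration"},  # required; shows in run history
    timeout=30,
)
response.raise_for_status()
print("Started run", response.json()["data"]["id"])
```

Make the `cause` string descriptive, since it appears in dbt Cloud's run history and helps customers trace which system kicked off the run.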
- Webhooks From ec77ce086f441682332a8924f54799fcea6160b3 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:50:14 -0500 Subject: [PATCH 049/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index af93fbdae34..491b88acc12 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -15,7 +15,7 @@ is_featured: false Over the course of my three years running the Partner Engineering team at dbt Labs, the most common question I've been asked is, How do we integrate with dbt? Because those conversations often start out at the same place, I decided to create this guide so I’m no longer the blocker to fundamental information. This also allows us to skip the intro and get to the fun conversations so much faster, like what a joint solution for our customers would look like. -Now this guide does not include how to integrate with dbt Core. If you’re interested in creating an dbt Adapter, **[please check out this documentation instead.](https://docs.getdbt.com/guides/dbt-ecosystem/adapter-development/1-what-are-adapters)** +This guide doesn't include how to integrate with dbt Core. If you’re interested in creating a dbt adapter, please check out the [adapter development guide](/guides/dbt-ecosystem/adapter-development/1-what-are-adapters) instead. Instead, we're going to focus on integrating with dbt Cloud. Integrating with dbt Cloud is a key requirement to become a dbt Labs technology partner, opening the door to a variety of collaborative commercial opportunities. From d652d75ee0ed3e3c815748e6da6f04750061dca5 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:50:32 -0500 Subject: [PATCH 050/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 491b88acc12..f4e06eb6fa6 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -33,7 +33,7 @@ If you require a partner dbt Cloud account to test on, we can upgrade an existin - **Overview** — This GraphQL API allows you to query the metadata that dbt Cloud generates every time you run a dbt project. We have two schemas available (environment and job level). By default, we always recommend that you integrate with the environment level schema because it contains the latest state and historical run results of all the jobs run on the dbt Cloud project. The job level will only provide you the metadata of one job, giving you only a small snapshot of part of the project. - [Administrative API (also referred to as the Admin API)](https://docs.getdbt.com/docs/dbt-cloud-apis/admin-cloud-api) - **Overview:** This REST API allows you to orchestrate dbt Cloud jobs runs and help you administer a dbt Cloud account. 
For metadata retrieval, we recommend integrating with the Discovery API instead. -- Webhooks +- [Webhooks](/docs/deploy/webhooks) - **Overview:** Outbound webhooks can send notifications about your dbt Cloud jobs to other systems. These webhooks allow you to get the latest information on your dbt jobs in real time. - [Link to documentation](https://docs.getdbt.com/docs/deploy/webhooks) - Semantic Layers/Metrics From bd6426d970aba27c9101e2305921e2aada1aa43d Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:50:47 -0500 Subject: [PATCH 051/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index f4e06eb6fa6..7ad14063a29 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -34,7 +34,7 @@ If you require a partner dbt Cloud account to test on, we can upgrade an existin - [Administrative API (also referred to as the Admin API)](https://docs.getdbt.com/docs/dbt-cloud-apis/admin-cloud-api) - **Overview:** This REST API allows you to orchestrate dbt Cloud jobs runs and help you administer a dbt Cloud account. For metadata retrieval, we recommend integrating with the Discovery API instead. - [Webhooks](/docs/deploy/webhooks) - - **Overview:** Outbound webhooks can send notifications about your dbt Cloud jobs to other systems. These webhooks allow you to get the latest information on your dbt jobs in real time. + - **Overview** — Outbound webhooks can send notifications about your dbt Cloud jobs to other systems. These webhooks allow you to get the latest information about your dbt jobs in real time. - [Link to documentation](https://docs.getdbt.com/docs/deploy/webhooks) - Semantic Layers/Metrics - **Overview: Our Semantic Layer is made up of two parts: metrics definitions and the ability to interactively query the dbt metrics. For more details, here is a [basic overview](https://docs.getdbt.com/docs/use-dbt-semantic-layer/dbt-sl) and [our best practices](https://docs.getdbt.com/guides/dbt-ecosystem/sl-partner-integration-guide).** From d29101c0776373d7149e12691e441fa08afc5067 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:51:02 -0500 Subject: [PATCH 052/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 7ad14063a29..03af0c4e3d0 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -32,7 +32,7 @@ If you require a partner dbt Cloud account to test on, we can upgrade an existin - [Discovery API (formerly referred to as Metadata API)](/docs/dbt-cloud-apis/discovery-api) - **Overview** — This GraphQL API allows you to query the metadata that dbt Cloud generates every time you run a dbt project. We have two schemas available (environment and job level). 
By default, we always recommend that you integrate with the environment level schema because it contains the latest state and historical run results of all the jobs run on the dbt Cloud project. The job level will only provide you the metadata of one job, giving you only a small snapshot of part of the project. - [Administrative API (also referred to as the Admin API)](https://docs.getdbt.com/docs/dbt-cloud-apis/admin-cloud-api) - - **Overview:** This REST API allows you to orchestrate dbt Cloud jobs runs and help you administer a dbt Cloud account. For metadata retrieval, we recommend integrating with the Discovery API instead. + - **Overview** — This REST API allows you to orchestrate dbt Cloud jobs runs and help you administer a dbt Cloud account. For metadata retrieval, we recommend integrating with the Discovery API instead. - [Webhooks](/docs/deploy/webhooks) - **Overview** — Outbound webhooks can send notifications about your dbt Cloud jobs to other systems. These webhooks allow you to get the latest information about your dbt jobs in real time. - [Link to documentation](https://docs.getdbt.com/docs/deploy/webhooks) From dce49403d18df90d11561cd296315960b745864f Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:51:18 -0500 Subject: [PATCH 053/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 03af0c4e3d0..8891f89cd82 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -36,7 +36,7 @@ If you require a partner dbt Cloud account to test on, we can upgrade an existin - [Webhooks](/docs/deploy/webhooks) - **Overview** — Outbound webhooks can send notifications about your dbt Cloud jobs to other systems. These webhooks allow you to get the latest information about your dbt jobs in real time. - [Link to documentation](https://docs.getdbt.com/docs/deploy/webhooks) -- Semantic Layers/Metrics +- [Semantic Layers/Metrics](/docs/dbt-cloud-apis/sl-api-overview) - **Overview: Our Semantic Layer is made up of two parts: metrics definitions and the ability to interactively query the dbt metrics. For more details, here is a [basic overview](https://docs.getdbt.com/docs/use-dbt-semantic-layer/dbt-sl) and [our best practices](https://docs.getdbt.com/guides/dbt-ecosystem/sl-partner-integration-guide).** - Metrics definitions can be pulled from the Discovery API (linked above) or the Semantic Layer Driver/GraphQL API. The key difference is the Discovery API is not able to pull the semantic graph which provides the list of available dimensions that one can query per metric. That is only available via the SL Driver/APIs. 
The tradeoff is the SL Driver/APIs does not have access to the lineage of the entire dbt project (i.e how the dbt metrics dependencies on dbt models)
+  - Metrics definitions can be pulled from the Discovery API (linked above) or the Semantic Layer Driver/GraphQL API. The key difference is that the Discovery API isn't able to pull the semantic graph, which provides the list of available dimensions that one can query per metric. That is only available with the SL Driver/APIs. The trade-off is that the SL Driver/APIs doesn't have access to the lineage of the entire dbt project (that is, how the dbt metrics depend on dbt models).
- [We have three available integration points for the Semantic Layer API.](https://docs.getdbt.com/docs/dbt-cloud-apis/sl-api-overview) ## dbt Cloud Hosting and Authentication From 8c2b058c0815d11ba6db37883f0ed900757bd316 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:51:59 -0500 Subject: [PATCH 055/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index bd95135bf2e..2e03723719f 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -41,7 +41,7 @@ If you require a partner dbt Cloud account to test on, we can upgrade an existin - Metrics definitions can be pulled from the Discovery API (linked above) or the Semantic Layer Driver/GraphQL API. The key difference is that the Discovery API isn't able to pull the semantic graph, which provides the list of available dimensions that one can query per metric. That is only available with the SL Driver/APIs. The trade-off is that the SL Driver/APIs doesn't have access to the lineage of the entire dbt project (that is, how the dbt metrics dependencies on dbt models). - [We have three available integration points for the Semantic Layer API.](https://docs.getdbt.com/docs/dbt-cloud-apis/sl-api-overview) -## dbt Cloud Hosting and Authentication +## dbt Cloud hosting and authentication To use the dbt Cloud APIs, you will need access to the customer’s access urls. Depending on their dbt Cloud setup, they will have a different access url. To find out more, here is the [documentation](https://docs.getdbt.com/docs/cloud/about-cloud/regions-ip-addresses) to understand all the possible configurations. My recommendation is to allow the customer to provide their own url to simplify support. From 294a6aefc70b741de2cae76477b473975c7b40f4 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:52:16 -0500 Subject: [PATCH 056/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 2e03723719f..9940ade8c69 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -37,7 +37,7 @@ If you require a partner dbt Cloud account to test on, we can upgrade an existin - **Overview** — Outbound webhooks can send notifications about your dbt Cloud jobs to other systems. These webhooks allow you to get the latest information about your dbt jobs in real time. - [Link to documentation](https://docs.getdbt.com/docs/deploy/webhooks) - [Semantic Layers/Metrics](/docs/dbt-cloud-apis/sl-api-overview) - - **Overview: Our Semantic Layer is made up of two parts: metrics definitions and the ability to interactively query the dbt metrics. 
For more details, here is a [basic overview](https://docs.getdbt.com/docs/use-dbt-semantic-layer/dbt-sl) and [our best practices](https://docs.getdbt.com/guides/dbt-ecosystem/sl-partner-integration-guide).** + - **Overview** — Our Semantic Layer is made up of two parts: metrics definitions and the ability to interactively query the dbt metrics. For more details, here is a [basic overview](/docs/use-dbt-semantic-layer/dbt-sl) and [our best practices](/guides/dbt-ecosystem/sl-partner-integration-guide). - Metrics definitions can be pulled from the Discovery API (linked above) or the Semantic Layer Driver/GraphQL API. The key difference is that the Discovery API isn't able to pull the semantic graph, which provides the list of available dimensions that one can query per metric. That is only available with the SL Driver/APIs. The trade-off is that the SL Driver/APIs doesn't have access to the lineage of the entire dbt project (that is, how the dbt metrics dependencies on dbt models). - [We have three available integration points for the Semantic Layer API.](https://docs.getdbt.com/docs/dbt-cloud-apis/sl-api-overview) From 27b085a7a5b428e860db900d9c7f2046ada82ac5 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:52:32 -0500 Subject: [PATCH 057/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 9940ade8c69..38c3e9b6d6c 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -39,7 +39,7 @@ If you require a partner dbt Cloud account to test on, we can upgrade an existin - [Semantic Layers/Metrics](/docs/dbt-cloud-apis/sl-api-overview) - **Overview** — Our Semantic Layer is made up of two parts: metrics definitions and the ability to interactively query the dbt metrics. For more details, here is a [basic overview](/docs/use-dbt-semantic-layer/dbt-sl) and [our best practices](/guides/dbt-ecosystem/sl-partner-integration-guide). - Metrics definitions can be pulled from the Discovery API (linked above) or the Semantic Layer Driver/GraphQL API. The key difference is that the Discovery API isn't able to pull the semantic graph, which provides the list of available dimensions that one can query per metric. That is only available with the SL Driver/APIs. The trade-off is that the SL Driver/APIs doesn't have access to the lineage of the entire dbt project (that is, how the dbt metrics dependencies on dbt models). - - [We have three available integration points for the Semantic Layer API.](https://docs.getdbt.com/docs/dbt-cloud-apis/sl-api-overview) + - Three integration points are available for the Semantic Layer API. 
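
Since the Semantic Layer comes up repeatedly here, a rough sketch of its GraphQL integration point follows: listing metrics and the dimensions each can be queried by. The endpoint host and the query fields are assumptions based on the public docs at the time of writing; verify both against the customer's region and the current schema before building on them.

```python
# A sketch of metric discovery through the Semantic Layer GraphQL API.
# SL_GRAPHQL_URL, ENVIRONMENT_ID, and the exact query fields are assumptions
# to verify against the Semantic Layer API docs for the customer's region.
import requests

SL_GRAPHQL_URL = "https://semantic-layer.cloud.getdbt.com/api/graphql"
SERVICE_TOKEN = "dbtc_xxx"   # service token with Semantic Layer permissions
ENVIRONMENT_ID = 1234        # placeholder production environment ID

QUERY = """
query Metrics($environmentId: BigInt!) {
  metrics(environmentId: $environmentId) {
    name
    description
    dimensions { name }
  }
}
"""

resp = requests.post(
    SL_GRAPHQL_URL,
    headers={"Authorization": f"Bearer {SERVICE_TOKEN}"},
    json={"query": QUERY, "variables": {"environmentId": ENVIRONMENT_ID}},
    timeout=30,
)
resp.raise_for_status()
for metric in resp.json()["data"]["metrics"]:
    # Print each metric alongside its queryable dimensions.
    print(metric["name"], "->", [d["name"] for d in metric["dimensions"]])
```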
## dbt Cloud hosting and authentication From efe6c44a886fe5efa3fcf6ea04d54fe920d529f2 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:52:52 -0500 Subject: [PATCH 058/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 38c3e9b6d6c..727c1187542 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -31,7 +31,7 @@ If you require a partner dbt Cloud account to test on, we can upgrade an existin - [Discovery API (formerly referred to as Metadata API)](/docs/dbt-cloud-apis/discovery-api) - **Overview** — This GraphQL API allows you to query the metadata that dbt Cloud generates every time you run a dbt project. We have two schemas available (environment and job level). By default, we always recommend that you integrate with the environment level schema because it contains the latest state and historical run results of all the jobs run on the dbt Cloud project. The job level will only provide you the metadata of one job, giving you only a small snapshot of part of the project. -- [Administrative API (also referred to as the Admin API)](https://docs.getdbt.com/docs/dbt-cloud-apis/admin-cloud-api) +- [Administrative (Admin) API](/docs/dbt-cloud-apis/admin-cloud-api) - **Overview** — This REST API allows you to orchestrate dbt Cloud jobs runs and help you administer a dbt Cloud account. For metadata retrieval, we recommend integrating with the Discovery API instead. - [Webhooks](/docs/deploy/webhooks) - **Overview** — Outbound webhooks can send notifications about your dbt Cloud jobs to other systems. These webhooks allow you to get the latest information about your dbt jobs in real time. From 1b6cc262871831682f14464d5a66863305d466fa Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:55:55 -0500 Subject: [PATCH 059/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 727c1187542..1abe7b396d4 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -43,7 +43,7 @@ If you require a partner dbt Cloud account to test on, we can upgrade an existin ## dbt Cloud hosting and authentication -To use the dbt Cloud APIs, you will need access to the customer’s access urls. Depending on their dbt Cloud setup, they will have a different access url. To find out more, here is the [documentation](https://docs.getdbt.com/docs/cloud/about-cloud/regions-ip-addresses) to understand all the possible configurations. My recommendation is to allow the customer to provide their own url to simplify support. +To use the dbt Cloud APIs, you'll need access to the customer’s access urls. Depending on their dbt Cloud setup, they'll have a different access URL. 
To find out more, refer to [Regions & IP addresses](/docs/cloud/about-cloud/regions-ip-addresses) to understand all the possible configurations. My recommendation is to allow the customer to provide their own URL to simplify support. If the customer is on an Azure Single Tenant instance, they do not currently have access to the Discovery API or the Semantic Layer APIs. From 325c554b4f14eb1aaf7baff221d6e82cb2538e85 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:57:49 -0500 Subject: [PATCH 060/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 1abe7b396d4..9207dad4d55 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -45,7 +45,7 @@ If you require a partner dbt Cloud account to test on, we can upgrade an existin To use the dbt Cloud APIs, you'll need access to the customer’s access urls. Depending on their dbt Cloud setup, they'll have a different access URL. To find out more, refer to [Regions & IP addresses](/docs/cloud/about-cloud/regions-ip-addresses) to understand all the possible configurations. My recommendation is to allow the customer to provide their own URL to simplify support. -If the customer is on an Azure Single Tenant instance, they do not currently have access to the Discovery API or the Semantic Layer APIs. +If the customer is on an Azure single tenant instance, they don't currently have access to the Discovery API or the Semantic Layer APIs. For authentication, we highly recommend that your integration uses account service tokens. You can read more about how to create a service token and what permission sets to provide it [here](https://docs.getdbt.com/docs/dbt-cloud-apis/service-tokens). Please note depending on their plan type, they will have access to different permission sets. We **do not** recommend that users supply their user bearer tokens for authentication. This can cause issues if the user leaves the organization and provides you access to all the dbt Cloud accounts associated to the user rather than just the account (and related projects) that they want to integrate with. From 151e4da6d1707694e009fe69186174be927950d2 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:59:04 -0500 Subject: [PATCH 061/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 9207dad4d55..a01430e38b8 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -47,7 +47,7 @@ To use the dbt Cloud APIs, you'll need access to the customer’s access urls. D If the customer is on an Azure single tenant instance, they don't currently have access to the Discovery API or the Semantic Layer APIs. -For authentication, we highly recommend that your integration uses account service tokens. 
You can read more about how to create a service token and what permission sets to provide it [here](https://docs.getdbt.com/docs/dbt-cloud-apis/service-tokens). Please note depending on their plan type, they will have access to different permission sets. We **do not** recommend that users supply their user bearer tokens for authentication. This can cause issues if the user leaves the organization and provides you access to all the dbt Cloud accounts associated to the user rather than just the account (and related projects) that they want to integrate with. +For authentication, we highly recommend that your integration uses account service tokens. You can read more about [how to create a service token and what permission sets to provide it](/docs/dbt-cloud-apis/service-tokens). Please note that depending on their plan type, they'll have access to different permission sets. We _do not_ recommend that users supply their user bearer tokens for authentication. This can cause issues if the user leaves the organization and provides you access to all the dbt Cloud accounts associated to the user rather than just the account (and related projects) that they want to integrate with. ## Potential Use Cases From cec34b0fdf0d6a94e46fd78bf2e09a9fbc9304b6 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:59:18 -0500 Subject: [PATCH 062/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index a01430e38b8..1fac6ab730f 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -52,7 +52,7 @@ For authentication, we highly recommend that your integration uses account servi ## Potential Use Cases - Event-based orchestration - - **Desired Action:** You wish to receive information that a scheduled dbt Cloud Job has been completed or kick off a dbt Cloud job. You can align your product schedule to the dbt Cloud run schedule. + - **Desired action** — You want to receive information that a scheduled dbt Cloud job has been completed or has kicked off a dbt Cloud job. You can align your product schedule to the dbt Cloud run schedule. - **Examples:** Kicking off a dbt Job after the ETL job of extracting and loading the data is completed. Or receiving a webhook after the job has been completed to kick off your reverse ETL job. 
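
The "receiving a webhook" half of that example boils down to a small HTTP endpoint. Below is a sketch of one, including the HMAC check the webhook docs describe for verifying that dbt Cloud is the sender. The event type and payload field names are assumptions to confirm against a test delivery from the customer's account.

```python
# A sketch of a webhook receiver that verifies dbt Cloud's HMAC signature
# and kicks off a downstream step (for example, a reverse ETL sync).
import hashlib
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
WEBHOOK_SECRET = os.environ["DBT_CLOUD_WEBHOOK_SECRET"].encode()


@app.post("/dbt-cloud-webhook")
def handle_webhook():
    # dbt Cloud sends an HMAC-SHA256 hex digest of the raw request body in
    # the Authorization header; recompute it locally to verify the sender.
    expected = hmac.new(WEBHOOK_SECRET, request.get_data(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, request.headers.get("Authorization", "")):
        abort(403)

    event = request.get_json()
    # "job.run.completed" is one of the documented event types; confirm the
    # payload shape (eventType, data.jobId) against a real test delivery.
    if event.get("eventType") == "job.run.completed":
        start_reverse_etl(event["data"]["jobId"])
    return "", 200


def start_reverse_etl(job_id):
    # Placeholder for the downstream step your product would trigger.
    print(f"dbt Cloud job {job_id} finished; starting the downstream sync")
```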
- **Integration Points:** Webhooks and/or Admin API - dbt Lineage From 71db8f23bade59a08184a0b6a9d43b6b2604b267 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:59:29 -0500 Subject: [PATCH 063/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 1fac6ab730f..d39876f5de0 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -54,7 +54,7 @@ For authentication, we highly recommend that your integration uses account servi - Event-based orchestration - **Desired action** — You want to receive information that a scheduled dbt Cloud job has been completed or has kicked off a dbt Cloud job. You can align your product schedule to the dbt Cloud run schedule. - **Examples:** Kicking off a dbt Job after the ETL job of extracting and loading the data is completed. Or receiving a webhook after the job has been completed to kick off your reverse ETL job. - - **Integration Points:** Webhooks and/or Admin API + - **Integration points** — Webhooks and/or Admin API - dbt Lineage - **Desired Action:** You wish to interpolate the dbt lineage metadata into your tool. - **Example: In your tool, you wish to pull in the dbt DAG into your lineage diagram. [This is what you could pull and how to do this.](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-use-cases-and-examples#whats-the-full-data-lineage)** From 1f17b84f1e98a47c4215343bbadfa324b488de80 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:59:41 -0500 Subject: [PATCH 064/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index d39876f5de0..e74ed030c19 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -55,7 +55,7 @@ For authentication, we highly recommend that your integration uses account servi - **Desired action** — You want to receive information that a scheduled dbt Cloud job has been completed or has kicked off a dbt Cloud job. You can align your product schedule to the dbt Cloud run schedule. - **Examples:** Kicking off a dbt Job after the ETL job of extracting and loading the data is completed. Or receiving a webhook after the job has been completed to kick off your reverse ETL job. - **Integration points** — Webhooks and/or Admin API -- dbt Lineage +- dbt lineage - **Desired Action:** You wish to interpolate the dbt lineage metadata into your tool. - **Example: In your tool, you wish to pull in the dbt DAG into your lineage diagram. 
[This is what you could pull and how to do this.](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-use-cases-and-examples#whats-the-full-data-lineage)** - **Integration Points:** Discovery API From f0638c430a7ea9a2150c5a765621397749ac05f9 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:59:53 -0500 Subject: [PATCH 065/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index e74ed030c19..b499373ca4f 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -56,7 +56,7 @@ For authentication, we highly recommend that your integration uses account servi - **Examples:** Kicking off a dbt Job after the ETL job of extracting and loading the data is completed. Or receiving a webhook after the job has been completed to kick off your reverse ETL job. - **Integration points** — Webhooks and/or Admin API - dbt lineage - - **Desired Action:** You wish to interpolate the dbt lineage metadata into your tool. + - **Desired action** — You want to interpolate the dbt lineage metadata into your tool. - **Example: In your tool, you wish to pull in the dbt DAG into your lineage diagram. [This is what you could pull and how to do this.](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-use-cases-and-examples#whats-the-full-data-lineage)** - **Integration Points:** Discovery API - dbt Environment/Job metadata From 50f4b3c800eb5d7c2c71783c53073c8299abccc7 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 16:00:09 -0500 Subject: [PATCH 066/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index b499373ca4f..bd9649866ae 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -53,7 +53,7 @@ For authentication, we highly recommend that your integration uses account servi - Event-based orchestration - **Desired action** — You want to receive information that a scheduled dbt Cloud job has been completed or has kicked off a dbt Cloud job. You can align your product schedule to the dbt Cloud run schedule. - - **Examples:** Kicking off a dbt Job after the ETL job of extracting and loading the data is completed. Or receiving a webhook after the job has been completed to kick off your reverse ETL job. + - **Examples** — Kicking off a dbt job after the ETL job of extracting and loading the data is completed. Or receiving a webhook after the job has been completed to kick off your reverse ETL job. - **Integration points** — Webhooks and/or Admin API - dbt lineage - **Desired action** — You want to interpolate the dbt lineage metadata into your tool. 
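
For the lineage use case just above, here is a sketch of what the Discovery API call can look like against the environment-level schema. The GraphQL field names mirror the documented lineage examples; verify them in the schema explorer before relying on them, since the schema evolves.

```python
# A sketch of pulling model-to-model lineage from the Discovery API's
# environment-level schema. METADATA_URL varies by region, and the field
# names are assumptions based on the documented lineage examples.
import requests

METADATA_URL = "https://metadata.cloud.getdbt.com/graphql"
SERVICE_TOKEN = "dbtc_xxx"   # service token with metadata permissions
ENVIRONMENT_ID = 1234        # placeholder production environment ID

QUERY = """
query Lineage($environmentId: BigInt!, $first: Int!) {
  environment(id: $environmentId) {
    applied {
      models(first: $first) {
        edges {
          node {
            uniqueId
            name
            parents { uniqueId }
          }
        }
      }
    }
  }
}
"""

resp = requests.post(
    METADATA_URL,
    headers={"Authorization": f"Bearer {SERVICE_TOKEN}"},
    json={"query": QUERY, "variables": {"environmentId": ENVIRONMENT_ID, "first": 500}},
    timeout=30,
)
resp.raise_for_status()
for edge in resp.json()["data"]["environment"]["applied"]["models"]["edges"]:
    node = edge["node"]
    # Each model plus the unique IDs of its upstream parents: enough to
    # reconstruct the DAG in your own lineage diagram.
    print(node["uniqueId"], "<-", [p["uniqueId"] for p in node["parents"]])
```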
From f235d7278e78b39c9c2459a420f6f53381967c4e Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 16:00:22 -0500 Subject: [PATCH 067/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index bd9649866ae..50562816369 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -49,7 +49,7 @@ If the customer is on an Azure single tenant instance, they don't currently have For authentication, we highly recommend that your integration uses account service tokens. You can read more about [how to create a service token and what permission sets to provide it](/docs/dbt-cloud-apis/service-tokens). Please note that depending on their plan type, they'll have access to different permission sets. We _do not_ recommend that users supply their user bearer tokens for authentication. This can cause issues if the user leaves the organization and provides you access to all the dbt Cloud accounts associated to the user rather than just the account (and related projects) that they want to integrate with. -## Potential Use Cases +## Potential use cases - Event-based orchestration - **Desired action** — You want to receive information that a scheduled dbt Cloud job has been completed or has kicked off a dbt Cloud job. You can align your product schedule to the dbt Cloud run schedule. From d9a9b6c2db809a8c168c30288caed3b8947ca9a4 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 16:00:52 -0500 Subject: [PATCH 068/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 1 - 1 file changed, 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 50562816369..2be52eee2dc 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -35,7 +35,6 @@ If you require a partner dbt Cloud account to test on, we can upgrade an existin - **Overview** — This REST API allows you to orchestrate dbt Cloud jobs runs and help you administer a dbt Cloud account. For metadata retrieval, we recommend integrating with the Discovery API instead. - [Webhooks](/docs/deploy/webhooks) - **Overview** — Outbound webhooks can send notifications about your dbt Cloud jobs to other systems. These webhooks allow you to get the latest information about your dbt jobs in real time. - - [Link to documentation](https://docs.getdbt.com/docs/deploy/webhooks) - [Semantic Layers/Metrics](/docs/dbt-cloud-apis/sl-api-overview) - **Overview** — Our Semantic Layer is made up of two parts: metrics definitions and the ability to interactively query the dbt metrics. For more details, here is a [basic overview](/docs/use-dbt-semantic-layer/dbt-sl) and [our best practices](/guides/dbt-ecosystem/sl-partner-integration-guide). - Metrics definitions can be pulled from the Discovery API (linked above) or the Semantic Layer Driver/GraphQL API. 
 The key difference is that the Discovery API isn't able to pull the semantic graph, which provides the list of available dimensions that one can query per metric. That is only available with the SL Driver/APIs. The trade-off is that the SL Driver/APIs doesn't have access to the lineage of the entire dbt project (that is, how the dbt metrics depend on dbt models).

From e64fede56647042d7dd0e1d3b57a102ad56da35c Mon Sep 17 00:00:00 2001
From: Amy Chen <46451573+amychen1776@users.noreply.github.com>
Date: Wed, 20 Dec 2023 16:01:22 -0500
Subject: [PATCH 069/204] Update website/blog/2023-12-20-partner-integration-guide.md

Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
---
 website/blog/2023-12-20-partner-integration-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md
index 2be52eee2dc..5440a5d9f23 100644
--- a/website/blog/2023-12-20-partner-integration-guide.md
+++ b/website/blog/2023-12-20-partner-integration-guide.md
@@ -56,7 +56,7 @@ For authentication, we highly recommend that your integration uses account servi
   - **Integration points** — Webhooks and/or Admin API
 - dbt lineage
   - **Desired action** — You want to interpolate the dbt lineage metadata into your tool.
-  - **Example: In your tool, you wish to pull in the dbt DAG into your lineage diagram. [This is what you could pull and how to do this.](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-use-cases-and-examples#whats-the-full-data-lineage)**
+  - **Example** — In your tool, you want to pull in the dbt DAG into your lineage diagram. For details on what you could pull and how to do this, refer to [Use cases and examples for the Discovery API](/docs/dbt-cloud-apis/discovery-use-cases-and-examples).

From c6bae5ec52d0e38a0611e9bde3fa921845f2bc2c Mon Sep 17 00:00:00 2001
From: Amy Chen <46451573+amychen1776@users.noreply.github.com>
Date: Wed, 20 Dec 2023 16:01:33 -0500
Subject: [PATCH 070/204] Update website/blog/2023-12-20-partner-integration-guide.md

Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
---
 website/blog/2023-12-20-partner-integration-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md
index 5440a5d9f23..2ac518fda6e 100644
--- a/website/blog/2023-12-20-partner-integration-guide.md
+++ b/website/blog/2023-12-20-partner-integration-guide.md
@@ -58,7 +58,7 @@ For authentication, we highly recommend that your integration uses account servi
 - dbt lineage
   - **Desired action** — You want to interpolate the dbt lineage metadata into your tool.
   - **Example** — In your tool, you want to pull in the dbt DAG into your lineage diagram. For details on what you could pull and how to do this, refer to [Use cases and examples for the Discovery API](/docs/dbt-cloud-apis/discovery-use-cases-and-examples).
   - **Integration Points:** Discovery API
-- dbt Environment/Job metadata
+- dbt environment/job metadata
   - **Desired Action:** You wish to interpolate dbt Cloud job information into your tool, including the status of the jobs, the status of the tables executed in the run, what tests passed, etc.
   - **Example:** In your Business Intelligence tool, stakeholders select from tables that a dbt model created. You show the last time the model passed its tests/last run to show that the tables are current and can be trusted. [This is what you could pull and how to do this.](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-use-cases-and-examples#whats-the-latest-state-of-each-model)
   - **Integration Points:** Discovery API

From ff218889307cb820046359b0b27b70f06303545e Mon Sep 17 00:00:00 2001
From: Amy Chen <46451573+amychen1776@users.noreply.github.com>
Date: Wed, 20 Dec 2023 16:01:56 -0500
Subject: [PATCH 071/204] Update website/blog/2023-12-20-partner-integration-guide.md

Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
---
 website/blog/2023-12-20-partner-integration-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md
index 2ac518fda6e..d89d32381bf 100644
--- a/website/blog/2023-12-20-partner-integration-guide.md
+++ b/website/blog/2023-12-20-partner-integration-guide.md
@@ -57,7 +57,7 @@ For authentication, we highly recommend that your integration uses account servi
 - dbt lineage
   - **Desired action** — You want to interpolate the dbt lineage metadata into your tool.
   - **Example** — In your tool, you want to pull in the dbt DAG into your lineage diagram. For details on what you could pull and how to do this, refer to [Use cases and examples for the Discovery API](/docs/dbt-cloud-apis/discovery-use-cases-and-examples).
-  - **Integration Points:** Discovery API
+  - **Integration points** — Discovery API
 - dbt environment/job metadata
   - **Desired Action:** You wish to interpolate dbt Cloud job information into your tool, including the status of the jobs, the status of the tables executed in the run, what tests passed, etc.
   - **Example:** In your Business Intelligence tool, stakeholders select from tables that a dbt model created. You show the last time the model passed its tests/last run to show that the tables are current and can be trusted. [This is what you could pull and how to do this.](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-use-cases-and-examples#whats-the-latest-state-of-each-model)

From f9d6b1e611cb6f33b4a73af9092c60b71ea71afb Mon Sep 17 00:00:00 2001
From: Amy Chen <46451573+amychen1776@users.noreply.github.com>
Date: Wed, 20 Dec 2023 16:07:02 -0500
Subject: [PATCH 072/204] Update website/blog/2023-12-20-partner-integration-guide.md

Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
---
 website/blog/2023-12-20-partner-integration-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md
index d89d32381bf..b938dca050c 100644
--- a/website/blog/2023-12-20-partner-integration-guide.md
+++ b/website/blog/2023-12-20-partner-integration-guide.md
@@ -93,7 +93,7 @@ For authentication, we highly recommend that your integration uses account servi
 - How do I get a Partner Account?
   - Contact your Partner Manager with your account id (in your URL)
 - Why should I not use the Admin API to pull out the dbt artifacts for metadata?
   - We recommend not integrating with the Admin API to extract the dbt artifacts documentation. This is because the Discovery API provides more extensive information, a user-friendly structure and more reliable integration point.
-- How do I get access to the dbt Brand assets?
+- How do I get access to the dbt brand assets?
   - Check out this [page](https://www.getdbt.com/brand-guidelines/). Please make sure you’re not using our old logo(hint: there should only be one hole in the logo). Please also note that the name dbt and the dbt logo are trademarked by dbt Labs, and that use is governed by our brand guidelines - which are fairly specific for commercial uses. If you have any questions about proper use of our marks, please ask for your partner manager.
 - How do I engage with the partnerships team?
   - Email partnerships@dbtlabs.com.

From 3c91a599841f2e7c1bf8efdd3ba9ba5e05a408c2 Mon Sep 17 00:00:00 2001
From: Amy Chen <46451573+amychen1776@users.noreply.github.com>
Date: Wed, 20 Dec 2023 16:07:10 -0500
Subject: [PATCH 073/204] Update website/blog/2023-12-20-partner-integration-guide.md

Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
---
 website/blog/2023-12-20-partner-integration-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md
index 27541fa232b..e28b9ccf9d1 100644
--- a/website/blog/2023-12-20-partner-integration-guide.md
+++ b/website/blog/2023-12-20-partner-integration-guide.md
@@ -94,6 +94,6 @@ For authentication, we highly recommend that your integration uses account servi
 - Why should I not use the Admin API to pull out the dbt artifacts for metadata?
   - We recommend not integrating with the Admin API to extract the dbt artifacts documentation. This is because the Discovery API provides more extensive information, a user-friendly structure and more reliable integration point.
 - How do I get access to the dbt brand assets?
-  - Check out this [page](https://www.getdbt.com/brand-guidelines/). Please make sure you’re not using our old logo(hint: there should only be one hole in the logo). Please also note that the name dbt and the dbt logo are trademarked by dbt Labs, and that use is governed by our brand guidelines - which are fairly specific for commercial uses. If you have any questions about proper use of our marks, please ask for your partner manager.
+  - Check out our [Brand guidelines](https://www.getdbt.com/brand-guidelines/) page. Please make sure you’re not using our old logo (hint: there should only be one hole in the logo). Please also note that the name dbt and the dbt logo are trademarked by dbt Labs, and that use is governed by our brand guidelines, which are fairly specific for commercial uses. If you have any questions about proper use of our marks, please ask your partner manager.
 - How do I engage with the partnerships team?
   - Email partnerships@dbtlabs.com.
\ No newline at end of file

From b8d54492535a21217827ae763b38ed6a341a52ee Mon Sep 17 00:00:00 2001
From: Amy Chen <46451573+amychen1776@users.noreply.github.com>
Date: Wed, 20 Dec 2023 16:07:17 -0500
Subject: [PATCH 074/204] Update website/blog/2023-12-20-partner-integration-guide.md

Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
---
 website/blog/2023-12-20-partner-integration-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md
index e28b9ccf9d1..fd269137235 100644
--- a/website/blog/2023-12-20-partner-integration-guide.md
+++ b/website/blog/2023-12-20-partner-integration-guide.md
@@ -92,7 +92,7 @@ For authentication, we highly recommend that your integration uses account servi
 - How do I get a Partner Account?
   - Contact your Partner Manager with your account id (in your URL)
 - Why should I not use the Admin API to pull out the dbt artifacts for metadata?
-  - We recommend not integrating with the Admin API to extract the dbt artifacts documentation. This is because the Discovery API provides more extensive information, a user-friendly structure and more reliable integration point.
+  - We recommend not integrating with the Admin API to extract the dbt artifacts documentation. This is because the Discovery API provides more extensive information, a user-friendly structure, and a more reliable integration point.
 - How do I get access to the dbt brand assets?
   - Check out our [Brand guidelines](https://www.getdbt.com/brand-guidelines/) page. Please make sure you’re not using our old logo (hint: there should only be one hole in the logo). Please also note that the name dbt and the dbt logo are trademarked by dbt Labs, and that use is governed by our brand guidelines, which are fairly specific for commercial uses. If you have any questions about proper use of our marks, please ask your partner manager.

From 298cd464fb60f6a93bc5525dd47a9e73c67125be Mon Sep 17 00:00:00 2001
From: Amy Chen <46451573+amychen1776@users.noreply.github.com>
Date: Wed, 20 Dec 2023 16:07:22 -0500
Subject: [PATCH 075/204] Update website/blog/2023-12-20-partner-integration-guide.md

Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
---
 website/blog/2023-12-20-partner-integration-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md
index fd269137235..7c4cdef78c5 100644
--- a/website/blog/2023-12-20-partner-integration-guide.md
+++ b/website/blog/2023-12-20-partner-integration-guide.md
@@ -90,7 +90,7 @@ For authentication, we highly recommend that your integration uses account servi
 - Can you provide column-level information in the lineage?
   - Column-level lineage is currently in beta release with more information to come.
 - How do I get a Partner Account?
-  - Contact your Partner Manager with your account id (in your URL)
+  - Contact your Partner Manager with your account ID (in your URL).
 - Why should I not use the Admin API to pull out the dbt artifacts for metadata?
   - We recommend not integrating with the Admin API to extract the dbt artifacts documentation. This is because the Discovery API provides more extensive information, a user-friendly structure, and a more reliable integration point.
 - How do I get access to the dbt brand assets?
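To make the Admin API versus Discovery API FAQ above concrete, here is a rough sketch of pulling the latest model state straight from the Discovery API instead of downloading artifacts. The GraphQL shape mirrors the examples in the Discovery API docs; treat the field names and the environment ID as illustrative rather than a fixed contract, since the schema evolves:

```python
import os

import requests

DISCOVERY_URL = "https://metadata.cloud.getdbt.com/graphql"
TOKEN = os.environ["DBT_CLOUD_SERVICE_TOKEN"]
ENVIRONMENT_ID = 218  # placeholder deployment environment ID

# Latest run status for each model in the environment's applied state.
QUERY = """
query LatestModelState($environmentId: BigInt!, $first: Int!) {
  environment(id: $environmentId) {
    applied {
      models(first: $first) {
        edges {
          node {
            name
            executionInfo {
              lastRunStatus
              executeCompletedAt
            }
          }
        }
      }
    }
  }
}
"""

resp = requests.post(
    DISCOVERY_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"query": QUERY, "variables": {"environmentId": ENVIRONMENT_ID, "first": 100}},
    timeout=30,
)
resp.raise_for_status()
for edge in resp.json()["data"]["environment"]["applied"]["models"]["edges"]:
    node = edge["node"]
    print(node["name"], node["executionInfo"]["lastRunStatus"])
```

Because the response is already parsed per model, there is no need to reassemble state from `manifest.json` and `run_results.json` yourself, which is the main point of the recommendation above.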
From 1635cecbb0c2dc1cc4011896cee03aa3d2cd0a6b Mon Sep 17 00:00:00 2001
From: Amy Chen <46451573+amychen1776@users.noreply.github.com>
Date: Wed, 20 Dec 2023 16:07:27 -0500
Subject: [PATCH 076/204] Update website/blog/2023-12-20-partner-integration-guide.md

Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
---
 website/blog/2023-12-20-partner-integration-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md
index 7c4cdef78c5..22fbbbafbb7 100644
--- a/website/blog/2023-12-20-partner-integration-guide.md
+++ b/website/blog/2023-12-20-partner-integration-guide.md
@@ -91,7 +91,7 @@ For authentication, we highly recommend that your integration uses account servi
   - Column-level lineage is currently in beta release with more information to come.
 - How do I get a Partner Account?
   - Contact your Partner Manager with your account ID (in your URL).
-- Why should I not use the Admin API to pull out the dbt artifacts for metadata?
+- Why shouldn't I use the Admin API to pull out the dbt artifacts for metadata?
   - We recommend not integrating with the Admin API to extract the dbt artifacts documentation. This is because the Discovery API provides more extensive information, a user-friendly structure, and a more reliable integration point.
 - How do I get access to the dbt brand assets?
   - Check out our [Brand guidelines](https://www.getdbt.com/brand-guidelines/) page. Please make sure you’re not using our old logo (hint: there should only be one hole in the logo). Please also note that the name dbt and the dbt logo are trademarked by dbt Labs, and that use is governed by our brand guidelines, which are fairly specific for commercial uses. If you have any questions about proper use of our marks, please ask your partner manager.

From 2f2666897b966fcd909d6ccabe6ae1f99260c391 Mon Sep 17 00:00:00 2001
From: Amy Chen <46451573+amychen1776@users.noreply.github.com>
Date: Wed, 20 Dec 2023 16:07:43 -0500
Subject: [PATCH 077/204] Update website/blog/2023-12-20-partner-integration-guide.md

Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
---
 website/blog/2023-12-20-partner-integration-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md
index 22fbbbafbb7..9869ffd5bda 100644
--- a/website/blog/2023-12-20-partner-integration-guide.md
+++ b/website/blog/2023-12-20-partner-integration-guide.md
@@ -60,7 +60,7 @@ For authentication, we highly recommend that your integration uses account servi
 - dbt environment/job metadata
   - **Desired Action:** You wish to interpolate dbt Cloud job information into your tool, including the status of the jobs, the status of the tables executed in the run, what tests passed, etc.
-  - **Example:** In your Business Intelligence tool, stakeholders select from tables that a dbt model created. You show the last time the model passed its tests/last run to show that the tables are current and can be trusted. [This is what you could pull and how to do this.](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-use-cases-and-examples#whats-the-latest-state-of-each-model)
+  - **Example** — In your Business Intelligence tool, stakeholders select from tables that a dbt model created.
You show the last time the model passed its tests/last run to show that the tables are current and can be trusted. For details on what you could pull and how to do this, refer to [What's the latest state of each model](/docs/dbt-cloud-apis/discovery-use-cases-and-examples#whats-the-latest-state-of-each-model).
   - **Integration Points:** Discovery API
 - dbt Model Documentation
   - **Desired Action:** You wish to interpolate dbt Project Information, including model descriptions, column descriptions, etc.

From eefe9e0052890abf23e561212020cfdd4df457f2 Mon Sep 17 00:00:00 2001
From: Amy Chen <46451573+amychen1776@users.noreply.github.com>
Date: Wed, 20 Dec 2023 16:07:50 -0500
Subject: [PATCH 078/204] Update website/blog/2023-12-20-partner-integration-guide.md

Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
---
 website/blog/2023-12-20-partner-integration-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md
index ff39616b5fb..9fe53141dc5 100644
--- a/website/blog/2023-12-20-partner-integration-guide.md
+++ b/website/blog/2023-12-20-partner-integration-guide.md
@@ -63,7 +63,7 @@ For authentication, we highly recommend that your integration uses account servi
   - **Example** — In your Business Intelligence tool, stakeholders select from tables that a dbt model created. You show the last time the model passed its tests/last run to show that the tables are current and can be trusted. For details on what you could pull and how to do this, refer to [What's the latest state of each model](/docs/dbt-cloud-apis/discovery-use-cases-and-examples#whats-the-latest-state-of-each-model).
-  - **Integration Points:** Discovery API
+  - **Integration points** — Discovery API
 - dbt Model Documentation
-  - **Desired Action:** You wish to interpolate dbt Project Information, including model descriptions, column descriptions, etc.
+  - **Desired action** — You want to interpolate the dbt project Information, including model descriptions, column descriptions, etc.
For details on what you could pull and how to do this, refer to [Use cases and examples for the Discovery API](/docs/dbt-cloud-apis/discovery-use-cases-and-examples).
   - **Integration points** — Discovery API
 - dbt environment/job metadata
   - **Desired action** — You want to interpolate the dbt Cloud job information into your tool, including the status of the jobs, the status of the tables executed in the run, what tests passed, etc.
   - **Example** — In your Business Intelligence tool, stakeholders select from tables that a dbt model created. You show the last time the model passed its tests/last run to show that the tables are current and can be trusted.
 [This is what you could pull and how to do this.](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-use-cases-and-examples#what-does-this-dataset-and-its-columns-mean)
   - **Integration Points:** Discovery API

From fbc844d651b710c4376795a13340ce3f5ddafdda Mon Sep 17 00:00:00 2001
From: Amy Chen <46451573+amychen1776@users.noreply.github.com>
Date: Wed, 20 Dec 2023 16:08:20 -0500
Subject: [PATCH 082/204] Update website/blog/2023-12-20-partner-integration-guide.md

Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
---
 website/blog/2023-12-20-partner-integration-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md
index dd5ae6bad35..666ecd53f8d 100644
--- a/website/blog/2023-12-20-partner-integration-guide.md
+++ b/website/blog/2023-12-20-partner-integration-guide.md
@@ -73,7 +73,7 @@ dbt Core only users will have no access to the above integration points. For dbt
 [The dbt Cloud plan type](https://www.getdbt.com/pricing) will change what the user has access to. There are four different types of plans:

-- **Developer**: This is free and available to one user with a limited amount of successful models built. This plan cannot access the APIs, Webhooks, or Semantic Layer. Limited to 1 project.
+- **Developer** — This is free and available to one user with a limited amount of successful models built. This plan can't access the APIs, Webhooks, or Semantic Layer and is limited to just one project.
 - **Team:** This plan has access to the APIs, Webhooks, and Semantic Layer. You may have up to 8 users on the account and one dbt Cloud Project. This is limited to 15,000 successful models built.
 - **Enterprise** (Multi-tenant/Multi-cell): This plan has access to the APIs, Webhooks, and Semantic Layer. They may have more than one dbt Cloud Project based on how many dbt projects/domains they have using dbt. Majority of our enterprise customers are on multi-tenant dbt Cloud instances.
 - **Enterprise** (Single-tenant): This plan may have access to the APIs, Webhooks, and Semantic Layer. If you are working with a specific customer, let us know, and we can confirm if their instance has access.

From dbb3924f965f4f5343d9a70e1ce55f76e59b6f1b Mon Sep 17 00:00:00 2001
From: Amy Chen <46451573+amychen1776@users.noreply.github.com>
Date: Wed, 20 Dec 2023 16:08:42 -0500
Subject: [PATCH 087/204] Update website/blog/2023-12-20-partner-integration-guide.md

Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
---
 website/blog/2023-12-20-partner-integration-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md
index 666ecd53f8d..fd1c4a9a6f8 100644
--- a/website/blog/2023-12-20-partner-integration-guide.md
+++ b/website/blog/2023-12-20-partner-integration-guide.md
@@ -75,7 +75,7 @@ dbt Core only users will have no access to the above integration points. For dbt
 - **Developer** — This is free and available to one user with a limited amount of successful models built. This plan can't access the APIs, Webhooks, or Semantic Layer and is limited to just one project.
 - **Team:** This plan has access to the APIs, Webhooks, and Semantic Layer. You may have up to 8 users on the account and one dbt Cloud Project. This is limited to 15,000 successful models built.
-- **Enterprise** (Multi-tenant/Multi-cell): This plan has access to the APIs, Webhooks, and Semantic Layer. They may have more than one dbt Cloud Project based on how many dbt projects/domains they have using dbt. Majority of our enterprise customers are on multi-tenant dbt Cloud instances.
+- **Enterprise** (multi-tenant/multi-cell) — This plan provides access to the APIs, webhooks, and Semantic Layer. You can have more than one dbt Cloud project based on how many dbt projects/domains they have using dbt. The majority of our enterprise customers are on multi-tenant dbt Cloud instances.
 - **Enterprise** (Single-tenant): This plan may have access to the APIs, Webhooks, and Semantic Layer. If you are working with a specific customer, let us know, and we can confirm if their instance has access.

From 4358a4d752b06c8ec8bdebb9545fece1934101fc Mon Sep 17 00:00:00 2001
From: Amy Chen <46451573+amychen1776@users.noreply.github.com>
Date: Wed, 20 Dec 2023 16:08:49 -0500
Subject: [PATCH 088/204] Update website/blog/2023-12-20-partner-integration-guide.md

Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
---
 website/blog/2023-12-20-partner-integration-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md
index 1f9d587b0f1..22fbbbafbb7 100644
--- a/website/blog/2023-12-20-partner-integration-guide.md
+++ b/website/blog/2023-12-20-partner-integration-guide.md
@@ -76,7 +76,7 @@ dbt Core only users will have no access to the above integration points. For dbt
 - **Developer** — This is free and available to one user with a limited amount of successful models built. This plan can't access the APIs, Webhooks, or Semantic Layer and is limited to just one project.
-- **Team** — This plan provides access to the APIs, webhooks, and Semantic Layer. You can have up to eight users on the account and one dbt Cloud Project. This is limited to 15,000 successful models built.
 - **Enterprise** (multi-tenant/multi-cell) — This plan provides access to the APIs, webhooks, and Semantic Layer. You can have more than one dbt Cloud project based on how many dbt projects/domains they have using dbt. The majority of our enterprise customers are on multi-tenant dbt Cloud instances.
+- **Enterprise** (single tenant): This plan might have access to the APIs, webhooks, and Semantic Layer.
If you're working with a specific customer, let us know and we can confirm if their instance has access.

-## Frequently Asked Questions
+## FAQs

 - What is a dbt Cloud project?
   - A dbt Cloud project is made up of two connections: one to the git repository and one to the data warehouse/platform. Most customers will have only one dbt Cloud Project in their account but there are enterprise clients who might have more depending on their use cases.The project also encapsulates two types of environments at minimal: a development environment and deployment environment.
+  - A dbt Cloud project is made up of two connections: one to the Git repository and one to the data warehouse/platform. Most customers will have only one dbt Cloud project in their account but there are enterprise clients who might have more depending on their use cases. The project also encapsulates two types of environments at minimal: a development environment and deployment environment.
   - Oftentimes folks refer to the [dbt Project](https://docs.getdbt.com/docs/build/projects) as the code hosted in their git repository.
-- What is a dbt Cloud Environment?
+- What is a dbt Cloud environment?
   - [For an overview, check out this documentation.](https://docs.getdbt.com/docs/environments-in-dbt) At minimal an project will have one deployment type environment that they will be executing jobs on. The development environment powers the dbt Cloud IDE and Cloud CLI.
 - Can we write back to the dbt project?
   - At this moment, we do not have a Write API. A dbt project is hosted in a git repository, so if you have a git provider integration, you can manually open up a Pull Request on the project to maintain the version control process.
 - Can you provide column-level information in the lineage?
   - Column-level lineage is currently in beta release with more information to come.
 - How do I get a Partner Account?
From ea0ae3caab2db33a7bcb2a0a4bbcb6f980b7f28d Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 16:09:48 -0500 Subject: [PATCH 090/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 2dd7b6765e1..1e9e41c4993 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -84,7 +84,7 @@ dbt Core only users will have no access to the above integration points. For dbt - A dbt Cloud project is made up of two connections: one to the Git repository and one to the data warehouse/platform. Most customers will have only one dbt Cloud project in their account but there are enterprise clients who might have more depending on their use cases. The project also encapsulates two types of environments at minimal: a development environment and deployment environment. - Oftentimes folks refer to the [dbt Project](https://docs.getdbt.com/docs/build/projects) as the code hosted in their git repository. - What is a dbt Cloud environment? - - [For an overview, check out this documentation.](https://docs.getdbt.com/docs/environments-in-dbt) At minimal an project will have one deployment type environment that they will be executing jobs on. The development environment powers the dbt Cloud IDE and Cloud CLI. + - For an overview, check out [About environments](https://docs.getdbt.com/docs/environments-in-dbt). At a minimum, a project will have one deployment type environment that they will be executing jobs on. The development environment powers the dbt Cloud IDE and Cloud CLI. - Can we write back to the dbt project? - At this moment, we don't have a Write API. A dbt project is hosted in a Git repository, so if you have a Git provider integration, you can manually open a pull request (PR) on the project to maintain the version control process. - Can you provide column-level information in the lineage? From 872c676f423ebb9338c8f4839059878172e5e1ec Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 16:09:56 -0500 Subject: [PATCH 091/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 1e9e41c4993..a145bd71744 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -62,7 +62,7 @@ For authentication, we highly recommend that your integration uses account servi - **Desired action** — You want to interpolate the dbt Cloud job information into your tool, including the status of the jobs, the status of the tables executed in the run, what tests passed, etc. - **Example** — In your Business Intelligence tool, stakeholders select from tables that a dbt model created. You show the last time the model passed its tests/last run to show that the tables are current and can be trusted. 
For details on what you could pull and how to do this, refer to [What does this dataset and its columns mean](/docs/dbt-cloud-apis/discovery-use-cases-and-examples#what-does-this-dataset-and-its-columns-mean).
   - **Integration points** — Discovery API

-**dbt Core only users will have no access to the above integration points.** For dbt metadata, oftentimes our partners will create a dbt core integration by using the [dbt artifacts](https://www.getdbt.com/product/semantic-layer/) files generated by each run and provided by the user. With our Discovery API, we are providing a dynamic way to get the latest up to date information, parsed out for you.
+dbt Core only users will have no access to the above integration points. For dbt metadata, oftentimes our partners will create a dbt Core integration by using the [dbt artifact](https://www.getdbt.com/product/semantic-layer/) files generated by each run and provided by the user. With the Discovery API, we are providing a dynamic way to get the latest information parsed out for you.
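For the dbt Core fallback mentioned just above, integrations typically parse the artifact files a run writes to `target/`. Here is a minimal sketch, assuming the user supplies `manifest.json` and `run_results.json` from the same invocation; the key names follow the documented artifact schemas:

```python
import json
from pathlib import Path

TARGET_DIR = Path("target")  # dbt writes artifacts here after a run/build

manifest = json.loads((TARGET_DIR / "manifest.json").read_text())
run_results = json.loads((TARGET_DIR / "run_results.json").read_text())

# Latest status per node, keyed by unique_id such as "model.jaffle_shop.orders".
status_by_id = {r["unique_id"]: r["status"] for r in run_results["results"]}

for unique_id, node in manifest["nodes"].items():
    if node["resource_type"] != "model":
        continue
    print(
        unique_id,
        status_by_id.get(unique_id, "not run"),
        node.get("description", ""),
        node["depends_on"]["nodes"],  # upstream edges for lineage
    )
```

Unlike the Discovery API, this only reflects whatever run the user handed you, so freshness and staleness handling fall on the integration.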
From 57ac462015dc064b320ba4b96466b85a3d82441a Mon Sep 17 00:00:00 2001
From: Amy Chen <46451573+amychen1776@users.noreply.github.com>
Date: Wed, 20 Dec 2023 16:10:11 -0500
Subject: [PATCH 092/204] Update website/blog/2023-12-20-partner-integration-guide.md

Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
---
 website/blog/2023-12-20-partner-integration-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md
index 2b482eca245..8823e90b8b7 100644
--- a/website/blog/2023-12-20-partner-integration-guide.md
+++ b/website/blog/2023-12-20-partner-integration-guide.md
@@ -74,7 +74,7 @@ dbt Core only users will have no access to the above integration points. For dbt
 [The dbt Cloud plan type](https://www.getdbt.com/pricing) will change what the user has access to. There are four different types of plans:

 - **Developer** — This is free and available to one user with a limited amount of successful models built. This plan can't access the APIs, Webhooks, or Semantic Layer and is limited to just one project.
-- **Team:** This plan has access to the APIs, Webhooks, and Semantic Layer.
You may have up to 8 users on the account and one dbt Cloud Project. This is limited to 15,000 successful models built.
+- **Team** — This plan provides access to the APIs, webhooks, and Semantic Layer. You can have up to eight users on the account and one dbt Cloud Project. This is limited to 15,000 successful models built.
 - **Enterprise** (multi-tenant/multi-cell) — This plan provides access to the APIs, webhooks, and Semantic Layer. You can have more than one dbt Cloud project based on how many dbt projects/domains they have using dbt. The majority of our enterprise customers are on multi-tenant dbt Cloud instances.
 - **Enterprise** (single tenant): This plan might have access to the APIs, webhooks, and Semantic Layer. If you're working with a specific customer, let us know and we can confirm if their instance has access.

From 3f8d7c705f823a1ecfffb7f096da6c362bc01524 Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Wed, 20 Dec 2023 16:20:39 -0500
Subject: [PATCH 097/204] add rn

---
 .../dbt-versions/release-notes/79-July-2023/faster-run.md | 8 +++++++-
 website/docs/docs/deploy/job-scheduler.md | 2 +-
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md
index ba82234c0b5..02291af59df 100644
--- a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md
+++ b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md
## FAQs From 509f5a8a4aac7b7a924ad1736de2b4dc34d890f7 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Wed, 20 Dec 2023 16:10:26 -0500 Subject: [PATCH 096/204] Update website/blog/2023-12-20-partner-integration-guide.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 1f9d587b0f1..22fbbbafbb7 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -82,7 +82,7 @@ dbt Core only users will have no access to the above integration points. For dbt - What is a dbt Cloud project? - A dbt Cloud project is made up of two connections: one to the Git repository and one to the data warehouse/platform. Most customers will have only one dbt Cloud project in their account but there are enterprise clients who might have more depending on their use cases. The project also encapsulates two types of environments at minimal: a development environment and deployment environment. - - Oftentimes folks refer to the [dbt Project](https://docs.getdbt.com/docs/build/projects) as the code hosted in their git repository. + - Folks commonly refer to the [dbt project](https://docs.getdbt.com/docs/build/projects) as the code hosted in their Git repository. - What is a dbt Cloud environment? - For an overview, check out [About environments](https://docs.getdbt.com/docs/environments-in-dbt). At a minimum, a project will have one deployment type environment that they will be executing jobs on. The development environment powers the dbt Cloud IDE and Cloud CLI. - Can we write back to the dbt project? From 3f8d7c705f823a1ecfffb7f096da6c362bc01524 Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Wed, 20 Dec 2023 16:20:39 -0500 Subject: [PATCH 097/204] add rn --- .../dbt-versions/release-notes/79-July-2023/faster-run.md | 8 +++++++- website/docs/docs/deploy/job-scheduler.md | 2 +- 2 files changed, 8 insertions(+), 2 deletions(-) diff --git a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md index ba82234c0b5..21f301299ae 100644 --- a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md +++ b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md @@ -29,6 +29,12 @@ Our enhanced scheduler offers more durability and empowers users to run jobs eff This means Enterprise, multi-tenant accounts can now enjoy the advantages of unlimited job concurrency. Previously limited to a fixed number of run slots, Enterprise accounts now have the freedom to operate without constraints. Single-tenant support will be coming soon. Team plan customers will continue to have only 2 run slots. -Something to note, each running job occupies a run slot for its duration, and if all slots are occupied, jobs will queue accordingly. +Something to note, each running job occupies a run slot for its duration, and if all slots are occupied, jobs will queue accordingly. For more feature details, refer to the [dbt Cloud pricing page](https://www.getdbt.com/pricing/). + +- **Update December 2023: New Team plans with unlimited job concurrency**
+  We've introduced a change to our dbt Cloud Scheduler for newly created Team plan accounts: <br />

+  - Unlimited job concurrency for new Team plans — New accounts on the Team plan now benefit from unlimited job concurrency.
+  - Existing Team plans — It's important to note that existing Team plan accounts will continue to operate with their original fixed number of run slots.
+  - Project limitations — Both Team and Developer plans are limited to one project each. For larger-scale needs, our Enterprise plan, which provides unlimited job concurrency and project capacity, is an ideal upgrade. Refer to our [Enterprise plan page](https://www.getdbt.com/pricing/) for more details.

diff --git a/website/docs/docs/deploy/job-scheduler.md b/website/docs/docs/deploy/job-scheduler.md
index 1ace16f5ff5..df9cb09413e 100644
--- a/website/docs/docs/deploy/job-scheduler.md
+++ b/website/docs/docs/deploy/job-scheduler.md
@@ -31,7 +31,7 @@ Familiarize yourself with these useful terms to help you understand how the job
 | Over-scheduled job | A situation when a cron-scheduled job's run duration becomes longer than the frequency of the job’s schedule, resulting in a job queue that will grow faster than the scheduler can process the job’s runs. |
 | Prep time | The time dbt Cloud takes to create a short-lived environment to execute the job commands in the user's cloud data platform. Prep time varies most significantly at the top of the hour when the dbt Cloud Scheduler experiences a lot of run traffic. |
 | Run | A single, unique execution of a dbt job. |
-| Run slot | Run slots control the number of jobs that can run concurrently. Developer plan has a fixed number of run slots, and Enterprise & Team users have [unlimited run slots](/docs/dbt-versions/release-notes/July-2023/faster-run#unlimited-job-concurrency-for-enterprise-accounts). Each running job occupies a run slot for the duration of the run. If you need more jobs to execute in parallel, consider the [Enterprise plan](https://www.getdbt.com/pricing/) |
+| Run slot | Run slots control the number of jobs that can run concurrently. Developer plan has a fixed number of run slots, while Enterprise and Team users have [unlimited run slots](/docs/dbt-versions/release-notes/July-2023/faster-run#unlimited-job-concurrency-for-enterprise-accounts). Each running job occupies a run slot for the duration of the run. <br />

Team and Developer plans include only one project each. For additional projects, consider upgrading to the [Enterprise plan](https://www.getdbt.com/pricing/).| | Threads | When dbt builds a project's DAG, it tries to parallelize the execution by using threads. The [thread](/docs/running-a-dbt-project/using-threads) count is the maximum number of paths through the DAG that dbt can work on simultaneously. The default thread count in a job is 4. | | Wait time | Amount of time that dbt Cloud waits before running a job, either because there are no available slots or because a previous run of the same job is still in progress. | From 6732df021c4335faa35ba6cb106b4e6123ba7077 Mon Sep 17 00:00:00 2001 From: Ly Nguyen Date: Wed, 20 Dec 2023 13:22:55 -0800 Subject: [PATCH 098/204] Update repo caching --- website/snippets/_cloud-environments-info.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/website/snippets/_cloud-environments-info.md b/website/snippets/_cloud-environments-info.md index 6e096b83750..6400b29ea9f 100644 --- a/website/snippets/_cloud-environments-info.md +++ b/website/snippets/_cloud-environments-info.md @@ -42,6 +42,12 @@ For improved reliability and performance on your job runs, you can enable dbt Cl dbt Cloud caches your project's Git repo after each successful run and retains it for 8 days if there are no repo updates. It caches all packages regardless of installation method and does not fetch code outside of the job runs. +Below lists the situations when dbt Cloud uses the cached copy: + +- Git authentication fails. +- There are syntax errors in the `packages.yml` file. To catch these errors sooner, set up and use [continuous integration (CI)](/docs/deploy/continuous-integration). +- A package is incompatible with the dbt version being used. To catch this incompatibility sooner, set up and use [continuous integration (CI)](/docs/deploy/continuous-integration). + To enable Git repository caching, select **Account settings** from the gear menu and enable the **Repository caching** option. From 6cbf9963b7c23563943a1d09c23b20419d842897 Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Wed, 20 Dec 2023 16:31:53 -0500 Subject: [PATCH 099/204] small tweak --- .../docs/dbt-versions/release-notes/79-July-2023/faster-run.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md index 21f301299ae..02291af59df 100644 --- a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md +++ b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md @@ -37,4 +37,4 @@ For more feature details, refer to the [dbt Cloud pricing page](https://www.getd We've introduced a change to our dbt Cloud Scheduler for newly created Team plan accounts:

- Unlimited Job concurrency for new Team plans — New accounts on the Team plan now benefit from unlimited job concurrency. - Existing Team plans — It's important to note that existing Team plan accounts will continue to operate with their original fixed number of run slots. - - Project limitations — Both Team and Developer plans are limited to one project each. For larger-scale needs, our Enterprise plan, which provides unlimited job concurrency and project capacity, is an ideal upgrade. Refer to our [Enterprise plan page](https://www.getdbt.com/pricing/) for more details. + - Project limitations — Both Team and Developer plans are limited to one project each. For larger-scale needs, our [Enterprise plan](https://www.getdbt.com/pricing/) provides unlimited job concurrency and project capacity. From bf032954f4ef62b794fb216809342e65d61a4ade Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Wed, 20 Dec 2023 16:41:15 -0500 Subject: [PATCH 100/204] simplify --- .../docs/dbt-versions/release-notes/79-July-2023/faster-run.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md index 02291af59df..69ab76c6050 100644 --- a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md +++ b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md @@ -36,5 +36,5 @@ For more feature details, refer to the [dbt Cloud pricing page](https://www.getd - **Update December 2023: New Team plans with unlimited job concurrency**
We've introduced a change to our dbt Cloud Scheduler for newly created Team plan accounts:

- Unlimited Job concurrency for new Team plans — New accounts on the Team plan now benefit from unlimited job concurrency. - - Existing Team plans — It's important to note that existing Team plan accounts will continue to operate with their original fixed number of run slots. + - Existing Team plans — Existing Team plan accounts will continue to operate with their original fixed number of run slots. - Project limitations — Both Team and Developer plans are limited to one project each. For larger-scale needs, our [Enterprise plan](https://www.getdbt.com/pricing/) provides unlimited job concurrency and project capacity. From d459a08ed34f042bc4d4962eeb2f075be7408d0a Mon Sep 17 00:00:00 2001 From: Amy Chen Date: Wed, 20 Dec 2023 17:27:40 -0500 Subject: [PATCH 101/204] fix links --- .../2023-12-20-partner-integration-guide.md | 24 +++++++++---------- 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 22fbbbafbb7..1c1ea8f893c 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -15,7 +15,7 @@ is_featured: false Over the course of my three years running the Partner Engineering team at dbt Labs, the most common question I've been asked is, How do we integrate with dbt? Because those conversations often start out at the same place, I decided to create this guide so I’m no longer the blocker to fundamental information. This also allows us to skip the intro and get to the fun conversations so much faster, like what a joint solution for our customers would look like. -This guide doesn't include how to integrate with dbt Core. If you’re interested in creating a dbt adapter, please check out the [adapter development guide](/guides/dbt-ecosystem/adapter-development/1-what-are-adapters) instead. +This guide doesn't include how to integrate with dbt Core. If you’re interested in creating a dbt adapter, please check out the [adapter development guide](https://docs.getdbt.com/guides/dbt-ecosystem/adapter-development/1-what-are-adapters) instead. Instead, we're going to focus on integrating with dbt Cloud. Integrating with dbt Cloud is a key requirement to become a dbt Labs technology partner, opening the door to a variety of collaborative commercial opportunities. @@ -23,30 +23,30 @@ Here I'll cover how to get started, potential use cases you want to solve for, a ## New to dbt Cloud? -If you're new to dbt and dbt Cloud, we recommend you and your software developers try our [Getting Started Quickstarts](/guides) after reading [What is dbt?](/docs/introduction). The documentation will help you familiarize yourself with how our users interact with dbt. By going through this, you will also create a sample dbt project to test your integration. +If you're new to dbt and dbt Cloud, we recommend you and your software developers try our [Getting Started Quickstarts](https://docs.getdbt.com/guides) after reading [What is dbt?](https://docs.getdbt.com/docs/introduction). The documentation will help you familiarize yourself with how our users interact with dbt. By going through this, you will also create a sample dbt project to test your integration. If you require a partner dbt Cloud account to test on, we can upgrade an existing account or a trial account. This account may only be used for development, training, and demonstration purposes. 
Please contact your partner manager if you're interested and provide the account ID (provided in the URL). Our partner account includes all of the enterprise level functionality and can be provided with a signed partnerships agreement. ## Integration points -- [Discovery API (formerly referred to as Metadata API)](/docs/dbt-cloud-apis/discovery-api) +- [Discovery API (formerly referred to as Metadata API)](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-api) - **Overview** — This GraphQL API allows you to query the metadata that dbt Cloud generates every time you run a dbt project. We have two schemas available (environment and job level). By default, we always recommend that you integrate with the environment level schema because it contains the latest state and historical run results of all the jobs run on the dbt Cloud project. The job level will only provide you the metadata of one job, giving you only a small snapshot of part of the project. -- [Administrative (Admin) API](/docs/dbt-cloud-apis/admin-cloud-api) +- [Administrative (Admin) API](https://docs.getdbt.com/docs/dbt-cloud-apis/admin-cloud-api) - **Overview** — This REST API allows you to orchestrate dbt Cloud jobs runs and help you administer a dbt Cloud account. For metadata retrieval, we recommend integrating with the Discovery API instead. -- [Webhooks](/docs/deploy/webhooks) +- [Webhooks](https://docs.getdbt.com/docs/deploy/webhooks) - **Overview** — Outbound webhooks can send notifications about your dbt Cloud jobs to other systems. These webhooks allow you to get the latest information about your dbt jobs in real time. -- [Semantic Layers/Metrics](/docs/dbt-cloud-apis/sl-api-overview) - - **Overview** — Our Semantic Layer is made up of two parts: metrics definitions and the ability to interactively query the dbt metrics. For more details, here is a [basic overview](/docs/use-dbt-semantic-layer/dbt-sl) and [our best practices](/guides/dbt-ecosystem/sl-partner-integration-guide). +- [Semantic Layers/Metrics](https://docs.getdbt.com/docs/dbt-cloud-apis/sl-api-overview) + - **Overview** — Our Semantic Layer is made up of two parts: metrics definitions and the ability to interactively query the dbt metrics. For more details, here is a [basic overview](https://docs.getdbt.com/docs/use-dbt-semantic-layer/dbt-sl) and [our best practices](https://docs.getdbt.com/guides/dbt-ecosystem/sl-partner-integration-guide). - Metrics definitions can be pulled from the Discovery API (linked above) or the Semantic Layer Driver/GraphQL API. The key difference is that the Discovery API isn't able to pull the semantic graph, which provides the list of available dimensions that one can query per metric. That is only available with the SL Driver/APIs. The trade-off is that the SL Driver/APIs doesn't have access to the lineage of the entire dbt project (that is, how the dbt metrics dependencies on dbt models). - Three integration points are available for the Semantic Layer API. ## dbt Cloud hosting and authentication -To use the dbt Cloud APIs, you'll need access to the customer’s access urls. Depending on their dbt Cloud setup, they'll have a different access URL. To find out more, refer to [Regions & IP addresses](/docs/cloud/about-cloud/regions-ip-addresses) to understand all the possible configurations. My recommendation is to allow the customer to provide their own URL to simplify support. +To use the dbt Cloud APIs, you'll need access to the customer’s access urls. Depending on their dbt Cloud setup, they'll have a different access URL. 
To find out more, refer to [Regions & IP addresses](https://docs.getdbt.com/docs/cloud/about-cloud/regions-ip-addresses) to understand all the possible configurations. My recommendation is to allow the customer to provide their own URL to simplify support. If the customer is on an Azure single tenant instance, they don't currently have access to the Discovery API or the Semantic Layer APIs. -For authentication, we highly recommend that your integration uses account service tokens. You can read more about [how to create a service token and what permission sets to provide it](/docs/dbt-cloud-apis/service-tokens). Please note that depending on their plan type, they'll have access to different permission sets. We _do not_ recommend that users supply their user bearer tokens for authentication. This can cause issues if the user leaves the organization and provides you access to all the dbt Cloud accounts associated to the user rather than just the account (and related projects) that they want to integrate with. +For authentication, we highly recommend that your integration uses account service tokens. You can read more about [how to create a service token and what permission sets to provide it](https://docs.getdbt.com/docs/dbt-cloud-apis/service-tokens). Please note that depending on their plan type, they'll have access to different permission sets. We _do not_ recommend that users supply their user bearer tokens for authentication. This can cause issues if the user leaves the organization and provides you access to all the dbt Cloud accounts associated to the user rather than just the account (and related projects) that they want to integrate with. ## Potential use cases @@ -56,15 +56,15 @@ For authentication, we highly recommend that your integration uses account servi - **Integration points** — Webhooks and/or Admin API - dbt lineage - **Desired action** — You want to interpolate the dbt lineage metadata into your tool. - - **Example** — In your tool, you want to pull in the dbt DAG into your lineage diagram. For details on what you could pull and how to do this, refer to [Use cases and examples for the Discovery API](/docs/dbt-cloud-apis/discovery-use-cases-and-examples). + - **Example** — In your tool, you want to pull in the dbt DAG into your lineage diagram. For details on what you could pull and how to do this, refer to [Use cases and examples for the Discovery API](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-use-cases-and-examples). - **Integration points** — Discovery API - dbt environment/job metadata - **Desired action** — You want to interpolate the dbt Cloud job information into your tool, including the status of the jobs, the status of the tables executed in the run, what tests passed, etc. - - **Example** — In your Business Intelligence tool, stakeholders select from tables that a dbt model created. You show the last time the model passed its tests/last run to show that the tables are current and can be trusted. For details on what you could pull and how to do this, refer to [What's the latest state of each model](/docs/dbt-cloud-apis/discovery-use-cases-and-examples#whats-the-latest-state-of-each-model). + - **Example** — In your Business Intelligence tool, stakeholders select from tables that a dbt model created. You show the last time the model passed its tests/last run to show that the tables are current and can be trusted. 
For details on what you could pull and how to do this, refer to [What's the latest state of each model](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-use-cases-and-examples#whats-the-latest-state-of-each-model). - **Integration points** — Discovery API - dbt model documentation - **Desired action** — You want to interpolate the dbt project Information, including model descriptions, column descriptions, etc. - - **Example** — You want to extract the dbt model description so you can display and help the stakeholder understand what they are selecting from. This way, the creators can easily pass on the information without updating another system. For details on what you could pull and how to do this, refer to [What does this dataset and its columns mean](/docs/dbt-cloud-apis/discovery-use-cases-and-examples#what-does-this-dataset-and-its-columns-mean). + - **Example** — You want to extract the dbt model description so you can display and help the stakeholder understand what they are selecting from. This way, the creators can easily pass on the information without updating another system. For details on what you could pull and how to do this, refer to [What does this dataset and its columns mean](https://docs.getdbt.com/docs/dbt-cloud-apis/discovery-use-cases-and-examples#what-does-this-dataset-and-its-columns-mean). - **Integration points** — Discovery API dbt Core only users will have no access to the above integration points. For dbt metadata, oftentimes our partners will create a dbt Core integration by using the [dbt artifact](https://www.getdbt.com/product/semantic-layer/) files generated by each run and provided by the user. With the Discovery API, we are providing a dynamic way to get the latest information parsed out for you. From 4fa9d370ce42701a72dcd8b457e87cdc5e2fdbaa Mon Sep 17 00:00:00 2001 From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> Date: Wed, 20 Dec 2023 15:14:13 -0800 Subject: [PATCH 102/204] Update website/blog/2023-12-20-partner-integration-guide.md --- website/blog/2023-12-20-partner-integration-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/2023-12-20-partner-integration-guide.md b/website/blog/2023-12-20-partner-integration-guide.md index 1c1ea8f893c..b546f258f6c 100644 --- a/website/blog/2023-12-20-partner-integration-guide.md +++ b/website/blog/2023-12-20-partner-integration-guide.md @@ -23,7 +23,7 @@ Here I'll cover how to get started, potential use cases you want to solve for, a ## New to dbt Cloud? -If you're new to dbt and dbt Cloud, we recommend you and your software developers try our [Getting Started Quickstarts](https://docs.getdbt.com/guides) after reading [What is dbt?](https://docs.getdbt.com/docs/introduction). The documentation will help you familiarize yourself with how our users interact with dbt. By going through this, you will also create a sample dbt project to test your integration. +If you're new to dbt and dbt Cloud, we recommend you and your software developers try our [Getting Started Quickstarts](https://docs.getdbt.com/guides) after reading [What is dbt](https://docs.getdbt.com/docs/introduction). The documentation will help you familiarize yourself with how our users interact with dbt. By going through this, you will also create a sample dbt project to test your integration. If you require a partner dbt Cloud account to test on, we can upgrade an existing account or a trial account. This account may only be used for development, training, and demonstration purposes. 
Please contact your partner manager if you're interested and provide the account ID (provided in the URL). Our partner account includes all of the enterprise level functionality and can be provided with a signed partnerships agreement. From 2ef07fa1fbd0e8259a0eb7c9a4c923ccf9fad102 Mon Sep 17 00:00:00 2001 From: Ly Nguyen Date: Wed, 20 Dec 2023 15:22:29 -0800 Subject: [PATCH 103/204] Feedback --- website/snippets/_cloud-environments-info.md | 1 + 1 file changed, 1 insertion(+) diff --git a/website/snippets/_cloud-environments-info.md b/website/snippets/_cloud-environments-info.md index 6400b29ea9f..50f321cfd96 100644 --- a/website/snippets/_cloud-environments-info.md +++ b/website/snippets/_cloud-environments-info.md @@ -44,6 +44,7 @@ dbt Cloud caches your project's Git repo after each successful run and retains i Below lists the situations when dbt Cloud uses the cached copy: +- Outages from third-party services (for example, your Git provider) - Git authentication fails. - There are syntax errors in the `packages.yml` file. To catch these errors sooner, set up and use [continuous integration (CI)](/docs/deploy/continuous-integration). - A package is incompatible with the dbt version being used. To catch this incompatibility sooner, set up and use [continuous integration (CI)](/docs/deploy/continuous-integration). From 67e27374444e3e4151a3d81e3b1411aab4b04e5d Mon Sep 17 00:00:00 2001 From: Ly Nguyen Date: Wed, 20 Dec 2023 15:23:20 -0800 Subject: [PATCH 104/204] Missing a period --- website/snippets/_cloud-environments-info.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/snippets/_cloud-environments-info.md b/website/snippets/_cloud-environments-info.md index 50f321cfd96..01f4d8eb35e 100644 --- a/website/snippets/_cloud-environments-info.md +++ b/website/snippets/_cloud-environments-info.md @@ -44,7 +44,7 @@ dbt Cloud caches your project's Git repo after each successful run and retains i Below lists the situations when dbt Cloud uses the cached copy: -- Outages from third-party services (for example, your Git provider) +- Outages from third-party services (for example, your Git provider). - Git authentication fails. - There are syntax errors in the `packages.yml` file. To catch these errors sooner, set up and use [continuous integration (CI)](/docs/deploy/continuous-integration). - A package is incompatible with the dbt version being used. To catch this incompatibility sooner, set up and use [continuous integration (CI)](/docs/deploy/continuous-integration). From 7f5890dc973e090483b92f9d95646d90541e4309 Mon Sep 17 00:00:00 2001 From: Jordan Stein Date: Wed, 20 Dec 2023 16:40:01 -0800 Subject: [PATCH 105/204] update group by items call out --- .../dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md index 96a1e20fc6b..1b01a93fefd 100644 --- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md +++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md @@ -35,6 +35,7 @@ The following are updates for the dbt Semantic Layer and MetricFlow: ## New features -- Simplified group-by-item requests — Improved support for ambiguous group-by-item resolution. Previously, you need to specify them in detail, like `guest__listing__created_at__month`. 
This indicates a monthly `created_at` time dimension, linked by `guest` and `listing` entities. +- Simplified group-by-item requests. We updated the way the MetricFlow query resolver finds queryable dimensions for metrics. The main improvements are: + - If the grain of a time dimension in a query is not specified, then the grain of the requested time dimension is resolved to be the finest grain that is available for the queried metrics. For example, say you have two metrics: revenue, which has a weekly grain, and orders, which has a daily grain. If you query these metrics like this: `dbt sl query --metrics revenue,orders --group-by metric_time`, MetricFlow will automatically query these metrics at a weekly grain. - Now you can use a shorter form, like ` listing__created_at__month`. If there's only one way to interpret this, dbt will resolve it automatically. If multiple interpretations are possible, dbt will ask for more details from the user. +- In a metric filter, if an ambiguous time dimension does not specify the grain, and all semantic models that are used to compute the metric define the time dimension with the same grain, MetricFlow should assume the specific time dimension is that grain. For example, say you have two metrics, revenue and users, which are both daily. You can query these metrics without specifying the time dimension grain in the filter, for example: `mf query --metrics users,revenue --group-by metric_time --where "{{ TimeDimension('metric_time') }} = '2017-07-30' "` From 1dad0f0b023eef9f75af30fbf97d052acef808de Mon Sep 17 00:00:00 2001 From: Damian Owsianny Date: Thu, 21 Dec 2023 10:37:14 +0100 Subject: [PATCH 106/204] Add on_table_exists 'replace' option to Starburst/Trino --- website/docs/reference/resource-configs/trino-configs.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/website/docs/reference/resource-configs/trino-configs.md b/website/docs/reference/resource-configs/trino-configs.md index 21df13feac4..9ee62959f76 100644 --- a/website/docs/reference/resource-configs/trino-configs.md +++ b/website/docs/reference/resource-configs/trino-configs.md @@ -97,8 +97,9 @@ The `dbt-trino` adapter supports these modes in `table` materialization, which y - `rename` — Creates an intermediate table, renames the target table to the backup one, and renames the intermediate table to the target one. - `drop` — Drops and re-creates a table. This overcomes the table rename limitation in AWS Glue. +- `replace` — Replaces a table using the `CREATE OR REPLACE` clause. Support for table replacement varies across connectors. Refer to the connector documentation for details. -The recommended `table` materialization uses `on_table_exists = 'rename'` and is also the default. You can change this default configuration by editing _one_ of these files: +If `CREATE OR REPLACE` is supported by the underlying connector, `replace` is the recommended option. Otherwise, the recommended `table` materialization uses `on_table_exists = 'rename'` and is also the default. 
You can change this default configuration by editing _one_ of these files: - the SQL file for your model - the `dbt_project.yml` configuration file From 78f1667ee011f9b08fff78b6cb05f1b1c61454ee Mon Sep 17 00:00:00 2001 From: Damian Owsianny Date: Thu, 21 Dec 2023 10:38:08 +0100 Subject: [PATCH 107/204] Add new author of Starburst/Trino --- website/docs/docs/core/connect-data-platform/trino-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/trino-setup.md b/website/docs/docs/core/connect-data-platform/trino-setup.md index a7dc658358f..bb36bb11a01 100644 --- a/website/docs/docs/core/connect-data-platform/trino-setup.md +++ b/website/docs/docs/core/connect-data-platform/trino-setup.md @@ -4,7 +4,7 @@ description: "Read this guide to learn about the Starburst/Trino warehouse setup id: "trino-setup" meta: maintained_by: Starburst Data, Inc. - authors: Marius Grama, Przemek Denkiewicz, Michiel de Smet + authors: Marius Grama, Przemek Denkiewicz, Michiel de Smet, Damian Owsianny github_repo: 'starburstdata/dbt-trino' pypi_package: 'dbt-trino' min_core_version: 'v0.20.0' From 5786dcdfde5ab99b5edbaaea895c7cc6e874f6d5 Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Thu, 21 Dec 2023 10:33:31 -0500 Subject: [PATCH 108/204] add hover function + disclaimer css --- website/src/components/faqs/index.js | 53 +++++++++++++------ website/src/components/faqs/styles.module.css | 6 +++ 2 files changed, 43 insertions(+), 16 deletions(-) diff --git a/website/src/components/faqs/index.js b/website/src/components/faqs/index.js index 52c4573d883..58b59227cfb 100644 --- a/website/src/components/faqs/index.js +++ b/website/src/components/faqs/index.js @@ -3,10 +3,10 @@ import styles from './styles.module.css'; import { usePluginData } from '@docusaurus/useGlobalData'; function FAQ({ path, alt_header = null }) { const [isOn, setOn] = useState(false); - const [filePath, setFilePath] = useState(path) - const [fileContent, setFileContent] = useState({}) + const [filePath, setFilePath] = useState(path); + const [fileContent, setFileContent] = useState({}); + const [hoverTimeout, setHoverTimeout] = useState(null); // Get all faq file paths from plugin const { faqFiles } = usePluginData('docusaurus-build-global-data-plugin'); @@ -37,24 +37,45 @@ function FAQ({ path, alt_header = null }) { } }, [filePath]) - const toggleOn = function () { - setOn(!isOn); + const handleMouseEnter = () => { + setHoverTimeout(setTimeout(() => { + setOn(true); + }, 500)); + }; + + const handleMouseLeave = () => { + if (!isOn) { + clearTimeout(hoverTimeout); + setOn(false); } +}; + + useEffect(() => { + return () => { + if (hoverTimeout) { + clearTimeout(hoverTimeout); + } + }; + }, [hoverTimeout]); + + const toggleOn = () => { + if (hoverTimeout) { + clearTimeout(hoverTimeout); + } + setOn(!isOn); + }; return ( -
+
- -   - {alt_header || fileContent?.meta && fileContent.meta.title} - -
- {fileContent?.contents && fileContent.contents} + +  {alt_header || (fileContent?.meta && fileContent.meta.title)} + Hover to view + +
+ {fileContent?.contents}
-
+
); } diff --git a/website/src/components/faqs/styles.module.css b/website/src/components/faqs/styles.module.css index e19156a3a7b..9ce7d4d8a40 100644 --- a/website/src/components/faqs/styles.module.css +++ b/website/src/components/faqs/styles.module.css @@ -24,6 +24,12 @@ filter: invert(1); } +:local(.disclaimer) { + font-size: 0.8em; + color: #666; + margin-left: 10px; /* Adjust as needed */ +} + :local(.body) { margin-left: 2em; margin-bottom: 10px; From 79202d5bf28047c8593161d553bd16a8e39ac719 Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 21 Dec 2023 11:55:09 -0500 Subject: [PATCH 109/204] Adding multi cell migration page --- website/docs/docs/cloud/migration.md | 41 ++++++++++++++++++++++++++++ 1 file changed, 41 insertions(+) create mode 100644 website/docs/docs/cloud/migration.md diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md new file mode 100644 index 00000000000..7fa91000389 --- /dev/null +++ b/website/docs/docs/cloud/migration.md @@ -0,0 +1,41 @@ +--- +title: "Multi-cell migration checklist" +id: migration +description: "Prepare for account migration to AWS cell based architecture." +pagination_next: null +pagination_prev: null +--- + +dbt Labs is in the process of migrating our U.S. based multi-tenant accounts to [AWS cell-based architecture](https://docs.aws.amazon.com/wellarchitected/latest/reducing-scope-of-impact-with-cell-based-architecture/what-is-a-cell-based-architecture.html), a critical component of the [AWS well-architected framework](https://aws.amazon.com/architecture/well-architected/?wa-lens-whitepapers.sort-by=item.additionalFields.sortDate&wa-lens-whitepapers.sort-order=desc&wa-guidance-whitepapers.sort-by=item.additionalFields.sortDate&wa-guidance-whitepapers.sort-order=desc). The benefits of the cell-based architecture will improve the performance, reliability, and security of your dbt Cloud environment, but there is some preparation required to ensure a successful migration. + +This document outlines the steps that you must take to prevent service disruptions before your environment is migrated over to the cell-based architecture. This will impact areas such as login, IP restrictions, and API access. + +### What’s changing? Pre-migration checklist. + +Prior to your migration date, your account admin will need to make some changes to your dbt Cloud account. + +If your account has been scheduled for migration, upon login, you will see a banner indicating your migration date. If you do not see a banner, you do not need to take any action. + +1. **IP Addresses** — dbt Cloud has new IPs that will be used to access your warehouse after the migration. Make sure to allow inbound traffic from these IPs in your firewall, and include it in any database grants. All six of the IPs below should be added to allowlists. + * Old IPs: `52.45.144.63`, `54.81.134.249`, `52.22.161.231` + * New IPs: `52.3.77.232`, `3.214.191.130`, `34.233.79.135` +2. **APIs and integrations** — Each dbt Cloud account will be allocated a static Access URL like: `aa000.us1.dbt.com`. You should begin migrating your API access and partner integrations to use the new static subdomain as soon as possible. You can find your Access URL on: + * Any page where you generate or manage API tokens. + * The **Account Settings** > **Account page**. 
+ + :::important Multiple account access + Each account for which you have access will have a different, dedicated [Access URL](https://next.docs.getdbt.com/docs/cloud/about-cloud/access-regions-ip-addresses#accessing-your-account)! + ::: + +3. **IDE sessions** — Any uncommitted changes in the IDE may be lost during the migration process. We _strongly_ encourage you to commit all changes in the IDE before your scheduled migration time. +4. **User invitations** — Any pending user invitations will be invalidated during the migration. You can re-send the invitations once the migration is complete. +5. **Git Integrations** — Integrations with Github, Gitlab, and Azure DevOps will need to be manually updated. We are not migrating any accounts using these integrations at this time. If you are using one of these integrations and your account is scheduled for migration, please contact support and we will delay your migration. +6. **SSO Integrations** — Integrations with SSO IdPs will need to be manually updated. We are not migrating any accounts using SSO at this time; if you are using one of these integrations and your account is scheduled for migration, please contact support, and we will delay your migration. + +### Post-migration + +After migration, if you completed all of the checklist items above, your dbt Cloud resources and jobs will continue to work as they did before. + +You have the option to log into dbt Cloud at a different URL: + * If you were previously logging in at `cloud.getdbt.com`, you should instead plan to login at `us1.dbt.com`. The original URL will still work, but you’ll have to click through to be redirected upon login. + * You may also log in directly with your account’s unique [Access URL](https://next.docs.getdbt.com/docs/cloud/about-cloud/access-regions-ip-addresses#accessing-your-account). \ No newline at end of file From ecdbc83acf9c69381ddc1ca1af7fed781a69c620 Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 21 Dec 2023 13:21:55 -0500 Subject: [PATCH 110/204] Update website/docs/docs/cloud/migration.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/docs/docs/cloud/migration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md index 7fa91000389..1cb2e33c364 100644 --- a/website/docs/docs/cloud/migration.md +++ b/website/docs/docs/cloud/migration.md @@ -10,7 +10,7 @@ dbt Labs is in the process of migrating our U.S. based multi-tenant accounts to This document outlines the steps that you must take to prevent service disruptions before your environment is migrated over to the cell-based architecture. This will impact areas such as login, IP restrictions, and API access. -### What’s changing? Pre-migration checklist. +## Premigration checklist Prior to your migration date, your account admin will need to make some changes to your dbt Cloud account. 
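A rough illustration of checklist item 1 above: the snippet below merges the old and new dbt Cloud IPs into a single allowlist string. This is only a sketch, not an official procedure; the IP values come from the checklist itself, and how you apply the result depends on your firewall or warehouse tooling.

```python
# Combine the old and new dbt Cloud IPs for the migration window.
# Both sets must stay allowed until the migration completes.
OLD_IPS = ["52.45.144.63", "54.81.134.249", "52.22.161.231"]
NEW_IPS = ["52.3.77.232", "3.214.191.130", "34.233.79.135"]

allowlist = sorted(set(OLD_IPS + NEW_IPS))
print(", ".join(allowlist))  # paste into your firewall rule or network policy
```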
From 4b9fac73daa280f68843e8d315932189b010228c Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 21 Dec 2023 13:22:10 -0500 Subject: [PATCH 111/204] Update website/docs/docs/cloud/migration.md Co-authored-by: Connor McArthur --- website/docs/docs/cloud/migration.md | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md index 1cb2e33c364..6d207f5b613 100644 --- a/website/docs/docs/cloud/migration.md +++ b/website/docs/docs/cloud/migration.md @@ -6,7 +6,11 @@ pagination_next: null pagination_prev: null --- -dbt Labs is in the process of migrating our U.S. based multi-tenant accounts to [AWS cell-based architecture](https://docs.aws.amazon.com/wellarchitected/latest/reducing-scope-of-impact-with-cell-based-architecture/what-is-a-cell-based-architecture.html), a critical component of the [AWS well-architected framework](https://aws.amazon.com/architecture/well-architected/?wa-lens-whitepapers.sort-by=item.additionalFields.sortDate&wa-lens-whitepapers.sort-order=desc&wa-guidance-whitepapers.sort-by=item.additionalFields.sortDate&wa-guidance-whitepapers.sort-order=desc). The benefits of the cell-based architecture will improve the performance, reliability, and security of your dbt Cloud environment, but there is some preparation required to ensure a successful migration. +dbt Labs is in the process of migrating dbt Cloud to a new **cell-based architecture**. This architecture will be the foundation of dbt Cloud for years to come, and will bring improved **scalability**, **reliability**, and **security** to all customers and users of dbt Cloud. + +There is some preparation required to ensure a successful migration. + +Migrations are being scheduled on a per-account basis. **If you have not received any communication (either via a banner, or via an email) about a migration date, you do not need to take any action at this time.** Our team will share a specific migration date with you, with appropriate advance notice, before we complete any migration steps in the dbt Cloud backend. This document outlines the steps that you must take to prevent service disruptions before your environment is migrated over to the cell-based architecture. This will impact areas such as login, IP restrictions, and API access. From f6fe368112e85737e4ee9cfcf44af1fe98283ab6 Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 21 Dec 2023 13:22:29 -0500 Subject: [PATCH 112/204] Update website/docs/docs/cloud/migration.md --- website/docs/docs/cloud/migration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md index 6d207f5b613..f989536abaf 100644 --- a/website/docs/docs/cloud/migration.md +++ b/website/docs/docs/cloud/migration.md @@ -14,7 +14,7 @@ Migrations are being scheduled on a per-account basis. **If you have not receive This document outlines the steps that you must take to prevent service disruptions before your environment is migrated over to the cell-based architecture. This will impact areas such as login, IP restrictions, and API access. -## Premigration checklist +## Pre-migration checklist Prior to your migration date, your account admin will need to make some changes to your dbt Cloud account. 
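To make checklist item 2 (APIs and integrations) concrete, here is a minimal sketch of pointing an Admin API request at the new static access URL. The hostname follows the `aa000.us1.dbt.com` pattern from the checklist; the account ID and service token are placeholders, and the response handling assumes the standard v2 `data` envelope.

```python
import requests

ACCESS_URL = "https://aa000.us1.dbt.com"  # placeholder: your account's access URL
ACCOUNT_ID = 12345                        # placeholder account ID
SERVICE_TOKEN = "<service-token>"         # a service token, not a user bearer token

# List the account's jobs through the per-account access URL.
response = requests.get(
    f"{ACCESS_URL}/api/v2/accounts/{ACCOUNT_ID}/jobs/",
    headers={"Authorization": f"Token {SERVICE_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
for job in response.json()["data"]:
    print(job["id"], job["name"])
```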
From 8774c56f87c5908b25e6273f6c8cf3357fb4d298 Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 21 Dec 2023 13:22:47 -0500 Subject: [PATCH 113/204] Update website/docs/docs/cloud/migration.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/docs/docs/cloud/migration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md index f989536abaf..4f33a67565e 100644 --- a/website/docs/docs/cloud/migration.md +++ b/website/docs/docs/cloud/migration.md @@ -16,7 +16,7 @@ This document outlines the steps that you must take to prevent service disruptio ## Pre-migration checklist -Prior to your migration date, your account admin will need to make some changes to your dbt Cloud account. +Prior to your migration date, your dbt Cloud account admin will need to make some changes to your account. If your account has been scheduled for migration, upon login, you will see a banner indicating your migration date. If you do not see a banner, you do not need to take any action. From cd8fcea8dce4644988c78877a7384edf247a863f Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 21 Dec 2023 13:24:14 -0500 Subject: [PATCH 114/204] Apply suggestions from code review Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/docs/docs/cloud/migration.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md index 4f33a67565e..ead08ff1a82 100644 --- a/website/docs/docs/cloud/migration.md +++ b/website/docs/docs/cloud/migration.md @@ -23,18 +23,18 @@ If your account has been scheduled for migration, upon login, you will see a ban 1. **IP Addresses** — dbt Cloud has new IPs that will be used to access your warehouse after the migration. Make sure to allow inbound traffic from these IPs in your firewall, and include it in any database grants. All six of the IPs below should be added to allowlists. * Old IPs: `52.45.144.63`, `54.81.134.249`, `52.22.161.231` * New IPs: `52.3.77.232`, `3.214.191.130`, `34.233.79.135` -2. **APIs and integrations** — Each dbt Cloud account will be allocated a static Access URL like: `aa000.us1.dbt.com`. You should begin migrating your API access and partner integrations to use the new static subdomain as soon as possible. You can find your Access URL on: +2. **APIs and integrations** — Each dbt Cloud account will be allocated a static access URL like: `aa000.us1.dbt.com`. You should begin migrating your API access and partner integrations to use the new static subdomain as soon as possible. You can find your access URL on: * Any page where you generate or manage API tokens. * The **Account Settings** > **Account page**. :::important Multiple account access - Each account for which you have access will have a different, dedicated [Access URL](https://next.docs.getdbt.com/docs/cloud/about-cloud/access-regions-ip-addresses#accessing-your-account)! + Be careful, each account that you have access to will have a different, dedicated [access URL](https://next.docs.getdbt.com/docs/cloud/about-cloud/access-regions-ip-addresses#accessing-your-account). ::: -3. **IDE sessions** — Any uncommitted changes in the IDE may be lost during the migration process. We _strongly_ encourage you to commit all changes in the IDE before your scheduled migration time. -4. 
**User invitations** — Any pending user invitations will be invalidated during the migration. You can re-send the invitations once the migration is complete. -5. **Git Integrations** — Integrations with Github, Gitlab, and Azure DevOps will need to be manually updated. We are not migrating any accounts using these integrations at this time. If you are using one of these integrations and your account is scheduled for migration, please contact support and we will delay your migration. -6. **SSO Integrations** — Integrations with SSO IdPs will need to be manually updated. We are not migrating any accounts using SSO at this time; if you are using one of these integrations and your account is scheduled for migration, please contact support, and we will delay your migration. +3. **IDE sessions** — Any uncommitted changes in the IDE might be lost during the migration process. dbt Labs _strongly_ encourages you to commit all changes in the IDE before your scheduled migration time. +4. **User invitations** — Any pending user invitations will be invalidated during the migration. You can resend the invitations once the migration is complete. +5. **Git integrations** — Integrations with GitHub, GitLab, and Azure DevOps will need to be manually updated. dbt Labs will not be migrating any accounts using these integrations at this time. If you're using one of these integrations and your account is scheduled for migration, please contact support and we will delay your migration. +6. **SSO integrations** — Integrations with SSO identity providers (IdPs) will need to be manually updated. dbt Labs will not be migrating any accounts using SSO at this time. If you're using one of these integrations and your account is scheduled for migration, please contact support and we will delay your migration. ### Post-migration From 67e3fcbc6a6402f5f3aa3c660f33c8e911f03264 Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 21 Dec 2023 13:24:54 -0500 Subject: [PATCH 115/204] Update website/docs/docs/cloud/migration.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/docs/docs/cloud/migration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md index ead08ff1a82..6a957d57127 100644 --- a/website/docs/docs/cloud/migration.md +++ b/website/docs/docs/cloud/migration.md @@ -36,7 +36,7 @@ If your account has been scheduled for migration, upon login, you will see a ban 5. **Git integrations** — Integrations with GitHub, GitLab, and Azure DevOps will need to be manually updated. dbt Labs will not be migrating any accounts using these integrations at this time. If you're using one of these integrations and your account is scheduled for migration, please contact support and we will delay your migration. 6. **SSO integrations** — Integrations with SSO identity providers (IdPs) will need to be manually updated. dbt Labs will not be migrating any accounts using SSO at this time. If you're using one of these integrations and your account is scheduled for migration, please contact support and we will delay your migration. -### Post-migration +## Post-migration After migration, if you completed all of the checklist items above, your dbt Cloud resources and jobs will continue to work as they did before. 
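Once the migration completes, a quick way to verify that integrations were actually moved off the legacy hostname is to scan stored configuration for it. The following is a sketch only, assuming your integration configs live under a local `integrations/` directory as YAML files; both that directory and the new hostname below are placeholders.

```python
from pathlib import Path

OLD_HOST = "cloud.getdbt.com"
NEW_HOST = "aa000.us1.dbt.com"  # placeholder: your account's unique access URL

# Flag any config file that still references the legacy hostname.
for config in Path("integrations").rglob("*.yml"):
    if OLD_HOST in config.read_text():
        print(f"{config}: still references {OLD_HOST}; repoint it at {NEW_HOST}")
```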
From b697ecdc9cd195c0f7e927aeb5e31b9a99bd456f Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 21 Dec 2023 13:26:35 -0500 Subject: [PATCH 116/204] Update website/docs/docs/cloud/migration.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/docs/docs/cloud/migration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md index 6a957d57127..8c8a3375b61 100644 --- a/website/docs/docs/cloud/migration.md +++ b/website/docs/docs/cloud/migration.md @@ -42,4 +42,4 @@ After migration, if you completed all of the checklist items above, your dbt Clo You have the option to log into dbt Cloud at a different URL: * If you were previously logging in at `cloud.getdbt.com`, you should instead plan to login at `us1.dbt.com`. The original URL will still work, but you’ll have to click through to be redirected upon login. - * You may also log in directly with your account’s unique [Access URL](https://next.docs.getdbt.com/docs/cloud/about-cloud/access-regions-ip-addresses#accessing-your-account). \ No newline at end of file + * You may also log in directly with your account’s unique [access URL](https://next.docs.getdbt.com/docs/cloud/about-cloud/access-regions-ip-addresses#accessing-your-account). \ No newline at end of file From 72c55578ba3d813e6eff38669e6104d3c7c5d17b Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 21 Dec 2023 13:27:04 -0500 Subject: [PATCH 117/204] Update website/docs/docs/cloud/migration.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/docs/docs/cloud/migration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md index 8c8a3375b61..2081a0a3096 100644 --- a/website/docs/docs/cloud/migration.md +++ b/website/docs/docs/cloud/migration.md @@ -38,7 +38,7 @@ If your account has been scheduled for migration, upon login, you will see a ban ## Post-migration -After migration, if you completed all of the checklist items above, your dbt Cloud resources and jobs will continue to work as they did before. +After migration, if you completed all the [Pre-migration checklist](#pre-migration-checklist) items, your dbt Cloud resources and jobs will continue to work as they did before. You have the option to log into dbt Cloud at a different URL: * If you were previously logging in at `cloud.getdbt.com`, you should instead plan to login at `us1.dbt.com`. The original URL will still work, but you’ll have to click through to be redirected upon login. 
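Circling back to the Discovery API integration point from the partner guide earlier in this series: API consumers should migrate those calls to the new URLs as well. Below is a minimal sketch of an environment-level query. The metadata host, environment ID, token, and exact query shape are assumptions based on the environment schema described earlier, so verify them against the Discovery API documentation before relying on them.

```python
import requests

METADATA_URL = "https://metadata.cloud.getdbt.com/graphql"  # assumed US multi-tenant host
ENVIRONMENT_ID = 218762                                     # placeholder environment ID
SERVICE_TOKEN = "<service-token>"                           # placeholder token

# Ask the environment-level schema for the latest applied models.
QUERY = """
query Models($environmentId: BigInt!, $first: Int!) {
  environment(id: $environmentId) {
    applied {
      models(first: $first) {
        edges { node { name description } }
      }
    }
  }
}
"""

response = requests.post(
    METADATA_URL,
    json={"query": QUERY, "variables": {"environmentId": ENVIRONMENT_ID, "first": 10}},
    headers={"Authorization": f"Bearer {SERVICE_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```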
From 01c1712e57c966d700ba75ca1054ec9aafb4fe3e Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 21 Dec 2023 13:27:23 -0500 Subject: [PATCH 118/204] Apply suggestions from code review Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/docs/docs/cloud/migration.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md index 2081a0a3096..bd7cbffe913 100644 --- a/website/docs/docs/cloud/migration.md +++ b/website/docs/docs/cloud/migration.md @@ -18,9 +18,9 @@ This document outlines the steps that you must take to prevent service disruptio Prior to your migration date, your dbt Cloud account admin will need to make some changes to your account. -If your account has been scheduled for migration, upon login, you will see a banner indicating your migration date. If you do not see a banner, you do not need to take any action. +If your account is scheduled for migration, you will see a banner indicating your migration date when you log in. If you don't see a banner, you don't need to take any action. -1. **IP Addresses** — dbt Cloud has new IPs that will be used to access your warehouse after the migration. Make sure to allow inbound traffic from these IPs in your firewall, and include it in any database grants. All six of the IPs below should be added to allowlists. +1. **IP addresses** — dbt Cloud will be using new IPs to access your warehouse after the migration. Make sure to allow inbound traffic from these IPs in your firewall and include it in any database grants. All six of the IPs below should be added to allowlists. * Old IPs: `52.45.144.63`, `54.81.134.249`, `52.22.161.231` * New IPs: `52.3.77.232`, `3.214.191.130`, `34.233.79.135` 2. **APIs and integrations** — Each dbt Cloud account will be allocated a static access URL like: `aa000.us1.dbt.com`. You should begin migrating your API access and partner integrations to use the new static subdomain as soon as possible. You can find your access URL on: From c123eb1542fa96e3fc503c7cae66f0e04ec92f80 Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 21 Dec 2023 13:33:22 -0500 Subject: [PATCH 119/204] Update website/docs/docs/cloud/migration.md --- website/docs/docs/cloud/migration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md index bd7cbffe913..58b4ed8c530 100644 --- a/website/docs/docs/cloud/migration.md +++ b/website/docs/docs/cloud/migration.md @@ -10,7 +10,7 @@ dbt Labs is in the process of migrating dbt Cloud to a new **cell-based architec There is some preparation required to ensure a successful migration. -Migrations are being scheduled on a per-account basis. **If you have not received any communication (either via a banner, or via an email) about a migration date, you do not need to take any action at this time.** Our team will share a specific migration date with you, with appropriate advance notice, before we complete any migration steps in the dbt Cloud backend. +Migrations are being scheduled on a per-account basis. 
_If you have not received any communication (either via a banner or email notification) about a migration date, you do not need to take any action at this time._ dbt Labs will share migration date information with you, with appropriate advance notice, before we complete any migration steps in the dbt Cloud backend. This document outlines the steps that you must take to prevent service disruptions before your environment is migrated over to the cell-based architecture. This will impact areas such as login, IP restrictions, and API access. From 465238eacee54ae3dc6174d20548c6bd70bcb6c7 Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 21 Dec 2023 13:36:06 -0500 Subject: [PATCH 120/204] Update website/docs/docs/cloud/migration.md --- website/docs/docs/cloud/migration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md index 58b4ed8c530..0ec9e4c4d26 100644 --- a/website/docs/docs/cloud/migration.md +++ b/website/docs/docs/cloud/migration.md @@ -40,6 +40,6 @@ If your account is scheduled for migration, you will see a banner indicating you After migration, if you completed all the [Pre-migration checklist](#pre-migration-checklist) items, your dbt Cloud resources and jobs will continue to work as they did before. -You have the option to log into dbt Cloud at a different URL: +You have the option to log in to dbt Cloud at a different URL: * If you were previously logging in at `cloud.getdbt.com`, you should instead plan to login at `us1.dbt.com`. The original URL will still work, but you’ll have to click through to be redirected upon login. * You may also log in directly with your account’s unique [access URL](https://next.docs.getdbt.com/docs/cloud/about-cloud/access-regions-ip-addresses#accessing-your-account). \ No newline at end of file From 4c4bb5b7ef39a404ea1849d73e3f45dcb927b804 Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 21 Dec 2023 14:03:41 -0500 Subject: [PATCH 121/204] Update website/docs/docs/cloud/migration.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/docs/docs/cloud/migration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md index 0ec9e4c4d26..12b4026ff9f 100644 --- a/website/docs/docs/cloud/migration.md +++ b/website/docs/docs/cloud/migration.md @@ -10,7 +10,7 @@ dbt Labs is in the process of migrating dbt Cloud to a new **cell-based architec There is some preparation required to ensure a successful migration. -Migrations are being scheduled on a per-account basis. _If you have not received any communication (either via a banner or email notification) about a migration date, you do not need to take any action at this time._ dbt Labs will share migration date information with you, with appropriate advance notice, before we complete any migration steps in the dbt Cloud backend. +Migrations are being scheduled on a per-account basis. _If you haven't received any communication (either with a banner or by email) about a migration date, you don't need to take any action at this time._ dbt Labs will share migration date information with you, with appropriate advance notice, before we complete any migration steps in the dbt Cloud backend. 
This document outlines the steps that you must take to prevent service disruptions before your environment is migrated over to the cell-based architecture. This will impact areas such as login, IP restrictions, and API access. From 9fd5e551a5b2f3cbc6e242f8d2d0298fe92945a9 Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 21 Dec 2023 14:06:18 -0500 Subject: [PATCH 122/204] Update website/docs/docs/cloud/migration.md --- website/docs/docs/cloud/migration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md index 12b4026ff9f..69804aa9cd0 100644 --- a/website/docs/docs/cloud/migration.md +++ b/website/docs/docs/cloud/migration.md @@ -1,7 +1,7 @@ --- title: "Multi-cell migration checklist" id: migration -description: "Prepare for account migration to AWS cell based architecture." +description: "Prepare for account migration to AWS cell-based architecture." pagination_next: null pagination_prev: null --- From 4caa9558fc068cbab8ecd9f8a77daa3c7c6ea11a Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Thu, 21 Dec 2023 14:06:27 -0500 Subject: [PATCH 123/204] Update website/docs/docs/cloud/migration.md Co-authored-by: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> --- website/docs/docs/cloud/migration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/migration.md b/website/docs/docs/cloud/migration.md index 69804aa9cd0..0c43a287bbe 100644 --- a/website/docs/docs/cloud/migration.md +++ b/website/docs/docs/cloud/migration.md @@ -6,7 +6,7 @@ pagination_next: null pagination_prev: null --- -dbt Labs is in the process of migrating dbt Cloud to a new **cell-based architecture**. This architecture will be the foundation of dbt Cloud for years to come, and will bring improved **scalability**, **reliability**, and **security** to all customers and users of dbt Cloud. +dbt Labs is in the process of migrating dbt Cloud to a new _cell-based architecture_. This architecture will be the foundation of dbt Cloud for years to come, and will bring improved scalability, reliability, and security to all customers and users of dbt Cloud. There is some preparation required to ensure a successful migration. From b839ea5b9c749300db47d14460a912e43860ea99 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 21 Dec 2023 15:49:54 -0500 Subject: [PATCH 124/204] Update website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md --- .../dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md index 1b01a93fefd..dbdf66c8e64 100644 --- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md +++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md @@ -38,4 +38,6 @@ The following are updates for the dbt Semantic Layer and MetricFlow: - Simplified group-by-item requests. We updated the way the MetricFlow query resolver finds queryable dimensions for metrics. The main improvements ares: - If the grain of a time dimension in a query is not specified, then the grain of the requested time dimension is resolved to be the finest grain that is available for the queried metrics. 
For example, say you have two metrics; revenue which has a weekly grain and orders which has a daily grain. If you query these metrics like this: `dbt sl query --metrics revenue,orders --group-by metric_time` metricflow will automatically query these metrics at a weekly grain. -- In a metric filter, if an ambiguous time dimension does not specify the grain, and all semantic models that are used to compute the metric define the time dimension with the same grain, MetricFlow should assume the specific time dimension is that grain. For example, say I have two metrics; revenue and users which are both daily. I can query these metrics without sepcifying the time dimension grain in the filte i.e `mf query --metrics users,revenue --group-by metric_time --where "{{ TimeDimension('metric_time') }} = '2017-07-30' "` +- Assumes time dimension grain: When using a metric filter, if an ambiguous time dimension doesn't specify the grain, and all used semantic models define this time dimension with the same grain, MetricFlow now automatically assumes the time dimension to be of that grain. + - For example, if you have two daily metrics: `revenue` and `users` — you can now query these metrics without specifying the time dimension grain in the filter: `mf query --metrics users,revenue --group-by metric_time --where "{{ TimeDimension('metric_time') }} = '2017-07-30' "` + From 224ce876c19ffb86a0826c004fda0a2c2b79e2cf Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 21 Dec 2023 15:53:40 -0500 Subject: [PATCH 125/204] Update website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md --- .../dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md index dbdf66c8e64..be8e5e8e61f 100644 --- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md +++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md @@ -7,7 +7,7 @@ date: 2023-12-22 --- The dbt Labs team continues to work on adding new features, fixing bugs, and increasing reliability for the dbt Semantic Layer and MetricFlow. -Refer to the following updates and fixes for December 2023: +Refer to the following updates and fixes for December 2023. ## Bug fixes From a3c0ec7f1de71bcdb69c2433c610f9c502afebbf Mon Sep 17 00:00:00 2001 From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> Date: Thu, 21 Dec 2023 12:59:23 -0800 Subject: [PATCH 126/204] Update website/snippets/_cloud-environments-info.md Co-authored-by: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> --- website/snippets/_cloud-environments-info.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/snippets/_cloud-environments-info.md b/website/snippets/_cloud-environments-info.md index 01f4d8eb35e..c9886a2eb94 100644 --- a/website/snippets/_cloud-environments-info.md +++ b/website/snippets/_cloud-environments-info.md @@ -42,7 +42,7 @@ For improved reliability and performance on your job runs, you can enable dbt Cl dbt Cloud caches your project's Git repo after each successful run and retains it for 8 days if there are no repo updates. It caches all packages regardless of installation method and does not fetch code outside of the job runs. 
-Below lists the situations when dbt Cloud uses the cached copy: +dbt Cloud will use the cached copy of your project's Git repo under these circumstances: - Outages from third-party services (for example, your Git provider). - Git authentication fails. From 29303c3040ca9d87c9dcd3e42b32552feeb83bbf Mon Sep 17 00:00:00 2001 From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> Date: Thu, 21 Dec 2023 13:00:20 -0800 Subject: [PATCH 127/204] Update website/snippets/_cloud-environments-info.md Co-authored-by: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> --- website/snippets/_cloud-environments-info.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/snippets/_cloud-environments-info.md b/website/snippets/_cloud-environments-info.md index c9886a2eb94..aedec779a58 100644 --- a/website/snippets/_cloud-environments-info.md +++ b/website/snippets/_cloud-environments-info.md @@ -44,7 +44,7 @@ dbt Cloud caches your project's Git repo after each successful run and retains i dbt Cloud will use the cached copy of your project's Git repo under these circumstances: -- Outages from third-party services (for example, your Git provider). +- Outages from third-party services (for example, the [dbt package hub](https://hub.getdbt.com/)). - Git authentication fails. - There are syntax errors in the `packages.yml` file. To catch these errors sooner, set up and use [continuous integration (CI)](/docs/deploy/continuous-integration). - A package is incompatible with the dbt version being used. To catch this incompatibility sooner, set up and use [continuous integration (CI)](/docs/deploy/continuous-integration). From d3bcee988173b9f7b15a850ffb01dab190ff9893 Mon Sep 17 00:00:00 2001 From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> Date: Thu, 21 Dec 2023 13:04:19 -0800 Subject: [PATCH 128/204] Update website/snippets/_cloud-environments-info.md Co-authored-by: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> --- website/snippets/_cloud-environments-info.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/snippets/_cloud-environments-info.md b/website/snippets/_cloud-environments-info.md index aedec779a58..101fa5d409e 100644 --- a/website/snippets/_cloud-environments-info.md +++ b/website/snippets/_cloud-environments-info.md @@ -46,7 +46,7 @@ dbt Cloud will use the cached copy of your project's Git repo under these circum - Outages from third-party services (for example, the [dbt package hub](https://hub.getdbt.com/)). - Git authentication fails. -- There are syntax errors in the `packages.yml` file. To catch these errors sooner, set up and use [continuous integration (CI)](/docs/deploy/continuous-integration). +- There are syntax errors in the `packages.yml` file. You can set up and use [continuous integration (CI)](/docs/deploy/continuous-integration) to find these errors earlier. - A package is incompatible with the dbt version being used. To catch this incompatibility sooner, set up and use [continuous integration (CI)](/docs/deploy/continuous-integration). To enable Git repository caching, select **Account settings** from the gear menu and enable the **Repository caching** option. 
From 61630d8d673d9245e634fb88e377a29f60b76d39 Mon Sep 17 00:00:00 2001 From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> Date: Thu, 21 Dec 2023 13:07:46 -0800 Subject: [PATCH 129/204] Update website/snippets/_cloud-environments-info.md Co-authored-by: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> --- website/snippets/_cloud-environments-info.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/snippets/_cloud-environments-info.md b/website/snippets/_cloud-environments-info.md index 101fa5d409e..4dc5cfee003 100644 --- a/website/snippets/_cloud-environments-info.md +++ b/website/snippets/_cloud-environments-info.md @@ -47,7 +47,7 @@ dbt Cloud will use the cached copy of your project's Git repo under these circum - Outages from third-party services (for example, the [dbt package hub](https://hub.getdbt.com/)). - Git authentication fails. - There are syntax errors in the `packages.yml` file. You can set up and use [continuous integration (CI)](/docs/deploy/continuous-integration) to find these errors earlier. -- A package is incompatible with the dbt version being used. To catch this incompatibility sooner, set up and use [continuous integration (CI)](/docs/deploy/continuous-integration). +- If a package doesn't work with the current dbt version. You can set up and use [continuous integration (CI)](/docs/deploy/continuous-integration) to identify this issue sooner. To enable Git repository caching, select **Account settings** from the gear menu and enable the **Repository caching** option. From eba18680f6a13fa9ab960af8ec67a9c402098c17 Mon Sep 17 00:00:00 2001 From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> Date: Thu, 21 Dec 2023 13:08:06 -0800 Subject: [PATCH 130/204] Update website/snippets/_cloud-environments-info.md --- website/snippets/_cloud-environments-info.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/snippets/_cloud-environments-info.md b/website/snippets/_cloud-environments-info.md index 4dc5cfee003..6b6eb1c2761 100644 --- a/website/snippets/_cloud-environments-info.md +++ b/website/snippets/_cloud-environments-info.md @@ -46,7 +46,7 @@ dbt Cloud will use the cached copy of your project's Git repo under these circum - Outages from third-party services (for example, the [dbt package hub](https://hub.getdbt.com/)). - Git authentication fails. -- There are syntax errors in the `packages.yml` file. You can set up and use [continuous integration (CI)](/docs/deploy/continuous-integration) to find these errors earlier. +- There are syntax errors in the `packages.yml` file. You can set up and use [continuous integration (CI)](/docs/deploy/continuous-integration) to find these errors sooner. - If a package doesn't work with the current dbt version. You can set up and use [continuous integration (CI)](/docs/deploy/continuous-integration) to identify this issue sooner. To enable Git repository caching, select **Account settings** from the gear menu and enable the **Repository caching** option. 
From fe96ae5f9c3999deabf2c5e91476fe33fd0ad182 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 21 Dec 2023 16:21:12 -0500 Subject: [PATCH 131/204] Update website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md Co-authored-by: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> --- .../how-we-build-our-metrics/semantic-layer-2-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md index 275395f6b18..0eb3f26ae2c 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md @@ -28,7 +28,7 @@ Next, before you start writing code, you need to install MetricFlow: - Download MetricFlow as an extension of a dbt adapter from PyPI (dbt Core users only). The MetricFlow is compatible with Python versions 3.8 through 3.11. - - **Note**, you'll need to manage versioning between dbt Core, your adapter, and MetricFlow. + - **Note**: You'll need to manage versioning between dbt Core, your adapter, and MetricFlow. - We'll use pip to install MetricFlow and our dbt adapter: ```shell From 294499e3df10f7440c5c14355c1eadff2b8b8bc2 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 21 Dec 2023 16:21:29 -0500 Subject: [PATCH 132/204] Update website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md Co-authored-by: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> --- .../how-we-build-our-metrics/semantic-layer-2-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md index 0eb3f26ae2c..7c74b69d859 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md @@ -19,7 +19,7 @@ Next, before you start writing code, you need to install MetricFlow: -- [dbt Cloud CLI](/docs/cloud/cloud-cli-installation) — MetricFlow commands are embedded in the dbt Cloud CLI. This means you can immediately run them once you install the dbt Cloud CLI. Using dbt Cloud means you won't need to manage versioning — your dbt Cloud account will automatically manage the versioning. +- [dbt Cloud CLI](/docs/cloud/cloud-cli-installation) — MetricFlow commands are embedded in the dbt Cloud CLI. You can immediately run them once you install the dbt Cloud CLI. Using dbt Cloud means you won't need to manage versioning — your dbt Cloud account will automatically manage the versioning. - [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud) — You can create metrics using MetricFlow in the dbt Cloud IDE. However, support for running MetricFlow commands in the IDE will be available soon. 
From e225dad6e8fac7fab7d2d35f87debe4bfda34951 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 21 Dec 2023 16:22:15 -0500 Subject: [PATCH 133/204] Update website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md Co-authored-by: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> --- .../how-we-build-our-metrics/semantic-layer-2-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md index 7c74b69d859..5fea4fe8695 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md @@ -50,7 +50,7 @@ python -m pip install "dbt-metricflow[adapter name]" git checkout start-here ``` -For more information, refer to the [MetricFlow commands](/docs/build/metricflow-commands) or a [quickstart guides](/guides) to get more familiar with setting up a dbt project. +For more information, refer to the [MetricFlow commands](/docs/build/metricflow-commands) or the [quickstart guides](/guides) to get more familiar with setting up a dbt project. ## Basic commands From 96b59393e8c8256eccd7a1faf3ab3e928a38f1b6 Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Thu, 21 Dec 2023 16:39:32 -0500 Subject: [PATCH 134/204] fix pagination --- .../how-we-build-our-metrics/semantic-layer-1-intro.md | 2 ++ .../how-we-build-our-metrics/semantic-layer-2-setup.md | 1 + .../semantic-layer-3-build-semantic-models.md | 1 + .../how-we-build-our-metrics/semantic-layer-4-build-metrics.md | 1 + .../semantic-layer-5-refactor-a-mart.md | 1 + .../semantic-layer-6-advanced-metrics.md | 1 + .../how-we-build-our-metrics/semantic-layer-7-conclusion.md | 1 + 7 files changed, 8 insertions(+) diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-1-intro.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-1-intro.md index ee3d4262882..59bdc41a705 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-1-intro.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-1-intro.md @@ -2,6 +2,8 @@ title: "Intro to MetricFlow" description: Getting started with the dbt and MetricFlow hoverSnippet: Learn how to get started with the dbt and MetricFlow +pagination_next: "docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup" +pagination_prev: null --- Flying cars, hoverboards, and true self-service analytics: this is the future we were promised. The first two might still be a few years out, but real self-service analytics is here today. With dbt Cloud's Semantic Layer, you can resolve the tension between accuracy and flexibility that has hampered analytics tools for years, empowering everybody in your organization to explore a shared reality of metrics. Best of all for analytics engineers, building with these new tools will significantly [DRY](https://docs.getdbt.com/terms/dry) up and simplify your codebase. As you'll see, the deep interaction between your dbt models and the Semantic Layer make your dbt project the ideal place to craft your metrics. 
diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md index 5fea4fe8695..20643391a82 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md @@ -2,6 +2,7 @@ title: "Set up MetricFlow" description: Getting started with the dbt and MetricFlow hoverSnippet: Learn how to get started with the dbt and MetricFlow +pagination_next: "docs/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models" --- ## Getting started diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md index a2dc55e37ae..3c33c08874c 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md @@ -2,6 +2,7 @@ title: "Building semantic models" description: Getting started with the dbt and MetricFlow hoverSnippet: Learn how to get started with the dbt and MetricFlow +pagination_next: "docs/best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics" --- ## How to build a semantic model diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics.md index da83adbdc69..9f7849299b9 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics.md @@ -2,6 +2,7 @@ title: "Building metrics" description: Getting started with the dbt and MetricFlow hoverSnippet: Learn how to get started with the dbt and MetricFlow +pagination_next: "docs/best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart" --- ## How to build metrics diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart.md index dfdba2941e9..68b42ee6aa4 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart.md @@ -2,6 +2,7 @@ title: "Refactor an existing mart" description: Getting started with the dbt and MetricFlow hoverSnippet: Learn how to get started with the dbt and MetricFlow +pagination_next: "docs/best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics" --- ## A new approach diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics.md index fe7438b5800..92ab444172a 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics.md @@ -2,6 +2,7 @@ title: "More advanced metrics" description: Getting started with the dbt and MetricFlow hoverSnippet: Learn how to get started with the dbt and MetricFlow +pagination_next: "docs/best-practices/how-we-build-our-metrics/semantic-layer-7-conclusion" --- ## More 
advanced metric types diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-7-conclusion.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-7-conclusion.md index a1062721177..1870b6b77e4 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-7-conclusion.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-7-conclusion.md @@ -2,6 +2,7 @@ title: "Best practices" description: Getting started with the dbt and MetricFlow hoverSnippet: Learn how to get started with the dbt and MetricFlow +pagination_next: null --- ## Putting it all together From b75662226b7b48c64cfac86169173aab5b72329e Mon Sep 17 00:00:00 2001 From: Jordan Stein Date: Thu, 21 Dec 2023 13:40:35 -0800 Subject: [PATCH 135/204] update release notes --- .../release-notes/74-Dec-2023/dec-sl-updates.md | 17 +---------------- 1 file changed, 1 insertion(+), 16 deletions(-) diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md index be8e5e8e61f..7491d17c039 100644 --- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md +++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md @@ -7,7 +7,7 @@ date: 2023-12-22 --- The dbt Labs team continues to work on adding new features, fixing bugs, and increasing reliability for the dbt Semantic Layer and MetricFlow. -Refer to the following updates and fixes for December 2023. +Refer to the following updates and fixes for December 2023: ## Bug fixes @@ -20,24 +20,9 @@ The following are updates for the dbt Semantic Layer and MetricFlow: - Memory leak — Fixed a memory leak in the JDBC API that would previously lead to intermittent errors when querying it. - Data conversion support — Added support for converting various Redshift and Postgres-specific data types. Previously, the driver would throw an error when encountering columns with those types. -**MetricFlow** - -- Time offset for nested metrics — Implemented time offset for nested derived and ratio metrics. ([MetricFlow Issue #882](https://github.com/dbt-labs/metricflow/issues/882)) -- SQL column name rendering: — Fixed incorrect SQL column name rendering in `WhereConstraintNode`. ([MetricFlow Issue #908](https://github.com/dbt-labs/metricflow/issues/908)) -- Cumulative metrics query error — Fixed the `Unable To Satisfy Query` error with cumulative metrics in Saved Queries. ([MetricFlow Issue #917](https://github.com/dbt-labs/metricflow/issues/917)) -- Dimension-only query — Fixed a bug in dimension-only queries where the filter column is removed before the filter has been applied. ([MetricFlow Issue #923](https://github.com/dbt-labs/metricflow/issues/923)) -- Where constraint column — Ensured retention of the where constraint column until used for nested derived offset metric queries. ([MetricFlow Issue #930](https://github.com/dbt-labs/metricflow/issues/930)) ## Improvements - Deprecation — We deprecated [dbt Metrics and the legacy dbt Semantic Layer](/docs/dbt-versions/release-notes/Dec-2023/legacy-sl), both supported on dbt version 1.5 or lower. This change came into effect on December 15th, 2023. - Improved dbt converter tool — The [dbt converter tool](https://github.com/dbt-labs/dbt-converter) can now help automate some of the work in converting from LookML (Looker's modeling language) for those who are migrating. Previously this wasn’t available. 
-## New features - -- Simplified group-by-item requests. We updated the way the MetricFlow query resolver finds queryable dimensions for metrics. The main improvements ares: - - If the grain of a time dimension in a query is not specified, then the grain of the requested time dimension is resolved to be the finest grain that is available for the queried metrics. For example, say you have two metrics; revenue which has a weekly grain and orders which has a daily grain. If you query these metrics like this: `dbt sl query --metrics revenue,orders --group-by metric_time` metricflow will automatically query these metrics at a weekly grain. - -- Assumes time dimension grain: When using a metric filter, if an ambiguous time dimension doesn't specify the grain, and all used semantic models define this time dimension with the same grain, MetricFlow now automatically assumes the time dimension to be of that grain. - - For example, if you have two daily metrics: `revenue` and `users` — you can now query these metrics without specifying the time dimension grain in the filter: `mf query --metrics users,revenue --group-by metric_time --where "{{ TimeDimension('metric_time') }} = '2017-07-30' "` - From b0bb375f2883bf58bdf91c3c353d3034ed2b8656 Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Thu, 21 Dec 2023 16:42:41 -0500 Subject: [PATCH 136/204] fi broken links --- .../how-we-build-our-metrics/semantic-layer-1-intro.md | 2 +- .../how-we-build-our-metrics/semantic-layer-2-setup.md | 2 +- .../semantic-layer-3-build-semantic-models.md | 2 +- .../how-we-build-our-metrics/semantic-layer-4-build-metrics.md | 2 +- .../semantic-layer-5-refactor-a-mart.md | 2 +- .../semantic-layer-6-advanced-metrics.md | 2 +- 6 files changed, 6 insertions(+), 6 deletions(-) diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-1-intro.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-1-intro.md index 59bdc41a705..e50542a446c 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-1-intro.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-1-intro.md @@ -2,7 +2,7 @@ title: "Intro to MetricFlow" description: Getting started with the dbt and MetricFlow hoverSnippet: Learn how to get started with the dbt and MetricFlow -pagination_next: "docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup" +pagination_next: "best-practices/how-we-build-our-metrics/semantic-layer-2-setup" pagination_prev: null --- diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md index 20643391a82..470445891dc 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-2-setup.md @@ -2,7 +2,7 @@ title: "Set up MetricFlow" description: Getting started with the dbt and MetricFlow hoverSnippet: Learn how to get started with the dbt and MetricFlow -pagination_next: "docs/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models" +pagination_next: "best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models" --- ## Getting started diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md index 3c33c08874c..9c710b286ef 100644 --- 
a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-3-build-semantic-models.md @@ -2,7 +2,7 @@ title: "Building semantic models" description: Getting started with the dbt and MetricFlow hoverSnippet: Learn how to get started with the dbt and MetricFlow -pagination_next: "docs/best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics" +pagination_next: "best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics" --- ## How to build a semantic model diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics.md index 9f7849299b9..003eff9de40 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-4-build-metrics.md @@ -2,7 +2,7 @@ title: "Building metrics" description: Getting started with the dbt and MetricFlow hoverSnippet: Learn how to get started with the dbt and MetricFlow -pagination_next: "docs/best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart" +pagination_next: "best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart" --- ## How to build metrics diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart.md index 68b42ee6aa4..9ae80cbcd29 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-5-refactor-a-mart.md @@ -2,7 +2,7 @@ title: "Refactor an existing mart" description: Getting started with the dbt and MetricFlow hoverSnippet: Learn how to get started with the dbt and MetricFlow -pagination_next: "docs/best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics" +pagination_next: "best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics" --- ## A new approach diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics.md index 92ab444172a..e5c6e452dac 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-6-advanced-metrics.md @@ -2,7 +2,7 @@ title: "More advanced metrics" description: Getting started with the dbt and MetricFlow hoverSnippet: Learn how to get started with the dbt and MetricFlow -pagination_next: "docs/best-practices/how-we-build-our-metrics/semantic-layer-7-conclusion" +pagination_next: "best-practices/how-we-build-our-metrics/semantic-layer-7-conclusion" --- ## More advanced metric types From 03ec70050b076a4cbcb3c58690b1c6b247129328 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 21 Dec 2023 16:49:37 -0500 Subject: [PATCH 137/204] Update website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md --- .../dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md 
b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md index 7491d17c039..598908dc921 100644 --- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md +++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md @@ -1,5 +1,5 @@ --- -title: "dbt Semantic Layer and MetricFlow updates for December 2023" +title: "dbt Semantic Layer updates for December 2023" description: "December 2023: Enhanced Tableau integration, BIGINT support, LookML to MetricFlow conversion, and deprecation of legacy features." sidebar_label: "Update and fixes: dbt Semantic Layer and MetricFlow" sidebar_position: 08 From 31339d2dbd19d93d33a552691b6187812fdb3174 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 21 Dec 2023 16:49:52 -0500 Subject: [PATCH 138/204] Update website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md --- .../dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md index 598908dc921..63ecd37fb4b 100644 --- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md +++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md @@ -1,7 +1,7 @@ --- title: "dbt Semantic Layer updates for December 2023" description: "December 2023: Enhanced Tableau integration, BIGINT support, LookML to MetricFlow conversion, and deprecation of legacy features." -sidebar_label: "Update and fixes: dbt Semantic Layer and MetricFlow" +sidebar_label: "Update and fixes: dbt Semantic Layer" sidebar_position: 08 date: 2023-12-22 --- From 4f9e6c061010edf298a1eb631a0779ebb0b6e961 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 21 Dec 2023 16:50:04 -0500 Subject: [PATCH 139/204] Update website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md --- .../dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md index 63ecd37fb4b..7213962b58f 100644 --- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md +++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md @@ -5,7 +5,7 @@ sidebar_label: "Update and fixes: dbt Semantic Layer" sidebar_position: 08 date: 2023-12-22 --- -The dbt Labs team continues to work on adding new features, fixing bugs, and increasing reliability for the dbt Semantic Layer and MetricFlow. +The dbt Labs team continues to work on adding new features, fixing bugs, and increasing reliability for the dbt Semantic Layer. 
Refer to the following updates and fixes for December 2023: From 2811e4353e0d1a25e00f4c99a6f5616b7d4c3ebb Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 21 Dec 2023 16:50:16 -0500 Subject: [PATCH 140/204] Update website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md --- .../dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md index 7213962b58f..7bb44b44724 100644 --- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md +++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md @@ -11,7 +11,7 @@ Refer to the following updates and fixes for December 2023: ## Bug fixes -The following are updates for the dbt Semantic Layer and MetricFlow: +The following are updates for the dbt Semantic Layer: **dbt Semantic Layer** From 0feec8522c89b80c327211a479c7b0c4bbc7fe6d Mon Sep 17 00:00:00 2001 From: Pat Kearns Date: Fri, 22 Dec 2023 15:56:30 +1100 Subject: [PATCH 141/204] Update website/docs/docs/cloud/connect-data-platform/connect-snowflake.md Co-authored-by: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> --- .../docs/docs/cloud/connect-data-platform/connect-snowflake.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md b/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md index 34b69f56c27..68dbd2f8a42 100644 --- a/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md +++ b/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md @@ -42,7 +42,8 @@ alter user jsmith set rsa_public_key='MIIBIjANBgkqh...'; ``` 2. Finally, set the **Private Key** and **Private Key Passphrase** fields in the **Credentials** page to finish configuring dbt Cloud to authenticate with Snowflake using a key pair. - **Note:** Since dbt 0.16.0, unencrypted private keys are allowed. Only add the passphrase if necessary. + +**Note:** From dbt version 0.16.0 onwards, unencrypted private keys are permitted. Use a passphrase only if needed. Starting from dbt 1.5.0, you have the option to use a private_key string instead of a private_key_path. The private_key string should be in either Base64-encoded DER format, representing the key bytes, or a plain-text PEM format. Refer to Snowflake documentation for more info on how they generate the key. 4. To successfully fill in the Private Key field, you **must** include commented lines. If you're receiving a `Could not deserialize key data` or `JWT token` error, refer to [Troubleshooting](#troubleshooting) for more info. 
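For readers following along with the key pair setup, the `rsa_public_key` registered above comes from Snowflake's documented `openssl` procedure. A minimal sketch, assuming an encrypted key and illustrative file names (use `-nocrypt` in place of `-v2 des3` for an unencrypted key):

```shell
# Generate an encrypted private key in PKCS#8 PEM format
openssl genrsa 2048 | openssl pkcs8 -topk8 -v2 des3 -inform PEM -out rsa_key.p8
# Derive the public key to register with ALTER USER ... SET RSA_PUBLIC_KEY
openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
```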
From 00c20182e7a46dd83ed7fa3268d4a97715f23d8b Mon Sep 17 00:00:00 2001 From: Pat Kearns Date: Fri, 22 Dec 2023 15:56:51 +1100 Subject: [PATCH 142/204] Update website/docs/docs/cloud/connect-data-platform/connect-snowflake.md Co-authored-by: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> --- .../docs/docs/cloud/connect-data-platform/connect-snowflake.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md b/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md index 68dbd2f8a42..7dd4c86b59b 100644 --- a/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md +++ b/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md @@ -44,7 +44,8 @@ alter user jsmith set rsa_public_key='MIIBIjANBgkqh...'; 2. Finally, set the **Private Key** and **Private Key Passphrase** fields in the **Credentials** page to finish configuring dbt Cloud to authenticate with Snowflake using a key pair. **Note:** From dbt version 0.16.0 onwards, unencrypted private keys are permitted. Use a passphrase only if needed. - Starting from dbt 1.5.0, you have the option to use a private_key string instead of a private_key_path. The private_key string should be in either Base64-encoded DER format, representing the key bytes, or a plain-text PEM format. Refer to Snowflake documentation for more info on how they generate the key. +As of dbt version 1.5.0, you can use a `private_key` string in place of `private_key_path`. This `private_key` string can be either Base64-encoded DER format for the key bytes or plain-text PEM format. For more details on key generation, refer to the [Snowflake documentation](https://community.snowflake.com/s/article/How-to-configure-Snowflake-key-pair-authentication-fields-in-dbt-connection). + 4. To successfully fill in the Private Key field, you **must** include commented lines. If you're receiving a `Could not deserialize key data` or `JWT token` error, refer to [Troubleshooting](#troubleshooting) for more info. From ff9b70f70c493d50f650315cbe7758e0b789f40f Mon Sep 17 00:00:00 2001 From: Pat Kearns Date: Fri, 22 Dec 2023 15:57:08 +1100 Subject: [PATCH 143/204] Update website/docs/docs/core/connect-data-platform/snowflake-setup.md Co-authored-by: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> --- website/docs/docs/core/connect-data-platform/snowflake-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/snowflake-setup.md b/website/docs/docs/core/connect-data-platform/snowflake-setup.md index d9d4aa6f3cb..8d42ec523f2 100644 --- a/website/docs/docs/core/connect-data-platform/snowflake-setup.md +++ b/website/docs/docs/core/connect-data-platform/snowflake-setup.md @@ -98,7 +98,7 @@ Along with adding the `authenticator` parameter, be sure to run `alter account s ### Key Pair Authentication -To use key pair authentication, omit a `password` and instead provide a `private_key_path` and, optionally, a `private_key_passphrase`. +To use key pair authentication, skip the `password` and provide a `private_key_path`. If needed, you can also add a `private_key_passphrase`. **Note:** Versions of dbt before 0.16.0 required that private keys were encrypted and a `private_key_passphrase` was provided. Since dbt 0.16.0, unencrypted private keys are allowed. Only add the passphrase if necessary. 
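To make the `private_key` option in this patch concrete, here is a sketch of a `profiles.yml` target that supplies the key inline from an environment variable. The profile name and environment variable are hypothetical, and on dbt versions before 1.5.0 you would use `private_key_path` instead:

```yml
my_snowflake_profile:
  target: dev
  outputs:
    dev:
      type: snowflake
      account: [account id]
      user: [username]
      role: [user role]
      # Inline key string (dbt 1.5.0+): plain-text PEM or Base64-encoded DER
      private_key: "{{ env_var('SNOWFLAKE_PRIVATE_KEY') }}"
      private_key_passphrase: [passphrase, omit if the key is unencrypted]
      database: [database name]
      warehouse: [warehouse name]
      schema: [dbt schema]
```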
Starting from [dbt v1.5.0](/docs/dbt-versions/core), you have the option to use a `private_key` string instead of a `private_key_path`. The `private_key` string should be in either Base64-encoded DER format, representing the key bytes, or a plain-text PEM format. Refer to [Snowflake documentation](https://docs.snowflake.com/developer-guide/python-connector/python-connector-example#using-key-pair-authentication-key-pair-rotation) for more info on how they generate the key. From 868170e34fd0b3cf01399d18cd26dda6e6e2f389 Mon Sep 17 00:00:00 2001 From: Pat Kearns Date: Fri, 22 Dec 2023 15:57:22 +1100 Subject: [PATCH 144/204] Update website/docs/docs/core/connect-data-platform/snowflake-setup.md Co-authored-by: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> --- website/docs/docs/core/connect-data-platform/snowflake-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/snowflake-setup.md b/website/docs/docs/core/connect-data-platform/snowflake-setup.md index 8d42ec523f2..ce021f8013c 100644 --- a/website/docs/docs/core/connect-data-platform/snowflake-setup.md +++ b/website/docs/docs/core/connect-data-platform/snowflake-setup.md @@ -99,7 +99,7 @@ Along with adding the `authenticator` parameter, be sure to run `alter account s ### Key Pair Authentication To use key pair authentication, skip the `password` and provide a `private_key_path`. If needed, you can also add a `private_key_passphrase`. -**Note:** Versions of dbt before 0.16.0 required that private keys were encrypted and a `private_key_passphrase` was provided. Since dbt 0.16.0, unencrypted private keys are allowed. Only add the passphrase if necessary. +**Note**: In dbt versions before 0.16.0, private keys needed encryption and a `private_key_passphrase`. From dbt version 0.16.0 onwards, unencrypted private keys are accepted, so add a passphrase only if necessary. Starting from [dbt v1.5.0](/docs/dbt-versions/core), you have the option to use a `private_key` string instead of a `private_key_path`. The `private_key` string should be in either Base64-encoded DER format, representing the key bytes, or a plain-text PEM format. Refer to [Snowflake documentation](https://docs.snowflake.com/developer-guide/python-connector/python-connector-example#using-key-pair-authentication-key-pair-rotation) for more info on how they generate the key. From f7dc8306ea5b936fa4d4d44b4a2164cf2200ee35 Mon Sep 17 00:00:00 2001 From: sachinthakur96 Date: Fri, 22 Dec 2023 13:01:02 +0530 Subject: [PATCH 145/204] Adding Oauth access --- website/docs/docs/core/connect-data-platform/vertica-setup.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/website/docs/docs/core/connect-data-platform/vertica-setup.md b/website/docs/docs/core/connect-data-platform/vertica-setup.md index 525e1be86fc..b7bb85a537b 100644 --- a/website/docs/docs/core/connect-data-platform/vertica-setup.md +++ b/website/docs/docs/core/connect-data-platform/vertica-setup.md @@ -68,10 +68,12 @@ your-profile: username: [your username] password: [your password] database: [database name] + oauth_access_token: [access token] schema: [dbt schema] connection_load_balance: True backup_server_node: [list of backup hostnames or IPs] retries: [1 or more] + threads: [1 or more] target: dev ``` @@ -92,6 +94,7 @@ your-profile: | username | The username to use to connect to the server. | Yes | None | dbadmin| password |The password to use for authenticating to the server. 
|Yes|None|my_password| database |The name of the database running on the server. |Yes | None | my_db | +| oauth_access_token | To authenticate via OAuth, provide an OAuth Access Token that authorizes a user to the database. | No | "" | Default: "" | schema| The schema to build models into.| No| None |VMart| connection_load_balance| A Boolean value that indicates whether the connection can be redirected to a host in the database other than host.| No| True |True| backup_server_node| List of hosts to connect to if the primary host specified in the connection (host, port) is unreachable. Each item in the list should be either a host string (using default port 5433) or a (host, port) tuple. A host can be a host name or an IP address.| No| None |['123.123.123.123','www.abc.com',('123.123.123.124',5433)]| From 30cebdc9b3509012587232311edd918f47fa74a9 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 22 Dec 2023 08:01:03 -0500 Subject: [PATCH 146/204] Update keyboard-shortcuts.md add shift option click option per slack convo with user https://getdbt.slack.com/archives/C03SAHKKG2Z/p1703249576071509?thread_ts=1703191150.925439&cid=C03SAHKKG2Z --- .../docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md b/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md index 121cab68ce7..d00a5a7d939 100644 --- a/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md +++ b/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md @@ -13,14 +13,14 @@ Use this dbt Cloud IDE page to help you quickly reference some common operation |--------|----------------|------------------| | View a full list of editor shortcuts | Fn-F1 | Fn-F1 | | Select a file to open | Command-O | Control-O | -| Open the command palette to invoke dbt commands and actions | Command-P or Command-Shift-P | Control-P or Control-Shift-P | -| Multi-edit by selecting multiple lines | Option-click or Shift-Option-Command | Hold Alt and click | +| Close currently active editor tab | Option-W | Alt-W | | Preview code | Command-Enter | Control-Enter | | Compile code | Command-Shift-Enter | Control-Shift-Enter | -| Reveal a list of dbt functions | Enter two underscores `__` | Enter two underscores `__` | +| Reveal a list of dbt functions in the editor | Enter two underscores `__` | Enter two underscores `__` | +| Open the command palette to invoke dbt commands and actions | Command-P
<br /> Command-Shift-P | Control-P
<br /> Control-Shift-P | +| Multi-edit in the editor by selecting multiple lines | Option-Click
<br /> Shift-Option-Command
<br /> Shift-Option-Click | Hold Alt and Click | | Toggle open the [Invocation history drawer](/docs/cloud/dbt-cloud-ide/ide-user-interface#invocation-history) located on the bottom of the IDE. | Control-backtick (or Control + `) | Control-backtick (or Ctrl + `) | -| Add a block comment to selected code. SQL files will use the Jinja syntax `({# #})` rather than the SQL one `(/* */)`.
<br /> <br />
Markdown files will use the Markdown syntax `()` | Command-Option-/ | Control-Alt-/ | -| Close the currently active editor tab | Option-W | Alt-W | +| Add a block comment to the selected code. SQL files will use the Jinja syntax `({# #})` rather than the SQL one `(/* */)`.
<br /> <br />
Markdown files will use the Markdown syntax `()` | Command-Option-/ | Control-Alt-/ | ## Related docs From 2c47b81211b84381e0c710c4d945733f88da09bc Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 22 Dec 2023 08:24:49 -0500 Subject: [PATCH 147/204] Update keyboard-shortcuts.md --- website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md b/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md index d00a5a7d939..9332a116de0 100644 --- a/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md +++ b/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md @@ -17,8 +17,8 @@ Use this dbt Cloud IDE page to help you quickly reference some common operation | Preview code | Command-Enter | Control-Enter | | Compile code | Command-Shift-Enter | Control-Shift-Enter | | Reveal a list of dbt functions in the editor | Enter two underscores `__` | Enter two underscores `__` | -| Open the command palette to invoke dbt commands and actions | Command-P
<br /> Command-Shift-P | Control-P
<br /> Control-Shift-P | -| Multi-edit in the editor by selecting multiple lines | Option-Click
<br /> Shift-Option-Command
<br /> Shift-Option-Click | Hold Alt and Click | +| Open the command palette to invoke dbt commands and actions | - Command-P
<br /> - Command-Shift-P | Control-P
<br /> Control-Shift-P | +| Multi-edit in the editor by selecting multiple lines | - Option-Click
<br /> - Shift-Option-Command
<br /> - Shift-Option-Click | Hold Alt and Click | | Toggle open the [Invocation history drawer](/docs/cloud/dbt-cloud-ide/ide-user-interface#invocation-history) located on the bottom of the IDE. | Control-backtick (or Control + `) | Control-backtick (or Ctrl + `) | | Add a block comment to the selected code. SQL files will use the Jinja syntax `({# #})` rather than the SQL one `(/* */)`.
<br /> <br />
Markdown files will use the Markdown syntax `()` | Command-Option-/ | Control-Alt-/ | From f48745c5e387b44ab151781c5b51defcafba6437 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 22 Dec 2023 08:40:35 -0500 Subject: [PATCH 148/204] Update website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md --- website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md b/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md index 9332a116de0..daf417dc4cf 100644 --- a/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md +++ b/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md @@ -18,7 +18,7 @@ Use this dbt Cloud IDE page to help you quickly reference some common operation | Compile code | Command-Shift-Enter | Control-Shift-Enter | | Reveal a list of dbt functions in the editor | Enter two underscores `__` | Enter two underscores `__` | | Open the command palette to invoke dbt commands and actions | - Command-P
<br /> - Command-Shift-P | Control-P
<br /> Control-Shift-P | -| Multi-edit in the editor by selecting multiple lines | - Option-Click
<br /> - Shift-Option-Command
<br /> - Shift-Option-Click | Hold Alt and Click | +| Multi-edit in the editor by selecting multiple lines | Option-Click / Shift-Option-Command / Shift-Option-Click | Hold Alt and Click | | Toggle open the [Invocation history drawer](/docs/cloud/dbt-cloud-ide/ide-user-interface#invocation-history) located on the bottom of the IDE. | Control-backtick (or Control + `) | Control-backtick (or Ctrl + `) | | Add a block comment to the selected code. SQL files will use the Jinja syntax `({# #})` rather than the SQL one `(/* */)`.
<br /> <br />
Markdown files will use the Markdown syntax `()` | Command-Option-/ | Control-Alt-/ | From f87032929ada21fa0014914e7b860c7f736088ec Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 22 Dec 2023 08:41:15 -0500 Subject: [PATCH 149/204] Update website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md --- website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md b/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md index daf417dc4cf..1e847e0a4f2 100644 --- a/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md +++ b/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md @@ -17,7 +17,7 @@ Use this dbt Cloud IDE page to help you quickly reference some common operation | Preview code | Command-Enter | Control-Enter | | Compile code | Command-Shift-Enter | Control-Shift-Enter | | Reveal a list of dbt functions in the editor | Enter two underscores `__` | Enter two underscores `__` | -| Open the command palette to invoke dbt commands and actions | - Command-P
<br /> - Command-Shift-P | Control-P
<br /> Control-Shift-P | +| Open the command palette to invoke dbt commands and actions | Command-P / Command-Shift-P | Control-P / Control-Shift-P | | Multi-edit in the editor by selecting multiple lines | Option-Click / Shift-Option-Command / Shift-Option-Click | Hold Alt and Click | | Toggle open the [Invocation history drawer](/docs/cloud/dbt-cloud-ide/ide-user-interface#invocation-history) located on the bottom of the IDE. | Control-backtick (or Control + `) | Control-backtick (or Ctrl + `) | | Add a block comment to the selected code. SQL files will use the Jinja syntax `({# #})` rather than the SQL one `(/* */)`.
<br /> <br />
Markdown files will use the Markdown syntax `()` | Command-Option-/ | Control-Alt-/ | From 51b5a4ce0db9fe450104e39ae5d934ac21eebea6 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 22 Dec 2023 09:09:05 -0500 Subject: [PATCH 150/204] Update website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md --- website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md b/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md index 1e847e0a4f2..de456e52655 100644 --- a/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md +++ b/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md @@ -19,7 +19,7 @@ Use this dbt Cloud IDE page to help you quickly reference some common operation | Reveal a list of dbt functions in the editor | Enter two underscores `__` | Enter two underscores `__` | | Open the command palette to invoke dbt commands and actions | Command-P / Command-Shift-P | Control-P / Control-Shift-P | | Multi-edit in the editor by selecting multiple lines | Option-Click / Shift-Option-Command / Shift-Option-Click | Hold Alt and Click | -| Toggle open the [Invocation history drawer](/docs/cloud/dbt-cloud-ide/ide-user-interface#invocation-history) located on the bottom of the IDE. | Control-backtick (or Control + `) | Control-backtick (or Ctrl + `) | +| Open the [Invocation history drawer](/docs/cloud/dbt-cloud-ide/ide-user-interface#invocation-history) located at the bottom of the IDE. | Control-backtick (or Control + `) | Control-backtick (or Ctrl + `) | | Add a block comment to the selected code. SQL files will use the Jinja syntax `({# #})` rather than the SQL one `(/* */)`.
<br /> <br />
Markdown files will use the Markdown syntax `()` | Command-Option-/ | Control-Alt-/ | ## Related docs From 45e719247438a25a30d78287057b071308305012 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 22 Dec 2023 09:16:06 -0500 Subject: [PATCH 151/204] Update website/docs/docs/core/connect-data-platform/vertica-setup.md --- website/docs/docs/core/connect-data-platform/vertica-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/vertica-setup.md b/website/docs/docs/core/connect-data-platform/vertica-setup.md index 113b71c05d1..8e499d68b3e 100644 --- a/website/docs/docs/core/connect-data-platform/vertica-setup.md +++ b/website/docs/docs/core/connect-data-platform/vertica-setup.md @@ -6,7 +6,7 @@ meta: authors: 'Vertica (Former authors: Matthew Carter, Andy Regan, Andrew Hedengren)' github_repo: 'vertica/dbt-vertica' pypi_package: 'dbt-vertica' - min_core_version: 'v1.7.0 and newer' + min_core_version: 'v1.7.0' cloud_support: 'Not Supported' min_supported_version: 'Vertica 23.4.0' slack_channel_name: 'n/a' From 13d1fcf10cbf7c81218a6fafaf7f11b146c25616 Mon Sep 17 00:00:00 2001 From: Amy Chen Date: Fri, 22 Dec 2023 09:18:07 -0500 Subject: [PATCH 152/204] update author --- website/blog/authors.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/authors.yml b/website/blog/authors.yml index 82cc300bdc8..a3548575b6e 100644 --- a/website/blog/authors.yml +++ b/website/blog/authors.yml @@ -1,6 +1,6 @@ amy_chen: image_url: /img/blog/authors/achen.png - job_title: Product Partnerships Manager + job_title: Product Ecosystem Manager links: - icon: fa-linkedin url: https://www.linkedin.com/in/yuanamychen/ From 28e9d4c3d394566e66b65c9ca46f93fc1a30f111 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 22 Dec 2023 09:20:09 -0500 Subject: [PATCH 153/204] Update website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md --- website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md b/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md index de456e52655..61fe47a235a 100644 --- a/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md +++ b/website/docs/docs/cloud/dbt-cloud-ide/keyboard-shortcuts.md @@ -19,7 +19,7 @@ Use this dbt Cloud IDE page to help you quickly reference some common operation | Reveal a list of dbt functions in the editor | Enter two underscores `__` | Enter two underscores `__` | | Open the command palette to invoke dbt commands and actions | Command-P / Command-Shift-P | Control-P / Control-Shift-P | | Multi-edit in the editor by selecting multiple lines | Option-Click / Shift-Option-Command / Shift-Option-Click | Hold Alt and Click | -| Open the [Invocation history drawer](/docs/cloud/dbt-cloud-ide/ide-user-interface#invocation-history) located at the bottom of the IDE. | Control-backtick (or Control + `) | Control-backtick (or Ctrl + `) | +| Open the [**Invocation History Drawer**](/docs/cloud/dbt-cloud-ide/ide-user-interface#invocation-history) located at the bottom of the IDE. | Control-backtick (or Control + `) | Control-backtick (or Ctrl + `) | | Add a block comment to the selected code. SQL files will use the Jinja syntax `({# #})` rather than the SQL one `(/* */)`.

Markdown files will use the Markdown syntax `()` | Command-Option-/ | Control-Alt-/ | ## Related docs From cc0ca46db3c792b9ecc5bdab96b380214eeba4cb Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 22 Dec 2023 09:34:15 -0500 Subject: [PATCH 154/204] Update dec-sl-updates.md --- .../release-notes/74-Dec-2023/dec-sl-updates.md | 8 +------- 1 file changed, 1 insertion(+), 7 deletions(-) diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md index 7bb44b44724..401b43fb333 100644 --- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md +++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/dec-sl-updates.md @@ -5,16 +5,10 @@ sidebar_label: "Update and fixes: dbt Semantic Layer" sidebar_position: 08 date: 2023-12-22 --- -The dbt Labs team continues to work on adding new features, fixing bugs, and increasing reliability for the dbt Semantic Layer. - -Refer to the following updates and fixes for December 2023: +The dbt Labs team continues to work on adding new features, fixing bugs, and increasing reliability for the dbt Semantic Layer. The following list explains the updates and fixes for December 2023 in more detail. ## Bug fixes -The following are updates for the dbt Semantic Layer: - -**dbt Semantic Layer** - - Tableau integration — The dbt Semantic Layer integration with Tableau now supports queries that resolve to a "NOT IN" clause. This applies to using "exclude" in the filtering user interface. Previously it wasn’t supported. - `BIGINT` support — The dbt Semantic Layer can now support `BIGINT` values with precision greater than 18. Previously it would return an error. - Memory leak — Fixed a memory leak in the JDBC API that would previously lead to intermittent errors when querying it. From f5a2a3ed964378917d783f297eee0ae953f76101 Mon Sep 17 00:00:00 2001 From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Date: Fri, 22 Dec 2023 11:39:57 -0500 Subject: [PATCH 155/204] Update website/docs/docs/cloud/connect-data-platform/connect-snowflake.md --- .../docs/docs/cloud/connect-data-platform/connect-snowflake.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md b/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md index 7dd4c86b59b..eb6aba0c260 100644 --- a/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md +++ b/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md @@ -47,7 +47,7 @@ alter user jsmith set rsa_public_key='MIIBIjANBgkqh...'; As of dbt version 1.5.0, you can use a `private_key` string in place of `private_key_path`. This `private_key` string can be either Base64-encoded DER format for the key bytes or plain-text PEM format. For more details on key generation, refer to the [Snowflake documentation](https://community.snowflake.com/s/article/How-to-configure-Snowflake-key-pair-authentication-fields-in-dbt-connection). -4. To successfully fill in the Private Key field, you **must** include commented lines. If you're receiving a `Could not deserialize key data` or `JWT token` error, refer to [Troubleshooting](#troubleshooting) for more info. +4. To successfully fill in the Private Key field, you _must_ include commented lines. If you receive a `Could not deserialize key data` or `JWT token` error, refer to [Troubleshooting](#troubleshooting) for more info. 
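For context on the key-pair commits above (the example hunk continues just below), the `alter user` statement in the diff context is the Snowflake-side half of the setup. A minimal sketch follows, reusing the `jsmith` user from the hunk context; the key value is a truncated placeholder, not a working key.

```sql
-- Register the public key on the Snowflake user, as in the hunk context above.
alter user jsmith set rsa_public_key='MIIBIjANBgkqh...';

-- Verify the registration: the RSA_PUBLIC_KEY_FP property should now show a fingerprint.
desc user jsmith;
```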
**Example:**

From eada5eed7173880c28aea88529bb31f5a8a908b0 Mon Sep 17 00:00:00 2001
From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com>
Date: Fri, 22 Dec 2023 11:42:33 -0500
Subject: [PATCH 156/204] Update website/docs/docs/core/connect-data-platform/snowflake-setup.md

---
 website/docs/docs/core/connect-data-platform/snowflake-setup.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/docs/core/connect-data-platform/snowflake-setup.md b/website/docs/docs/core/connect-data-platform/snowflake-setup.md
index ce021f8013c..2ab5e64e36a 100644
--- a/website/docs/docs/core/connect-data-platform/snowflake-setup.md
+++ b/website/docs/docs/core/connect-data-platform/snowflake-setup.md
@@ -99,7 +99,7 @@ Along with adding the `authenticator` parameter, be sure to run `alter account s
 ### Key Pair Authentication

 To use key pair authentication, skip the `password` and provide a `private_key_path`. If needed, you can also add a `private_key_passphrase`.

-**Note**: In dbt versions before 0.16.0, private keys needed encryption and a `private_key_passphrase`. From dbt version 0.16.0 onwards, unencrypted private keys are accepted, so add a passphrase only if necessary.
+**Note**: Unencrypted private keys are accepted, so add a passphrase only if necessary.

 Starting from [dbt v1.5.0](/docs/dbt-versions/core), you have the option to use a `private_key` string instead of a `private_key_path`. The `private_key` string should be in either Base64-encoded DER format, representing the key bytes, or a plain-text PEM format. Refer to [Snowflake documentation](https://docs.snowflake.com/developer-guide/python-connector/python-connector-example#using-key-pair-authentication-key-pair-rotation) for more info on how they generate the key.

From bbf7027add290376b3b3a1323e7ab64bfa03f366 Mon Sep 17 00:00:00 2001
From: Matt Shaver <60105315+matthewshaver@users.noreply.github.com>
Date: Fri, 22 Dec 2023 11:43:20 -0500
Subject: [PATCH 157/204] Update website/docs/docs/cloud/connect-data-platform/connect-snowflake.md

---
 .../docs/cloud/connect-data-platform/connect-snowflake.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md b/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md
index eb6aba0c260..c265529fb49 100644
--- a/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md
+++ b/website/docs/docs/cloud/connect-data-platform/connect-snowflake.md
@@ -43,7 +43,7 @@ alter user jsmith set rsa_public_key='MIIBIjANBgkqh...';

 2. Finally, set the **Private Key** and **Private Key Passphrase** fields in the **Credentials** page to finish configuring dbt Cloud to authenticate with Snowflake using a key pair.

-**Note:** From dbt version 0.16.0 onwards, unencrypted private keys are permitted. Use a passphrase only if needed.
+**Note:** Unencrypted private keys are permitted. Use a passphrase only if needed.

 As of dbt version 1.5.0, you can use a `private_key` string in place of `private_key_path`. This `private_key` string can be either Base64-encoded DER format for the key bytes or plain-text PEM format. For more details on key generation, refer to the [Snowflake documentation](https://community.snowflake.com/s/article/How-to-configure-Snowflake-key-pair-authentication-fields-in-dbt-connection).

From aeb2fc955930768741600b4b003313c2741ff53a Mon Sep 17 00:00:00 2001
From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com>
Date: Fri, 22 Dec 2023 12:28:32 -0500
Subject: [PATCH 158/204] Update website/docs/docs/deploy/job-scheduler.md

---
 website/docs/docs/deploy/job-scheduler.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/docs/deploy/job-scheduler.md b/website/docs/docs/deploy/job-scheduler.md
index df9cb09413e..7a4cd740804 100644
--- a/website/docs/docs/deploy/job-scheduler.md
+++ b/website/docs/docs/deploy/job-scheduler.md
@@ -31,7 +31,7 @@ Familiarize yourself with these useful terms to help you understand how the job
 | Over-scheduled job | A situation when a cron-scheduled job's run duration becomes longer than the frequency of the job’s schedule, resulting in a job queue that will grow faster than the scheduler can process the job’s runs. |
 | Prep time | The time dbt Cloud takes to create a short-lived environment to execute the job commands in the user's cloud data platform. Prep time varies most significantly at the top of the hour when the dbt Cloud Scheduler experiences a lot of run traffic. |
 | Run | A single, unique execution of a dbt job. |
-| Run slot | Run slots control the number of jobs that can run concurrently. Developer plan has a fixed number of run slots, while Enterprise and Team users have [unlimited run slots](/docs/dbt-versions/release-notes/July-2023/faster-run#unlimited-job-concurrency-for-enterprise-accounts). Each running job occupies a run slot for the duration of the run.<br></br> Team and Developer plans include only one project each. For additional projects, consider upgrading to the [Enterprise plan](https://www.getdbt.com/pricing/).|
+| Run slot | Run slots control the number of jobs that can run concurrently. Developer plans have a fixed number of run slots, while Enterprise and Team plans have [unlimited run slots](/docs/dbt-versions/release-notes/July-2023/faster-run#unlimited-job-concurrency-for-enterprise-accounts). Each running job occupies a run slot for the duration of the run.<br></br> Team and Developer plans are limited to one project each. For additional projects, consider upgrading to the [Enterprise plan](https://www.getdbt.com/pricing/).|
 | Threads | When dbt builds a project's DAG, it tries to parallelize the execution by using threads. The [thread](/docs/running-a-dbt-project/using-threads) count is the maximum number of paths through the DAG that dbt can work on simultaneously. The default thread count in a job is 4. |
 | Wait time | Amount of time that dbt Cloud waits before running a job, either because there are no available slots or because a previous run of the same job is still in progress. |

From 6b7cfadbe9746803184546fda2ceb67261338436 Mon Sep 17 00:00:00 2001
From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com>
Date: Fri, 22 Dec 2023 12:30:09 -0500
Subject: [PATCH 159/204] Update website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md

---
 .../docs/dbt-versions/release-notes/79-July-2023/faster-run.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md
index 69ab76c6050..bf3be3dd6cb 100644
--- a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md
+++ b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md
@@ -33,7 +33,7 @@ Something to note, each running job occupies a run slot for its duration, and if
 For more feature details, refer to the [dbt Cloud pricing page](https://www.getdbt.com/pricing/).

-- **Update December 2023: New Team plans with unlimited job concurrency**
+- **Update: New Team plans with unlimited job concurrency**
We've introduced a change to our dbt Cloud Scheduler for newly created Team plan accounts:

- Unlimited Job concurrency for new Team plans — New accounts on the Team plan now benefit from unlimited job concurrency. - Existing Team plans — Existing Team plan accounts will continue to operate with their original fixed number of run slots. From 7b6a88a686a50f5829495194e49c8ff99f6af1ee Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 22 Dec 2023 12:43:56 -0500 Subject: [PATCH 160/204] Update website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md --- .../docs/dbt-versions/release-notes/79-July-2023/faster-run.md | 1 - 1 file changed, 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md index bf3be3dd6cb..02a578a92c5 100644 --- a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md +++ b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md @@ -34,7 +34,6 @@ Something to note, each running job occupies a run slot for its duration, and if For more feature details, refer to the [dbt Cloud pricing page](https://www.getdbt.com/pricing/). - **Update: New Team plans with unlimited job concurrency**
- We've introduced a change to our dbt Cloud Scheduler for newly created Team plan accounts:

- Unlimited Job concurrency for new Team plans — New accounts on the Team plan now benefit from unlimited job concurrency. - Existing Team plans — Existing Team plan accounts will continue to operate with their original fixed number of run slots. - Project limitations — Both Team and Developer plans are limited to one project each. For larger-scale needs, our [Enterprise plan](https://www.getdbt.com/pricing/) provides unlimited job concurrency and project capacity. From f1f8c93c704e1eda1bf56f9d3e8b7bc010ef5600 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 22 Dec 2023 12:44:43 -0500 Subject: [PATCH 161/204] Update faster-run.md --- .../dbt-versions/release-notes/79-July-2023/faster-run.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md index 02a578a92c5..a72d49ab1a9 100644 --- a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md +++ b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md @@ -34,6 +34,6 @@ Something to note, each running job occupies a run slot for its duration, and if For more feature details, refer to the [dbt Cloud pricing page](https://www.getdbt.com/pricing/). - **Update: New Team plans with unlimited job concurrency**
- - Unlimited Job concurrency for new Team plans — New accounts on the Team plan now benefit from unlimited job concurrency. - - Existing Team plans — Existing Team plan accounts will continue to operate with their original fixed number of run slots. - - Project limitations — Both Team and Developer plans are limited to one project each. For larger-scale needs, our [Enterprise plan](https://www.getdbt.com/pricing/) provides unlimited job concurrency and project capacity. +- Unlimited Job concurrency for new Team plans — New accounts on the Team plan now benefit from unlimited job concurrency. + - Existing Team plans — Existing Team plan accounts will continue to operate with their original fixed number of run slots. + - Project limitations — Both Team and Developer plans are limited to one project each. For larger-scale needs, our [Enterprise plan](https://www.getdbt.com/pricing/) provides unlimited job concurrency and project capacity. From 8d54fc94982da26c2a3abf2e8dfd99cfcd1877f0 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 22 Dec 2023 12:45:13 -0500 Subject: [PATCH 162/204] Update website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md --- .../docs/dbt-versions/release-notes/79-July-2023/faster-run.md | 1 - 1 file changed, 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md index a72d49ab1a9..fe07cebbeb5 100644 --- a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md +++ b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md @@ -34,6 +34,5 @@ Something to note, each running job occupies a run slot for its duration, and if For more feature details, refer to the [dbt Cloud pricing page](https://www.getdbt.com/pricing/). - **Update: New Team plans with unlimited job concurrency**
-- Unlimited Job concurrency for new Team plans — New accounts on the Team plan now benefit from unlimited job concurrency. - Existing Team plans — Existing Team plan accounts will continue to operate with their original fixed number of run slots. - Project limitations — Both Team and Developer plans are limited to one project each. For larger-scale needs, our [Enterprise plan](https://www.getdbt.com/pricing/) provides unlimited job concurrency and project capacity. From 14330f6ecbb9600975ade9f6bebe72548e84f7fa Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 22 Dec 2023 12:45:43 -0500 Subject: [PATCH 163/204] Update website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md --- .../docs/dbt-versions/release-notes/79-July-2023/faster-run.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md index fe07cebbeb5..e06f91516a6 100644 --- a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md +++ b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md @@ -33,6 +33,6 @@ Something to note, each running job occupies a run slot for its duration, and if For more feature details, refer to the [dbt Cloud pricing page](https://www.getdbt.com/pricing/). -- **Update: New Team plans with unlimited job concurrency**
+Note, newly created Team accounts after xyz can benefit can now benefit from unlimited job concurrency: - Existing Team plans — Existing Team plan accounts will continue to operate with their original fixed number of run slots. - Project limitations — Both Team and Developer plans are limited to one project each. For larger-scale needs, our [Enterprise plan](https://www.getdbt.com/pricing/) provides unlimited job concurrency and project capacity. From 7f5f575197ee9a0fe89ceb6cf87303e7b53977a6 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 22 Dec 2023 12:46:20 -0500 Subject: [PATCH 164/204] Update faster-run.md --- .../dbt-versions/release-notes/79-July-2023/faster-run.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md index e06f91516a6..ee6de2c785e 100644 --- a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md +++ b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md @@ -33,6 +33,6 @@ Something to note, each running job occupies a run slot for its duration, and if For more feature details, refer to the [dbt Cloud pricing page](https://www.getdbt.com/pricing/). -Note, newly created Team accounts after xyz can benefit can now benefit from unlimited job concurrency: - - Existing Team plans — Existing Team plan accounts will continue to operate with their original fixed number of run slots. - - Project limitations — Both Team and Developer plans are limited to one project each. For larger-scale needs, our [Enterprise plan](https://www.getdbt.com/pricing/) provides unlimited job concurrency and project capacity. +Note, newly created Team accounts after xyz can now benefit from unlimited job concurrency: +- Existing Team plans — Existing Team plan accounts will continue to operate with their original fixed number of run slots. +- Project limitations — Both Team and Developer plans are limited to one project each. For larger-scale needs, our [Enterprise plan](https://www.getdbt.com/pricing/) provides unlimited job concurrency and project capacity. From d4457938d8f3b5f5295fa7b30f306c9d4b4e1b0c Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 22 Dec 2023 12:55:32 -0500 Subject: [PATCH 165/204] Update website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md --- .../docs/dbt-versions/release-notes/79-July-2023/faster-run.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md index ee6de2c785e..05bc517ce76 100644 --- a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md +++ b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md @@ -33,6 +33,6 @@ Something to note, each running job occupies a run slot for its duration, and if For more feature details, refer to the [dbt Cloud pricing page](https://www.getdbt.com/pricing/). -Note, newly created Team accounts after xyz can now benefit from unlimited job concurrency: +Note, newly created Team accounts after July 2023 benefit from unlimited job concurrency: - Existing Team plans — Existing Team plan accounts will continue to operate with their original fixed number of run slots. 
- Project limitations — Both Team and Developer plans are limited to one project each. For larger-scale needs, our [Enterprise plan](https://www.getdbt.com/pricing/) provides unlimited job concurrency and project capacity. From 19b107be6ca6118170121b02d248684b84e2f0bb Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 22 Dec 2023 12:56:53 -0500 Subject: [PATCH 166/204] Update website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md --- .../docs/dbt-versions/release-notes/79-July-2023/faster-run.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md index 05bc517ce76..b93dab551ba 100644 --- a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md +++ b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md @@ -35,4 +35,4 @@ For more feature details, refer to the [dbt Cloud pricing page](https://www.getd Note, newly created Team accounts after July 2023 benefit from unlimited job concurrency: - Existing Team plans — Existing Team plan accounts will continue to operate with their original fixed number of run slots. -- Project limitations — Both Team and Developer plans are limited to one project each. For larger-scale needs, our [Enterprise plan](https://www.getdbt.com/pricing/) provides unlimited job concurrency and project capacity. +- Project limitations — Both Team and Developer plans are limited to one project each. For larger-scale needs, our [Enterprise plan](https://www.getdbt.com/pricing/) offers features such as audit logging, unlimited job concurrency and projects, and more. From 1ce8a8e5f0fe7a56f8e9498ccc101a2702f90331 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 22 Dec 2023 15:13:48 -0500 Subject: [PATCH 167/204] Update faster-run.md fold in maggie's feedback --- .../dbt-versions/release-notes/79-July-2023/faster-run.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md index b93dab551ba..fab173caf95 100644 --- a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md +++ b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md @@ -27,12 +27,12 @@ Jobs scheduled at the top of the hour used to take over 106 seconds to prepare b Our enhanced scheduler offers more durability and empowers users to run jobs effortlessly. -This means Enterprise, multi-tenant accounts can now enjoy the advantages of unlimited job concurrency. Previously limited to a fixed number of run slots, Enterprise accounts now have the freedom to operate without constraints. Single-tenant support will be coming soon. Team plan customers will continue to have only 2 run slots. +This means Enterprise, multi-tenant accounts can now enjoy the advantages of unlimited job concurrency. Previously limited to a fixed number of run slots, Enterprise accounts now have the freedom to operate without constraints. Single-tenant support will be coming soon. Something to note, each running job occupies a run slot for its duration, and if all slots are occupied, jobs will queue accordingly. For more feature details, refer to the [dbt Cloud pricing page](https://www.getdbt.com/pricing/). 
-Note, newly created Team accounts after July 2023 benefit from unlimited job concurrency: -- Existing Team plans — Existing Team plan accounts will continue to operate with their original fixed number of run slots. +Note, Team accounts created after July 2023 benefit from unlimited job concurrency: +- Legacy Team accounts have a fixed number of run slots. - Project limitations — Both Team and Developer plans are limited to one project each. For larger-scale needs, our [Enterprise plan](https://www.getdbt.com/pricing/) offers features such as audit logging, unlimited job concurrency and projects, and more. From c6b01b380075a47ea144a0507399932dc960cd99 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 22 Dec 2023 15:20:32 -0500 Subject: [PATCH 168/204] Update website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md --- .../docs/dbt-versions/release-notes/79-July-2023/faster-run.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md index fab173caf95..5cf1f97ff25 100644 --- a/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md +++ b/website/docs/docs/dbt-versions/release-notes/79-July-2023/faster-run.md @@ -35,4 +35,4 @@ For more feature details, refer to the [dbt Cloud pricing page](https://www.getd Note, Team accounts created after July 2023 benefit from unlimited job concurrency: - Legacy Team accounts have a fixed number of run slots. -- Project limitations — Both Team and Developer plans are limited to one project each. For larger-scale needs, our [Enterprise plan](https://www.getdbt.com/pricing/) offers features such as audit logging, unlimited job concurrency and projects, and more. +- Both Team and Developer plans are limited to one project each. For larger-scale needs, our [Enterprise plan](https://www.getdbt.com/pricing/) offers features such as audit logging, unlimited job concurrency and projects, and more. From b40d90c9eded56e8823681684f44c7f9c8fd4665 Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Tue, 2 Jan 2024 10:03:46 -0500 Subject: [PATCH 169/204] update headers --- website/docs/best-practices/how-we-mesh/mesh-1-intro.md | 2 +- .../docs/best-practices/how-we-mesh/mesh-2-structures.md | 2 +- .../docs/docs/collaborate/govern/project-dependencies.md | 8 ++++---- .../dbt-versions/core-upgrade/01-upgrading-to-v1.6.md | 2 +- website/docs/faqs/Models/unique-model-names.md | 2 +- website/docs/reference/dbt-jinja-functions/ref.md | 3 ++- website/snippets/_packages_or_dependencies.md | 2 +- 7 files changed, 11 insertions(+), 10 deletions(-) diff --git a/website/docs/best-practices/how-we-mesh/mesh-1-intro.md b/website/docs/best-practices/how-we-mesh/mesh-1-intro.md index ba1660a8d82..0f27e64c447 100644 --- a/website/docs/best-practices/how-we-mesh/mesh-1-intro.md +++ b/website/docs/best-practices/how-we-mesh/mesh-1-intro.md @@ -12,7 +12,7 @@ Regardless of your organization's size and complexity, dbt should empower data t dbt Mesh is not a single product: it is a pattern enabled by a convergence of several features in dbt: -- **[Cross-project references](/docs/collaborate/govern/project-dependencies#how-to-use-ref)** - this is the foundational feature that enables the multi-project deployments. `{{ ref() }}`s now work across dbt Cloud projects on Enterprise plans. 
+- **[Cross-project references](/docs/collaborate/govern/project-dependencies#how-to-write-cross-project-ref)** - this is the foundational feature that enables the multi-project deployments. `{{ ref() }}`s now work across dbt Cloud projects on Enterprise plans. - **[dbt Explorer](/docs/collaborate/explore-projects)** - dbt Cloud's metadata-powered documentation platform, complete with full, cross-project lineage. - **Governance** - dbt's new governance features allow you to manage access to your dbt models both within and across projects. - **[Groups](/docs/collaborate/govern/model-access#groups)** - groups allow you to assign models to subsets within a project. diff --git a/website/docs/best-practices/how-we-mesh/mesh-2-structures.md b/website/docs/best-practices/how-we-mesh/mesh-2-structures.md index 9ab633c50ad..345ef22c62d 100644 --- a/website/docs/best-practices/how-we-mesh/mesh-2-structures.md +++ b/website/docs/best-practices/how-we-mesh/mesh-2-structures.md @@ -20,7 +20,7 @@ At a high level, you’ll need to decide: ### Cycle detection -Like resource dependencies, project dependencies are acyclic, meaning they only move in one direction. This prevents `ref` cycles (or loops), which lead to issues with your data workflows. For example, if project B depends on project A, a new model in project A could not import and use a public model from project B. Refer to [Project dependencies](/docs/collaborate/govern/project-dependencies#how-to-use-ref) for more information. +Like resource dependencies, project dependencies are acyclic, meaning they only move in one direction. This prevents `ref` cycles (or loops), which lead to issues with your data workflows. For example, if project B depends on project A, a new model in project A could not import and use a public model from project B. Refer to [Project dependencies](/docs/collaborate/govern/project-dependencies#how-to-write-cross-project-ref) for more information. ## Define your project interfaces by splitting your DAG diff --git a/website/docs/docs/collaborate/govern/project-dependencies.md b/website/docs/docs/collaborate/govern/project-dependencies.md index 569d69a87e6..7e15b101302 100644 --- a/website/docs/docs/collaborate/govern/project-dependencies.md +++ b/website/docs/docs/collaborate/govern/project-dependencies.md @@ -4,16 +4,16 @@ id: project-dependencies sidebar_label: "Project dependencies" description: "Reference public models across dbt projects" pagination_next: null +keyword: dbt mesh, project dependencies, ref, cross project ref, project dependencies --- :::info Available in Public Preview for dbt Cloud Enterprise accounts Project dependencies and cross-project `ref` are features available in [dbt Cloud Enterprise](https://www.getdbt.com/pricing), currently in [Public Preview](/docs/dbt-versions/product-lifecycles#dbt-cloud). -Enterprise users can use these features by designating a [public model](/docs/collaborate/govern/model-access) and adding a [cross-project ref](#how-to-use-ref). +Enterprise users can use these features by designating a [public model](/docs/collaborate/govern/model-access) and adding a [cross-project ref](#how-to-write-cross-project-ref). ::: - For a long time, dbt has supported code reuse and extension by installing other projects as [packages](/docs/build/packages). When you install another project as a package, you are pulling in its full source code, and adding it to your own. This enables you to call macros and run models defined in that other project. 
While this is a great way to reuse code, share utility macros, and establish a starting point for common transformations, it's not a great way to enable collaboration across teams and at scale, especially at larger organizations. @@ -80,9 +80,9 @@ When you're building on top of another team's work, resolving the references in - You don't need to mirror any conditional configuration of the upstream project such as `vars`, environment variables, or `target.name`. You can reference them directly wherever the Finance team is building their models in production. Even if the Finance team makes changes like renaming the model, changing the name of its schema, or [bumping its version](/docs/collaborate/govern/model-versions), your `ref` would still resolve successfully. - You eliminate the risk of accidentally building those models with `dbt run` or `dbt build`. While you can select those models, you can't actually build them. This prevents unexpected warehouse costs and permissions issues. This also ensures proper ownership and cost allocation for each team's models. -### How to use ref +### How to write cross-project ref -**Writing `ref`:** Models referenced from a `project`-type dependency must use [two-argument `ref`](/reference/dbt-jinja-functions/ref#two-argument-variant), including the project name: +**Writing `ref`:** Models referenced from a `project`-type dependency must use [two-argument `ref`](/reference/dbt-jinja-functions/ref#ref-project-specific-models), including the project name: diff --git a/website/docs/docs/dbt-versions/core-upgrade/01-upgrading-to-v1.6.md b/website/docs/docs/dbt-versions/core-upgrade/01-upgrading-to-v1.6.md index 33a038baa9b..f1f7a77e1e1 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/01-upgrading-to-v1.6.md +++ b/website/docs/docs/dbt-versions/core-upgrade/01-upgrading-to-v1.6.md @@ -79,7 +79,7 @@ Support for BigQuery coming soon. [**Deprecation date**](/reference/resource-properties/deprecation_date): Models can declare a deprecation date that will warn model producers and downstream consumers. This enables clear migration windows for versioned models, and provides a mechanism to facilitate removal of immature or little-used models, helping to avoid project bloat. -[Model names](/faqs/Models/unique-model-names) can be duplicated across different namespaces (projects/packages), so long as they are unique within each project/package. We strongly encourage using [two-argument `ref`](/reference/dbt-jinja-functions/ref#two-argument-variant) when referencing a model from a different package/project. +[Model names](/faqs/Models/unique-model-names) can be duplicated across different namespaces (projects/packages), so long as they are unique within each project/package. We strongly encourage using [two-argument `ref`](/reference/dbt-jinja-functions/ref#ref-project-specific-models) when referencing a model from a different package/project. More consistency and flexibility around packages. Resources defined in a package will respect variable and global macro definitions within the scope of that package. - `vars` defined in a package's `dbt_project.yml` are now available in the resolution order when compiling nodes in that package, though CLI `--vars` and the root project's `vars` will still take precedence. See ["Variable Precedence"](/docs/build/project-variables#variable-precedence) for details. 
diff --git a/website/docs/faqs/Models/unique-model-names.md b/website/docs/faqs/Models/unique-model-names.md index c721fca7c6e..7878a5a704c 100644 --- a/website/docs/faqs/Models/unique-model-names.md +++ b/website/docs/faqs/Models/unique-model-names.md @@ -10,7 +10,7 @@ id: unique-model-names Within one project: yes! To build dependencies between models, you need to use the `ref` function, and pass in the model name as an argument. dbt uses that model name to uniquely resolve the `ref` to a specific model. As a result, these model names need to be unique, _even if they are in distinct folders_. -A model in one project can have the same name as a model in another project (installed as a dependency). dbt uses the project name to uniquely identify each model. We call this "namespacing." If you `ref` a model with a duplicated name, it will resolve to the model within the same namespace (package or project), or raise an error because of an ambiguous reference. Use [two-argument `ref`](/reference/dbt-jinja-functions/ref#two-argument-variant) to disambiguate references by specifying the namespace. +A model in one project can have the same name as a model in another project (installed as a dependency). dbt uses the project name to uniquely identify each model. We call this "namespacing." If you `ref` a model with a duplicated name, it will resolve to the model within the same namespace (package or project), or raise an error because of an ambiguous reference. Use [two-argument `ref`](/reference/dbt-jinja-functions/ref#ref-project-specific-models) to disambiguate references by specifying the namespace. Those models will still need to land in distinct locations in the data warehouse. Read the docs on [custom aliases](/docs/build/custom-aliases) and [custom schemas](/docs/build/custom-schemas) for details on how to achieve this. diff --git a/website/docs/reference/dbt-jinja-functions/ref.md b/website/docs/reference/dbt-jinja-functions/ref.md index fda5992e234..87301a3bc63 100644 --- a/website/docs/reference/dbt-jinja-functions/ref.md +++ b/website/docs/reference/dbt-jinja-functions/ref.md @@ -3,6 +3,7 @@ title: "About ref function" sidebar_label: "ref" id: "ref" description: "Read this guide to understand the builtins Jinja function in dbt." +keyword: dbt mesh, project dependencies, ref, cross project ref, project dependencies --- The most important function in dbt is `ref()`; it's impossible to build even moderately complex models without it. `ref()` is how you reference one model within another. This is a very common behavior, as typically models are built to be "stacked" on top of one another. Here is how this looks in practice: @@ -68,7 +69,7 @@ select * from {{ ref('model_name', version=1) }} select * from {{ ref('model_name') }} ``` -### Two-argument variant +### Ref project-specific models You can also use a two-argument variant of the `ref` function. With this variant, you can pass both a namespace (project or package) and model name to `ref` to avoid ambiguity. When using two arguments with projects (not packages), you also need to set [cross project dependencies](/docs/collaborate/govern/project-dependencies). 
diff --git a/website/snippets/_packages_or_dependencies.md b/website/snippets/_packages_or_dependencies.md index 5cc4c67e63c..61014bc2b1a 100644 --- a/website/snippets/_packages_or_dependencies.md +++ b/website/snippets/_packages_or_dependencies.md @@ -12,7 +12,7 @@ There are some important differences between Package dependencies and Project de -Project dependencies are designed for the [dbt Mesh](/best-practices/how-we-mesh/mesh-1-intro) and [cross-project reference](/docs/collaborate/govern/project-dependencies#how-to-use-ref) workflow: +Project dependencies are designed for the [dbt Mesh](/best-practices/how-we-mesh/mesh-1-intro) and [cross-project reference](/docs/collaborate/govern/project-dependencies#how-to-write-cross-project-ref) workflow: - Use `dependencies.yml` when you need to set up cross-project references between different dbt projects, especially in a dbt Mesh setup. - Use `dependencies.yml` when you want to include both projects and non-private dbt packages in your project's dependencies. From 92b5941981c2b27c1d41ca41d085e62773425a15 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Tue, 2 Jan 2024 10:50:25 -0500 Subject: [PATCH 170/204] Update ref.md fold jeremy's feedback without changing header --- website/docs/reference/dbt-jinja-functions/ref.md | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/website/docs/reference/dbt-jinja-functions/ref.md b/website/docs/reference/dbt-jinja-functions/ref.md index 87301a3bc63..7a1893f38d4 100644 --- a/website/docs/reference/dbt-jinja-functions/ref.md +++ b/website/docs/reference/dbt-jinja-functions/ref.md @@ -71,13 +71,17 @@ select * from {{ ref('model_name') }} ### Ref project-specific models -You can also use a two-argument variant of the `ref` function. With this variant, you can pass both a namespace (project or package) and model name to `ref` to avoid ambiguity. When using two arguments with projects (not packages), you also need to set [cross project dependencies](/docs/collaborate/govern/project-dependencies). +You can also reference models from different projects using the two-argument variant of the `ref` function. By specifying both a namespace (which could be a project or package) and model name, you ensure clarity and avoid any ambiguity in the `ref`. This is also useful when dealing with models across various projects or packages. + +When using two arguments with projects (not packages), you also need to set [cross project dependencies](/docs/collaborate/govern/project-dependencies). + +The following syntax demonstrates how to reference a model from a specific project or package: ```sql select * from {{ ref('project_or_package', 'model_name') }} ``` -We recommend using two-argument `ref` any time you are referencing a model defined in a different package or project. While not required in all cases, it's more explicit for you, for dbt, and for future readers of your code. +We recommend using two-argument `ref` any time you are referencing a model defined in a different package or project. While not required in all cases, it's more explicit for you, for dbt, and future readers of your code. 
From 17d4412d8fbaaafd155a7324081a7822fa07d440 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Tue, 2 Jan 2024 10:50:41 -0500 Subject: [PATCH 171/204] Update ref.md --- website/docs/reference/dbt-jinja-functions/ref.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/dbt-jinja-functions/ref.md b/website/docs/reference/dbt-jinja-functions/ref.md index 7a1893f38d4..bc1f3f1ba9e 100644 --- a/website/docs/reference/dbt-jinja-functions/ref.md +++ b/website/docs/reference/dbt-jinja-functions/ref.md @@ -71,7 +71,7 @@ select * from {{ ref('model_name') }} ### Ref project-specific models -You can also reference models from different projects using the two-argument variant of the `ref` function. By specifying both a namespace (which could be a project or package) and model name, you ensure clarity and avoid any ambiguity in the `ref`. This is also useful when dealing with models across various projects or packages. +You can also reference models from different projects using the two-argument variant of the `ref` function. By specifying both a namespace (which could be a project or package) and a model name, you ensure clarity and avoid any ambiguity in the `ref`. This is also useful when dealing with models across various projects or packages. When using two arguments with projects (not packages), you also need to set [cross project dependencies](/docs/collaborate/govern/project-dependencies). From 7b6f19a9d564e8af4341fea78712d87b0bafa2ff Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Tue, 2 Jan 2024 11:03:00 -0500 Subject: [PATCH 172/204] Update website/docs/docs/collaborate/govern/project-dependencies.md --- website/docs/docs/collaborate/govern/project-dependencies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/collaborate/govern/project-dependencies.md b/website/docs/docs/collaborate/govern/project-dependencies.md index 7e15b101302..d0f19f96b7c 100644 --- a/website/docs/docs/collaborate/govern/project-dependencies.md +++ b/website/docs/docs/collaborate/govern/project-dependencies.md @@ -11,7 +11,7 @@ keyword: dbt mesh, project dependencies, ref, cross project ref, project depende Project dependencies and cross-project `ref` are features available in [dbt Cloud Enterprise](https://www.getdbt.com/pricing), currently in [Public Preview](/docs/dbt-versions/product-lifecycles#dbt-cloud). -Enterprise users can use these features by designating a [public model](/docs/collaborate/govern/model-access) and adding a [cross-project ref](#how-to-write-cross-project-ref). +[Enterprise users](https://www.getdbt.com/pricing) can use these features by designating a [public model](/docs/collaborate/govern/model-access) and adding a [cross-project ref](#how-to-write-cross-project-ref). ::: For a long time, dbt has supported code reuse and extension by installing other projects as [packages](/docs/build/packages). When you install another project as a package, you are pulling in its full source code, and adding it to your own. This enables you to call macros and run models defined in that other project. 
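To make the two-argument `ref` that the commits above standardize on concrete, here is a small sketch; `jaffle_finance` and `monthly_revenue` are hypothetical names, not taken from the diffs.

```sql
-- One-argument ref resolves within the current project or package:
select * from {{ ref('monthly_revenue') }}

-- Two-argument ref names the producing project or package explicitly,
-- which cross-project (dbt Mesh) references require:
select * from {{ ref('jaffle_finance', 'monthly_revenue') }}
```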
From fc1aaaa9a4a04e999ced3c707add857295bfc968 Mon Sep 17 00:00:00 2001 From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> Date: Tue, 2 Jan 2024 10:25:36 -0800 Subject: [PATCH 173/204] Update website/docs/reference/parsing.md Co-authored-by: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> --- website/docs/reference/parsing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/parsing.md b/website/docs/reference/parsing.md index 8205f93d013..c845beea05d 100644 --- a/website/docs/reference/parsing.md +++ b/website/docs/reference/parsing.md @@ -41,7 +41,7 @@ The [`PARTIAL_PARSE` global config](/reference/global-configs/parsing) can be en Parse-time attributes (dependencies, configs, and resource properties) are resolved using the parse-time context. When partial parsing is enabled, and certain context variables change, those attributes will _not_ be re-resolved, and are likely to become stale. -In particular, you may see **incorrect results** if these attributes depend on "volatile" context variables, such as [`run_started_at`](/reference/dbt-jinja-functions/run_started_at), [`invocation_id`](/reference/dbt-jinja-functions/invocation_id), or [flags](/reference/dbt-jinja-functions/flags). These variables are likely (or even guaranteed!) to change in each invocation. dbt Labs _strongly discourages_ you from using these variables to set parse-time attributes (dependencies, configs, and resource properties). +In particular, you may see incorrect results if these attributes depend on "volatile" context variables, such as [`run_started_at`](/reference/dbt-jinja-functions/run_started_at), [`invocation_id`](/reference/dbt-jinja-functions/invocation_id), or [flags](/reference/dbt-jinja-functions/flags). These variables are likely (or even guaranteed!) to change in each invocation. dbt Labs _strongly discourages_ you from using these variables to set parse-time attributes (dependencies, configs, and resource properties). Starting in v1.0, dbt _will_ detect changes in environment variables. It will selectively re-parse only the files that depend on that [`env_var`](/reference/dbt-jinja-functions/env_var) value. (If the env var is used in `profiles.yml` or `dbt_project.yml`, a full re-parse is needed.) However, dbt will _not_ re-render **descriptions** that include env vars. If your descriptions include frequently changing env vars (this is highly uncommon), we recommend that you fully re-parse when generating documentation: `dbt --no-partial-parse docs generate`. From 0a14d2e68a5a03d856795d3473dc1880dbba0ab2 Mon Sep 17 00:00:00 2001 From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> Date: Tue, 2 Jan 2024 10:26:41 -0800 Subject: [PATCH 174/204] Update website/docs/reference/parsing.md Co-authored-by: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> --- website/docs/reference/parsing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/parsing.md b/website/docs/reference/parsing.md index c845beea05d..6eed4c96af0 100644 --- a/website/docs/reference/parsing.md +++ b/website/docs/reference/parsing.md @@ -53,7 +53,7 @@ If certain inputs change between runs, dbt will trigger a full re-parse. 
The res
 - dbt version
 - certain widely-used macros (for example, [builtins](/reference/dbt-jinja-functions/builtins), overrides, or `generate_x_name` for `database`/`schema`/`alias`)

-If you're triggering [CI](/docs/deploy/continuous-integration) job runs, the partial parsing benefits are not available on a new pull request (PR) or new branch. However, they are available on subsequent commits to that new PR or branch.
+If you're triggering [CI](/docs/deploy/continuous-integration) job runs, the benefits of partial parsing are not applicable to new pull requests (PR) or new branches. However, they are applied on subsequent commits to the new PR or branch.

 If you ever get into a bad state, you can disable partial parsing and trigger a full re-parse by setting the `PARTIAL_PARSE` global config to false, or by deleting `target/partial_parse.msgpack` (e.g. by running `dbt clean`).

From cb418f9d36fede27971959248ea0725a2ff5483b Mon Sep 17 00:00:00 2001
From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com>
Date: Tue, 2 Jan 2024 10:27:48 -0800
Subject: [PATCH 175/204] Update website/snippets/_cloud-environments-info.md

Co-authored-by: Matt Shaver <60105315+matthewshaver@users.noreply.github.com>
---
 website/snippets/_cloud-environments-info.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/snippets/_cloud-environments-info.md b/website/snippets/_cloud-environments-info.md
index 855a8c66ab0..01ea41936ce 100644
--- a/website/snippets/_cloud-environments-info.md
+++ b/website/snippets/_cloud-environments-info.md
@@ -96,7 +96,7 @@ This feature is only available on the dbt Cloud Enterprise plan.

 At the start of every dbt invocation, dbt reads all the files in your project, extracts information, and constructs an internal manifest containing every object (model, source, macro, and so on). Among other things, it uses the `ref()`, `source()`, and `config()` macro calls within models to set properties, infer dependencies, and construct your project's DAG. When dbt finishes parsing your project, it stores the internal manifest in a file called `partial_parse.msgpack`.

-Parsing projects can be time-consuming, especially for large projects (for example, a project with hundreds of models and thousands of files). To reduce the time it takes dbt to parse your project, use the partial parsing feature in dbt Cloud for your environment. When enabled, dbt Cloud uses the `partial_parse.msgpack` file to determine which files have changed (if any) since the project was last parsed. Then, instead of parsing all project files, it _only_ parses the changed files or the files related to those changes.
+Parsing projects can be time-consuming, especially for large projects with hundreds of models and thousands of files. To reduce the time it takes dbt to parse your project, use the partial parsing feature in dbt Cloud for your environment. When enabled, dbt Cloud uses the `partial_parse.msgpack` file to determine which files have changed (if any) since the project was last parsed, and then it parses _only_ the changed files and the files related to those changes.

 Partial parsing in dbt Cloud requires dbt version 1.4 or newer. The feature does have some known limitations. Refer to [Known limitations](/reference/parsing#known-limitations) to learn more about them.
From 2221548f107578c7c86ffe22416e75ca97a463bf Mon Sep 17 00:00:00 2001 From: Ly Nguyen Date: Tue, 2 Jan 2024 10:32:25 -0800 Subject: [PATCH 176/204] Feedback --- .../{74-Dec-2023 => 73-Jan-2024}/partial-parsing.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) rename website/docs/docs/dbt-versions/release-notes/{74-Dec-2023 => 73-Jan-2024}/partial-parsing.md (92%) diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/partial-parsing.md b/website/docs/docs/dbt-versions/release-notes/73-Jan-2024/partial-parsing.md similarity index 92% rename from website/docs/docs/dbt-versions/release-notes/74-Dec-2023/partial-parsing.md rename to website/docs/docs/dbt-versions/release-notes/73-Jan-2024/partial-parsing.md index eb224d5b845..c0236a30783 100644 --- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/partial-parsing.md +++ b/website/docs/docs/dbt-versions/release-notes/73-Jan-2024/partial-parsing.md @@ -3,11 +3,11 @@ title: "New: Native support for partial parsing" description: "December 2023: For faster run times with your dbt invocations, configure dbt Cloud to parse only the changed files in your project." sidebar_label: "New: Native support for partial parsing" sidebar_position: 09 -tags: [Dec-2023] -date: 2023-12-14 +tags: [Jan-2024] +date: 2024-01-03 --- -By default, dbt parses all the files in your project at the beginning of every dbt invocation. Depending on the size of your project, this operation can take a long time to complete. With the new partial parsing feature in dbt Cloud, you can reduce the time it takes for dbt to parse your project. When enabled, dbt Cloud parses only the changed files in your project instead of parsing all the project files. As a result, your dbt invocations will take significantly less time to run. +By default, dbt parses all the files in your project at the beginning of every dbt invocation. Depending on the size of your project, this operation can take a long time to complete. With the new partial parsing feature in dbt Cloud, you can reduce the time it takes for dbt to parse your project. When enabled, dbt Cloud parses only the changed files in your project instead of parsing all the project files. As a result, your dbt invocations will take less time to run. To learn more, refer to [Partial parsing](/docs/deploy/deploy-environments#partial-parsing). From 8307f4b43c1026dc9ffbea5785abd1ea3d6cc95e Mon Sep 17 00:00:00 2001 From: Ly Nguyen Date: Tue, 2 Jan 2024 12:24:12 -0800 Subject: [PATCH 177/204] Fix link --- website/docs/guides/custom-cicd-pipelines.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/guides/custom-cicd-pipelines.md b/website/docs/guides/custom-cicd-pipelines.md index bd6d7617623..39361f2e049 100644 --- a/website/docs/guides/custom-cicd-pipelines.md +++ b/website/docs/guides/custom-cicd-pipelines.md @@ -511,7 +511,7 @@ This section is only for those projects that connect to their git repository usi ::: -The setup for this pipeline will use the same steps as the prior page. Before moving on, **follow steps 1-5 from the [prior page](https://docs.getdbt.com/guides/orchestration/custom-cicd-pipelines/3-dbt-cloud-job-on-merge)** +The setup for this pipeline will use the same steps as the prior page. Before moving on, follow steps 1-5 from the [prior page](/guides/custom-cicd-pipelines?step=2). ### 1. 
Create a pipeline job that runs when PRs are created

From 36843e83af6b0882b3f7596b2bbe54afaa80441c Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Tue, 2 Jan 2024 15:24:55 -0500
Subject: [PATCH 178/204] add note about ref

---
 website/docs/docs/dbt-cloud-apis/sl-api-overview.md |  2 +-
 website/docs/docs/dbt-cloud-apis/sl-graphql.md      |  2 ++
 website/docs/docs/dbt-cloud-apis/sl-jdbc.md         |  2 ++
 website/docs/guides/sl-migration.md                 | 19 ++++++++++++-------
 4 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/website/docs/docs/dbt-cloud-apis/sl-api-overview.md b/website/docs/docs/dbt-cloud-apis/sl-api-overview.md
index 6644d3e4b8b..0ddbc6888db 100644
--- a/website/docs/docs/dbt-cloud-apis/sl-api-overview.md
+++ b/website/docs/docs/dbt-cloud-apis/sl-api-overview.md
@@ -15,7 +15,7 @@ import DeprecationNotice from '/snippets/_sl-deprecation-notice.md';

-The rapid growth of different tools in the modern data stack has helped data professionals address the diverse needs of different teams. The downside of this growth is the fragmentation of business logic across teams, tools, and workloads.
+The rapid growth of different tools in the modern data stack has helped data professionals address the diverse needs of different teams. The downside of this growth is the fragmentation of business logic across teams, tools, and workloads.
 The [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl) allows you to define metrics in code (with [MetricFlow](/docs/build/about-metricflow)) and dynamically generate and query datasets in downstream tools based on their dbt governed assets, such as metrics and models. Integrating with the dbt Semantic Layer will help organizations that use your product make more efficient and trustworthy decisions with their data. It also helps you to avoid duplicative coding, optimize development workflow, ensure data governance, and guarantee consistency for data consumers.

diff --git a/website/docs/docs/dbt-cloud-apis/sl-graphql.md b/website/docs/docs/dbt-cloud-apis/sl-graphql.md
index 3555b211f4f..0898d75762a 100644
--- a/website/docs/docs/dbt-cloud-apis/sl-graphql.md
+++ b/website/docs/docs/dbt-cloud-apis/sl-graphql.md
@@ -26,6 +26,8 @@ The dbt Semantic Layer GraphQL API allows you to explore and query metrics and d

 dbt Partners can use the Semantic Layer GraphQL API to build an integration with the dbt Semantic Layer.

+Note, the dbt Semantic Layer API doesn't support `ref` to call dbt objects. Instead, use the complete qualified table name. If you're using dbt macros at query time to calculate your metrics, you should move those calculations into the Semantic Layer as code.
+
 ## Requirements to use the GraphQL API
 - A dbt Cloud project on dbt v1.6 or higher
 - Metrics are defined and configured

diff --git a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md
index 345be39635e..61ba6bc63c8 100644
--- a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md
+++ b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md
@@ -33,6 +33,8 @@ You *may* be able to use our JDBC API with tools that do not have an official in

 Refer to [Get started with the dbt Semantic Layer](/docs/use-dbt-semantic-layer/quickstart-sl) for more info.

+Note, the dbt Semantic Layer API doesn't support `ref` to call dbt objects. Instead, use the complete qualified table name. If you're using dbt macros at query time to calculate your metrics, you should move those calculations into the Semantic Layer as code.
+
 ## Authentication

 dbt Cloud authorizes requests to the dbt Semantic Layer API. You need to provide an environment ID, host, and [service account tokens](/docs/dbt-cloud-apis/service-tokens).

diff --git a/website/docs/guides/sl-migration.md b/website/docs/guides/sl-migration.md
index 8ede40a6a2d..184f66ec731 100644
--- a/website/docs/guides/sl-migration.md
+++ b/website/docs/guides/sl-migration.md
@@ -25,21 +25,26 @@ dbt Labs recommends completing these steps in a local dev environment (such as t
 1. Create new Semantic Model configs as YAML files in your dbt project.*
 1. Upgrade the metrics configs in your project to the new spec.*
 1. Delete your old metrics file or remove the `.yml` file extension so they're ignored at parse time. Remove the `dbt-metrics` package from your project. Remove any macros that reference `dbt-metrics`, like `metrics.calculate()`. Make sure that any packages you’re using don't have references to the old metrics spec.
-1. Install the CLI with `python -m pip install "dbt-metricflow[your_adapter_name]"`.
+1. Install the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation) to run MetricFlow commands and define your semantic model configurations.
+   - If you're using dbt Core, you can install the [MetricFlow CLI](/docs/build/metricflow-commands) with `python -m pip install "dbt-metricflow[your_adapter_name]"`.
For example:
```bash
python -m pip install "dbt-metricflow[snowflake]"
```
- **Note** - The MetricFlow CLI is not available in the IDE at this time. Support is coming soon.
+ **Note** - MetricFlow commands aren't supported in the dbt Cloud IDE at this time.

-1. Run `dbt parse`. This parses your project and creates a `semantic_manifest.json` file in your target directory. MetricFlow needs this file to query metrics. If you make changes to your configs, you will need to parse your project again.
-1. Run `mf list metrics` to view the metrics in your project.
-1. Test querying a metric by running `mf query --metrics <metric_name> --group-by <dimension_name>`. For example:
+2. Run `dbt parse`. This parses your project and creates a `semantic_manifest.json` file in your target directory. MetricFlow needs this file to query metrics. If you make changes to your configs, you will need to parse your project again.
+3. Run `mf list metrics` to view the metrics in your project.
+4. Test querying a metric by running `mf query --metrics <metric_name> --group-by <dimension_name>`. For example:
```bash
mf query --metrics revenue --group-by metric_time
```
-1. Run `mf validate-configs` to run semantic and warehouse validations. This ensures your configs are valid and the underlying objects exist in your warehouse.
-1. Push these changes to a new branch in your repo.
+5. Run `mf validate-configs` to run semantic and warehouse validations. This ensures your configs are valid and the underlying objects exist in your warehouse.
+6. Push these changes to a new branch in your repo.
+
+:::info `ref` not supported
+The dbt Semantic Layer API doesn't support `ref` to call dbt objects. Instead, use the complete qualified table name. If you're using dbt macros at query time to calculate your metrics, you should move those calculations into the Semantic Layer as code.
+:::

 **To make this process easier, dbt Labs provides a [custom migration tool](https://github.com/dbt-labs/dbt-converter) that automates these steps for you. You can find installation instructions in the [README](https://github.com/dbt-labs/dbt-converter/blob/master/README.md). Derived metrics aren’t supported in the migration tool, and will have to be migrated manually.*

From 42ea5c9d1678ba8a289fc3bf2fda915ec596f575 Mon Sep 17 00:00:00 2001
From: mirnawong1
Date: Tue, 2 Jan 2024 15:28:38 -0500
Subject: [PATCH 179/204] add why

---
 website/docs/guides/sl-migration.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/guides/sl-migration.md b/website/docs/guides/sl-migration.md
index 184f66ec731..69c964882f1 100644
--- a/website/docs/guides/sl-migration.md
+++ b/website/docs/guides/sl-migration.md
@@ -43,7 +43,7 @@ dbt Labs recommends completing these steps in a local dev environment (such as t
 6. Push these changes to a new branch in your repo.

 :::info `ref` not supported
-The dbt Semantic Layer API doesn't support `ref` to call dbt objects. Instead, use the complete qualified table name. If you're using dbt macros at query time to calculate your metrics, you should move those calculations into the Semantic Layer as code.
+The dbt Semantic Layer API doesn't support `ref` to call dbt objects. This is currently due to differences in architecture between the legacy Semantic Layer and the re-released Semantic Layer. Instead, use the complete qualified table name. If you're using dbt macros at query time to calculate your metrics, you should move those calculations into the Semantic Layer as code.
::: **To make this process easier, dbt Labs provides a [custom migration tool](https://github.com/dbt-labs/dbt-converter) that automates these steps for you. You can find installation instructions in the [README](https://github.com/dbt-labs/dbt-converter/blob/master/README.md). Derived metrics aren’t supported in the migration tool, and will have to be migrated manually.* From 255fadc40f06c136ba48a674da6759f550cb3733 Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Tue, 2 Jan 2024 15:29:33 -0500 Subject: [PATCH 180/204] add currently --- website/docs/docs/dbt-cloud-apis/sl-graphql.md | 2 +- website/docs/docs/dbt-cloud-apis/sl-jdbc.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/website/docs/docs/dbt-cloud-apis/sl-graphql.md b/website/docs/docs/dbt-cloud-apis/sl-graphql.md index 0898d75762a..6a92e11d7c0 100644 --- a/website/docs/docs/dbt-cloud-apis/sl-graphql.md +++ b/website/docs/docs/dbt-cloud-apis/sl-graphql.md @@ -26,7 +26,7 @@ The dbt Semantic Layer GraphQL API allows you to explore and query metrics and d dbt Partners can use the Semantic Layer GraphQL API to build an integration with the dbt Semantic Layer. -Note, the dbt Semantic Layer API doesn't support `ref` to call dbt objects. Instead, use the complete qualified table name. If you're using dbt macros at query time to calculate your metrics, you should move those calculations into the Semantic Layer as code. +Note, the dbt Semantic Layer API doesn't currently support `ref` to call dbt objects. Instead, use the complete qualified table name. If you're using dbt macros at query time to calculate your metrics, you should move those calculations into the Semantic Layer as code. ## Requirements to use the GraphQL API - A dbt Cloud project on dbt v1.6 or higher diff --git a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md index 61ba6bc63c8..6ec916b7719 100644 --- a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md +++ b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md @@ -33,7 +33,7 @@ You *may* be able to use our JDBC API with tools that do not have an official in Refer to [Get started with the dbt Semantic Layer](/docs/use-dbt-semantic-layer/quickstart-sl) for more info. -Note, the dbt Semantic Layer API doesn't support `ref` to call dbt objects. Instead, use the complete qualified table name. If you're using dbt macros at query time to calculate your metrics, you should move those calculations into the Semantic Layer as code. +Note, the dbt Semantic Layer API doesn't currently support `ref` to call dbt objects. Instead, use the complete qualified table name. If you're using dbt macros at query time to calculate your metrics, you should move those calculations into the Semantic Layer as code. 
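For reference, here is a minimal sketch of what the new-spec configs produced by this migration might look like. The `orders` model, its `amount` and `ordered_at` columns, and all names below are illustrative assumptions, not part of the guide itself:

```yaml
semantic_models:
  - name: orders
    model: ref('orders')             # assumes an existing orders model
    defaults:
      agg_time_dimension: ordered_at
    entities:
      - name: order_id
        type: primary
    dimensions:
      - name: ordered_at
        type: time
        type_params:
          time_granularity: day
    measures:
      - name: revenue
        agg: sum
        expr: amount                 # assumes an amount column

metrics:
  - name: revenue
    label: Revenue
    type: simple
    type_params:
      measure: revenue
```

With configs like these parsed, the `mf query --metrics revenue --group-by metric_time` example shown earlier should return results.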
## Authentication From 3e1ee9834e51c1170e8828efdbfaf6b3d333d55c Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Tue, 2 Jan 2024 15:32:01 -0500 Subject: [PATCH 181/204] Update website/docs/docs/collaborate/govern/project-dependencies.md --- website/docs/docs/collaborate/govern/project-dependencies.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/collaborate/govern/project-dependencies.md b/website/docs/docs/collaborate/govern/project-dependencies.md index d0f19f96b7c..80dee650698 100644 --- a/website/docs/docs/collaborate/govern/project-dependencies.md +++ b/website/docs/docs/collaborate/govern/project-dependencies.md @@ -11,7 +11,7 @@ keyword: dbt mesh, project dependencies, ref, cross project ref, project depende Project dependencies and cross-project `ref` are features available in [dbt Cloud Enterprise](https://www.getdbt.com/pricing), currently in [Public Preview](/docs/dbt-versions/product-lifecycles#dbt-cloud). -[Enterprise users](https://www.getdbt.com/pricing) can use these features by designating a [public model](/docs/collaborate/govern/model-access) and adding a [cross-project ref](#how-to-write-cross-project-ref). +If you have an [Enterprise account](https://www.getdbt.com/pricing), you can unlock these features by designating a [public model](/docs/collaborate/govern/model-access) and adding a [cross-project ref](#how-to-write-cross-project-ref). ::: For a long time, dbt has supported code reuse and extension by installing other projects as [packages](/docs/build/packages). When you install another project as a package, you are pulling in its full source code, and adding it to your own. This enables you to call macros and run models defined in that other project. From 0ee4eddbe319be72097379420a86f7fad9e8c3fb Mon Sep 17 00:00:00 2001 From: Ly Nguyen Date: Tue, 2 Jan 2024 14:01:47 -0800 Subject: [PATCH 182/204] full URL --- website/docs/guides/custom-cicd-pipelines.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/guides/custom-cicd-pipelines.md b/website/docs/guides/custom-cicd-pipelines.md index 39361f2e049..1778098f752 100644 --- a/website/docs/guides/custom-cicd-pipelines.md +++ b/website/docs/guides/custom-cicd-pipelines.md @@ -511,7 +511,7 @@ This section is only for those projects that connect to their git repository usi ::: -The setup for this pipeline will use the same steps as the prior page. Before moving on, follow steps 1-5 from the [prior page](/guides/custom-cicd-pipelines?step=2). +The setup for this pipeline will use the same steps as the prior page. Before moving on, follow steps 1-5 from the [prior page](https://docs.getdbt.com/guides/custom-cicd-pipelines?step=2). ### 1. Create a pipeline job that runs when PRs are created From c92ea423d52f1b4adb6271ef294f08836a6f4fd0 Mon Sep 17 00:00:00 2001 From: "Leona B. 
Campbell" <3880403+runleonarun@users.noreply.github.com> Date: Tue, 2 Jan 2024 17:06:26 -0800 Subject: [PATCH 183/204] Update constraints.md --- website/docs/reference/resource-properties/constraints.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/resource-properties/constraints.md b/website/docs/reference/resource-properties/constraints.md index 9a5d513d99b..841307c4025 100644 --- a/website/docs/reference/resource-properties/constraints.md +++ b/website/docs/reference/resource-properties/constraints.md @@ -270,7 +270,7 @@ models: - type: check # not supported -- will warn & skip expression: "id > 0" tests: - - unique # primary_key constraint is not enforced + - unique # need this test because primary_key constraint is not enforced - name: customer_name data_type: text - name: first_transaction_date From 311b48741bbdb346c362e06fd02f2fd99001a785 Mon Sep 17 00:00:00 2001 From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com> Date: Tue, 2 Jan 2024 17:23:15 -0800 Subject: [PATCH 184/204] Update pull_request_template.md --- .github/pull_request_template.md | 17 +++++++---------- 1 file changed, 7 insertions(+), 10 deletions(-) diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md index 309872dd818..0534dd916cb 100644 --- a/.github/pull_request_template.md +++ b/.github/pull_request_template.md @@ -1,6 +1,6 @@ ## What are you changing in this pull request and why? @@ -16,11 +16,8 @@ Uncomment if you're publishing docs for a prerelease version of dbt (delete if n - [ ] For [docs versioning](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#about-versioning), review how to [version a whole page](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#adding-a-new-version) and [version a block of content](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#versioning-blocks-of-content). - [ ] Add a checklist item for anything that needs to happen before this PR is merged, such as "needs technical review" or "change base branch." 
-Adding new pages (delete if not applicable): -- [ ] Add page to `website/sidebars.js` -- [ ] Provide a unique filename for the new page - -Removing or renaming existing pages (delete if not applicable): -- [ ] Remove page from `website/sidebars.js` -- [ ] Add an entry `website/static/_redirects` -- [ ] Run link testing locally with `npm run build` to update the links that point to the deleted page +Adding or removing pages (delete if not applicable): +- [ ] Add/remove page in `website/sidebars.js` +- [ ] Provide a unique filename for new pages +- [ ] Add an entry for deleted pages in `website/static/_redirects` +- [ ] Run link testing locally with `npm run build` to update the links that point to deleted pages From 7d00d4db3242b4c035071e39b52f8c1d47bf2390 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Wed, 3 Jan 2024 07:45:55 -0500 Subject: [PATCH 185/204] Update website/docs/docs/dbt-cloud-apis/sl-graphql.md --- website/docs/docs/dbt-cloud-apis/sl-graphql.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-cloud-apis/sl-graphql.md b/website/docs/docs/dbt-cloud-apis/sl-graphql.md index 6a92e11d7c0..56a4ac7ba59 100644 --- a/website/docs/docs/dbt-cloud-apis/sl-graphql.md +++ b/website/docs/docs/dbt-cloud-apis/sl-graphql.md @@ -26,7 +26,7 @@ The dbt Semantic Layer GraphQL API allows you to explore and query metrics and d dbt Partners can use the Semantic Layer GraphQL API to build an integration with the dbt Semantic Layer. -Note, the dbt Semantic Layer API doesn't currently support `ref` to call dbt objects. Instead, use the complete qualified table name. If you're using dbt macros at query time to calculate your metrics, you should move those calculations into the Semantic Layer as code. +Note that the dbt Semantic Layer API doesn't support `ref` to call dbt objects. Instead, use the complete qualified table name. If you're using dbt macros at query time to calculate your metrics, you should move those calculations into your Semantic Layer metric definitions as code. ## Requirements to use the GraphQL API - A dbt Cloud project on dbt v1.6 or higher From 0af829d93d0fae1f27eb80c616f01cddecf68284 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Wed, 3 Jan 2024 07:46:09 -0500 Subject: [PATCH 186/204] Update website/docs/docs/dbt-cloud-apis/sl-jdbc.md --- website/docs/docs/dbt-cloud-apis/sl-jdbc.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md index 6ec916b7719..97f70902c74 100644 --- a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md +++ b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md @@ -33,7 +33,7 @@ You *may* be able to use our JDBC API with tools that do not have an official in Refer to [Get started with the dbt Semantic Layer](/docs/use-dbt-semantic-layer/quickstart-sl) for more info. -Note, the dbt Semantic Layer API doesn't currently support `ref` to call dbt objects. Instead, use the complete qualified table name. If you're using dbt macros at query time to calculate your metrics, you should move those calculations into the Semantic Layer as code. +Note that the dbt Semantic Layer API doesn't support `ref` to call dbt objects. Instead, use the complete qualified table name. If you're using dbt macros at query time to calculate your metrics, you should move those calculations into your Semantic Layer metric definitions as code. 
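To illustrate moving query-time calculations into metric definitions, a value that was previously computed by a macro at query time can often be expressed as a metric instead. A minimal sketch, assuming `revenue` and `order_count` are already defined as metrics:

```yaml
metrics:
  - name: revenue_per_order
    label: Revenue per order
    type: ratio
    type_params:
      numerator: revenue
      denominator: order_count
```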
## Authentication From 53d7c83141a921f59f506d4f3cca5819417c321a Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Wed, 3 Jan 2024 07:46:45 -0500 Subject: [PATCH 187/204] Update website/docs/guides/sl-migration.md --- website/docs/guides/sl-migration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/guides/sl-migration.md b/website/docs/guides/sl-migration.md index 69c964882f1..1e68946c8b3 100644 --- a/website/docs/guides/sl-migration.md +++ b/website/docs/guides/sl-migration.md @@ -43,7 +43,7 @@ dbt Labs recommends completing these steps in a local dev environment (such as t 6. Push these changes to a new branch in your repo. :::info `ref` not supported -The dbt Semantic Layer API doesn't support `ref` to call dbt objects. This is currently due to differences in architecture between the legacy Semantic Layer and the re-released Semantic Layer. Instead, use the complete qualified table name. If you're using dbt macros at query time to calculate your metrics, you should move those calculations into the Semantic Layer as code. +The dbt Semantic Layer API doesn't support `ref` to call dbt objects. This is currently due to differences in architecture between the legacy Semantic Layer and the re-released Semantic Layer. Instead, use the complete qualified table name. If you're using dbt macros at query time to calculate your metrics, you should move those calculations into your Semantic Layer metric definitions as code. ::: **To make this process easier, dbt Labs provides a [custom migration tool](https://github.com/dbt-labs/dbt-converter) that automates these steps for you. You can find installation instructions in the [README](https://github.com/dbt-labs/dbt-converter/blob/master/README.md). Derived metrics aren’t supported in the migration tool, and will have to be migrated manually.* From f540fcbbcd457f89e87b4d79594cb3315ba74852 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Wed, 3 Jan 2024 09:39:24 -0500 Subject: [PATCH 188/204] Update website/docs/guides/sl-migration.md Co-authored-by: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> --- website/docs/guides/sl-migration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/guides/sl-migration.md b/website/docs/guides/sl-migration.md index 1e68946c8b3..afa181646e3 100644 --- a/website/docs/guides/sl-migration.md +++ b/website/docs/guides/sl-migration.md @@ -26,7 +26,7 @@ dbt Labs recommends completing these steps in a local dev environment (such as t 1. Upgrade the metrics configs in your project to the new spec.* 1. Delete your old metrics file or remove the `.yml` file extension so they're ignored at parse time. Remove the `dbt-metrics` package from your project. Remove any macros that reference `dbt-metrics`, like `metrics.calculate()`. Make sure that any packages you’re using don't have references to the old metrics spec. 1. Install the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation) to run MetricFlow commands and define your semantic model configurations. - - If you're using dbt Core, you can install the [MetricFlow CLI](/docs/build/metricflow-commands) with `python -m pip install "dbt-metricflow[your_adapter_name]"`. For example: + - If you're using dbt Core, install the [MetricFlow CLI](/docs/build/metricflow-commands) with `python -m pip install "dbt-metricflow[your_adapter_name]"`. 
For example: ```bash python -m pip install "dbt-metricflow[snowflake]" From c3e07e7dde5cd85271d88c16d81c49076bf31a4a Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Wed, 3 Jan 2024 12:51:17 -0500 Subject: [PATCH 189/204] Update index.js --- website/src/components/detailsToggle/index.js | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/src/components/detailsToggle/index.js b/website/src/components/detailsToggle/index.js index ba53192e54b..076d053846c 100644 --- a/website/src/components/detailsToggle/index.js +++ b/website/src/components/detailsToggle/index.js @@ -40,7 +40,7 @@ useEffect(() => { onMouseLeave={handleMouseLeave} >   - {alt_header} + {alt_header} {/* Visual disclaimer */} Hover to view From cf0e36f1b7031edf994525ccb6278253d54d3959 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Wed, 3 Jan 2024 12:51:27 -0500 Subject: [PATCH 190/204] Update styles.module.css --- website/src/components/detailsToggle/styles.module.css | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/website/src/components/detailsToggle/styles.module.css b/website/src/components/detailsToggle/styles.module.css index 446d3197128..b9c2f09df06 100644 --- a/website/src/components/detailsToggle/styles.module.css +++ b/website/src/components/detailsToggle/styles.module.css @@ -1,9 +1,11 @@ -:local(.link) { +:local(.link) :local(.headerText) { color: var(--ifm-link-color); - transition: background-color 0.3s; /* Smooth transition for background color */ + text-decoration: none; + transition: text-decoration 0.3s; /* Smooth transition */ } -:local(.link:hover), :local(.link:focus) { +:local(.link:hover) :local(.headerText), +:local(.link:focus) :local(.headerText) { text-decoration: underline; cursor: pointer; } @@ -12,6 +14,7 @@ font-size: 0.8em; color: #666; margin-left: 10px; /* Adjust as needed */ + text-decoration: none; } :local(.toggle) { From c6296980b40eb2867fe93bc420004f7a9aa98535 Mon Sep 17 00:00:00 2001 From: mirnawong1 Date: Wed, 3 Jan 2024 16:08:20 -0500 Subject: [PATCH 191/204] remove underline --- website/src/components/faqs/index.js | 2 +- website/src/components/faqs/styles.module.css | 8 ++++++-- 2 files changed, 7 insertions(+), 3 deletions(-) diff --git a/website/src/components/faqs/index.js b/website/src/components/faqs/index.js index 58b59227cfb..0741a29cd89 100644 --- a/website/src/components/faqs/index.js +++ b/website/src/components/faqs/index.js @@ -69,7 +69,7 @@ function FAQ({ path, alt_header = null }) {
-  {alt_header || (fileContent?.meta && fileContent.meta.title)} + {alt_header || (fileContent?.meta && fileContent.meta.title)} Hover to view
diff --git a/website/src/components/faqs/styles.module.css b/website/src/components/faqs/styles.module.css index 9ce7d4d8a40..c179aa85cdc 100644 --- a/website/src/components/faqs/styles.module.css +++ b/website/src/components/faqs/styles.module.css @@ -1,9 +1,12 @@ -:local(.link) { +:local(.link) :local(.headerText) { color: var(--ifm-link-color); + text-decoration: none; + transition: text-decoration 0.3s; /* Smooth transition */ } -:local(.link:hover) { +:local(.link:hover) :local(.headerText), +:local(.link:focus) :local(.headerText) { text-decoration: underline; cursor: pointer; } @@ -28,6 +31,7 @@ font-size: 0.8em; color: #666; margin-left: 10px; /* Adjust as needed */ + text-decoration: none; } :local(.body) { From 390974eb642b0d8ca59f965ba7dc1391896898bc Mon Sep 17 00:00:00 2001 From: Ly Nguyen <107218380+nghi-ly@users.noreply.github.com> Date: Wed, 3 Jan 2024 16:41:31 -0800 Subject: [PATCH 192/204] Remove callout from page (#4704) ## What are you changing in this pull request and why? Remove the callout from the top of this page https://docs.getdbt.com/docs/core/connect-data-platform/dremio-setup For more context, see [slack convo](https://dbt-labs.slack.com/archives/C02NCQ9483C/p1704307434776619) ## Checklist - [x] Review the [Content style guide](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/content-style-guide.md) so my content adheres to these guidelines. - [x] For [docs versioning](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#about-versioning), review how to [version a whole page](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#adding-a-new-version) and [version a block of content](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#versioning-blocks-of-content). --- .../docs/docs/core/connect-data-platform/dremio-setup.md | 6 ------ 1 file changed, 6 deletions(-) diff --git a/website/docs/docs/core/connect-data-platform/dremio-setup.md b/website/docs/docs/core/connect-data-platform/dremio-setup.md index 839dd8cffa8..21d0ee2956b 100644 --- a/website/docs/docs/core/connect-data-platform/dremio-setup.md +++ b/website/docs/docs/core/connect-data-platform/dremio-setup.md @@ -15,12 +15,6 @@ meta: config_page: '/reference/resource-configs/no-configs' --- -:::info Vendor plugin - -Some core functionality may be limited. If you're interested in contributing, check out the source code for each repository listed below. 
- -::: - import SetUpPages from '/snippets/_setup-pages-intro.md'; From d5a6cde13f3c499b3c99995dcb1c3860c8fab0fe Mon Sep 17 00:00:00 2001 From: Emiel Verkade Date: Thu, 4 Jan 2024 14:58:41 +0000 Subject: [PATCH 193/204] Fix typo in `append_new_columns` header --- website/docs/reference/resource-configs/vertica-configs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/resource-configs/vertica-configs.md b/website/docs/reference/resource-configs/vertica-configs.md index 598bc3fecee..90badfe29ad 100644 --- a/website/docs/reference/resource-configs/vertica-configs.md +++ b/website/docs/reference/resource-configs/vertica-configs.md @@ -99,7 +99,7 @@ You can use `on_schema_change` parameter with values `ignore`, `fail` and `appen -#### Configuring the `apppend_new_columns` parameter +#### Configuring the `append_new_columns` parameter Date: Thu, 4 Jan 2024 12:35:10 -0500 Subject: [PATCH 194/204] Add callout to PrivateLink docs about Environment Variables (#4701) ## What are you changing in this pull request and why? This PR adds a callout snippet regarding the limitation in terms of using custom Environment Variables to populate the `hostname` of a PrivateLink endpoint (for the warehouses that support it). This is not supported in dbt Cloud and the suggested workaround is to use Extended Attributes. Instructions were updated for: - Databricks - Redshift - Postgres Instructions were _not_ updated for: - Snowflake - VCS [MUL-512](https://dbtlabs.atlassian.net/browse/MUL-512) ![image](https://github.com/dbt-labs/docs.getdbt.com/assets/4997781/69f207e1-e288-405f-a118-69fa4624674c) ## Checklist - [x] Review the [Content style guide](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/content-style-guide.md) so my content adheres to these guidelines. - [x] For [docs versioning](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#about-versioning), review how to [version a whole page](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#adding-a-new-version) and [version a block of content](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#versioning-blocks-of-content). - [x] Add a checklist item for anything that needs to happen before this PR is merged, such as "needs technical review" or "change base branch." [MUL-512]: https://dbtlabs.atlassian.net/browse/MUL-512?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ --- website/docs/docs/cloud/secure/about-privatelink.md | 5 ++++- website/snippets/_privatelink-hostname-restriction.md | 5 +++++ 2 files changed, 9 insertions(+), 1 deletion(-) create mode 100644 website/snippets/_privatelink-hostname-restriction.md diff --git a/website/docs/docs/cloud/secure/about-privatelink.md b/website/docs/docs/cloud/secure/about-privatelink.md index 2134ab25cfe..731cef3f019 100644 --- a/website/docs/docs/cloud/secure/about-privatelink.md +++ b/website/docs/docs/cloud/secure/about-privatelink.md @@ -6,10 +6,11 @@ sidebar_label: "About PrivateLink" --- import SetUpPages from '/snippets/_available-tiers-privatelink.md'; +import PrivateLinkHostnameWarning from '/snippets/_privatelink-hostname-restriction.md'; -PrivateLink enables a private connection from any dbt Cloud Multi-Tenant environment to your data platform hosted on AWS using [AWS PrivateLink](https://aws.amazon.com/privatelink/) technology. 
PrivateLink allows dbt Cloud customers to meet security and compliance controls as it allows connectivity between dbt Cloud and your data platform without traversing the public internet. This feature is supported in most regions across NA, Europe, and Asia, but [contact us](https://www.getdbt.com/contact/) if you have questions about availability. +PrivateLink enables a private connection from any dbt Cloud Multi-Tenant environment to your data platform hosted on AWS using [AWS PrivateLink](https://aws.amazon.com/privatelink/) technology. PrivateLink allows dbt Cloud customers to meet security and compliance controls as it allows connectivity between dbt Cloud and your data platform without traversing the public internet. This feature is supported in most regions across NA, Europe, and Asia, but [contact us](https://www.getdbt.com/contact/) if you have questions about availability. ### Cross-region PrivateLink @@ -24,3 +25,5 @@ dbt Cloud supports the following data platforms for use with the PrivateLink fea - [Redshift](/docs/cloud/secure/redshift-privatelink) - [Postgres](/docs/cloud/secure/postgres-privatelink) - [VCS](/docs/cloud/secure/vcs-privatelink) + + diff --git a/website/snippets/_privatelink-hostname-restriction.md b/website/snippets/_privatelink-hostname-restriction.md new file mode 100644 index 00000000000..a4bcd318a15 --- /dev/null +++ b/website/snippets/_privatelink-hostname-restriction.md @@ -0,0 +1,5 @@ +:::caution Environment variables + +Using [Environment variables](/docs/build/environment-variables) when configuring PrivateLink endpoints isn't supported in dbt Cloud. Instead, use [Extended Attributes](/docs/deploy/deploy-environments#extended-attributes) to dynamically change these values in your dbt Cloud environment. + +::: From d6881953e3e5f8b2b2e1cf4284c5f9c171944377 Mon Sep 17 00:00:00 2001 From: "Leona B. Campbell" <3880403+runleonarun@users.noreply.github.com> Date: Thu, 4 Jan 2024 10:22:50 -0800 Subject: [PATCH 195/204] Update debug-method to state dev env only (#4710) ## What are you changing in this pull request and why? ## Checklist - [ ] Review the [Content style guide](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/content-style-guide.md) so my content adheres to these guidelines. - [ ] For [docs versioning](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#about-versioning), review how to [version a whole page](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#adding-a-new-version) and [version a block of content](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#versioning-blocks-of-content). - [ ] Add a checklist item for anything that needs to happen before this PR is merged, such as "needs technical review" or "change base branch." 
Adding or removing pages (delete if not applicable): - [ ] Add/remove page in `website/sidebars.js` - [ ] Provide a unique filename for new pages - [ ] Add an entry for deleted pages in `website/static/_redirects` - [ ] Run link testing locally with `npm run build` to update the links that point to deleted pages --- website/docs/reference/dbt-jinja-functions/debug-method.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/website/docs/reference/dbt-jinja-functions/debug-method.md b/website/docs/reference/dbt-jinja-functions/debug-method.md index 0938970b50c..778ad095693 100644 --- a/website/docs/reference/dbt-jinja-functions/debug-method.md +++ b/website/docs/reference/dbt-jinja-functions/debug-method.md @@ -6,9 +6,9 @@ description: "The `{{ debug() }}` macro will open an iPython debugger." --- -:::caution New in v0.14.1 +:::warning Development environment only -The `debug` macro is new in dbt v0.14.1, and is only intended to be used in a development context with dbt. Do not deploy code to production which uses the `debug` macro. +The `debug` macro is only intended to be used in a development context with dbt. Do not deploy code to production that uses the `debug` macro. ::: From c5c377039644aa2648b0a523e0dc1834ea26c8b8 Mon Sep 17 00:00:00 2001 From: Topherhindman Date: Fri, 5 Jan 2024 07:23:40 -0800 Subject: [PATCH 196/204] update link to materializations source code (#4707) ## What are you changing in this pull request and why? This PR updates various links after a [PR in dbt Core](https://github.com/dbt-labs/dbt-core/pull/8906) moved some files. Additionally, it updates some of the links to point to the correct line of code. ## Checklist - [x] Review the [Content style guide](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/content-style-guide.md) so my content adheres to these guidelines. - [ ] For [docs versioning](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#about-versioning), review how to [version a whole page](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#adding-a-new-version) and [version a block of content](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#versioning-blocks-of-content). - [ ] Add a checklist item for anything that needs to happen before this PR is merged, such as "needs technical review" or "change base branch." 
Adding or removing pages (delete if not applicable): - [ ] Add/remove page in `website/sidebars.js` - [ ] Provide a unique filename for new pages - [ ] Add an entry for deleted pages in `website/static/_redirects` - [ ] Run link testing locally with `npm run build` to update the links that point to deleted pages Co-authored-by: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> --- website/docs/guides/create-new-materializations.md | 2 +- website/docs/reference/resource-configs/full_refresh.md | 2 +- website/docs/reference/resource-configs/store_failures.md | 2 +- website/docs/reference/resource-configs/strategy.md | 4 ++-- website/docs/reference/resource-configs/where.md | 2 +- 5 files changed, 6 insertions(+), 6 deletions(-) diff --git a/website/docs/guides/create-new-materializations.md b/website/docs/guides/create-new-materializations.md index af2732c0c39..52a8594b0d2 100644 --- a/website/docs/guides/create-new-materializations.md +++ b/website/docs/guides/create-new-materializations.md @@ -13,7 +13,7 @@ recently_updated: true ## Introduction -The model materializations you're familiar with, `table`, `view`, and `incremental` are implemented as macros in a package that's distributed along with dbt. You can check out the [source code for these materializations](https://github.com/dbt-labs/dbt-core/tree/main/core/dbt/include/global_project/macros/materializations). If you need to create your own materializations, reading these files is a good place to start. Continue reading below for a deep-dive into dbt materializations. +The model materializations you're familiar with, `table`, `view`, and `incremental` are implemented as macros in a package that's distributed along with dbt. You can check out the [source code for these materializations](https://github.com/dbt-labs/dbt-core/tree/main/core/dbt/adapters/include/global_project/macros/materializations). If you need to create your own materializations, reading these files is a good place to start. Continue reading below for a deep-dive into dbt materializations. :::caution diff --git a/website/docs/reference/resource-configs/full_refresh.md b/website/docs/reference/resource-configs/full_refresh.md index f75fe3a583b..c7f1b799087 100644 --- a/website/docs/reference/resource-configs/full_refresh.md +++ b/website/docs/reference/resource-configs/full_refresh.md @@ -74,7 +74,7 @@ Optionally set a resource to always or never full-refresh. -This logic is encoded in the [`should_full_refresh()`](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/include/global_project/macros/materializations/configs.sql#L6) macro. +This logic is encoded in the [`should_full_refresh()`](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/adapters/include/global_project/macros/materializations/configs.sql#L6) macro. ## Usage diff --git a/website/docs/reference/resource-configs/store_failures.md b/website/docs/reference/resource-configs/store_failures.md index 2c596d1cf3e..8a83809152b 100644 --- a/website/docs/reference/resource-configs/store_failures.md +++ b/website/docs/reference/resource-configs/store_failures.md @@ -12,7 +12,7 @@ Optionally set a test to always or never store its failures in the database. - If the `store_failures` config is `none` or omitted, the resource will use the value of the `--store-failures` flag. - When true, `store_failures` save all the record(s) that failed the test only if [limit](/reference/resource-configs/limit) is not set or if there are fewer records than the limit. 
`store_failures` are saved in a new table with the name of the test. By default, `store_failures` use a schema named `dbt_test__audit`, but, you can [configure](/reference/resource-configs/schema#tests) the schema to a different value. -This logic is encoded in the [`should_store_failures()`](https://github.com/dbt-labs/dbt-core/blob/98c015b7754779793e44e056905614296c6e4527/core/dbt/include/global_project/macros/materializations/helpers.sql#L77) macro. +This logic is encoded in the [`should_store_failures()`](https://github.com/dbt-labs/dbt-core/blob/77632122974b28967221758b4a470d7dfb608ac2/core/dbt/adapters/include/global_project/macros/materializations/configs.sql#L15) macro. diff --git a/website/docs/reference/resource-configs/strategy.md b/website/docs/reference/resource-configs/strategy.md index 3cef8b0df51..2bfcf0a94e4 100644 --- a/website/docs/reference/resource-configs/strategy.md +++ b/website/docs/reference/resource-configs/strategy.md @@ -132,8 +132,8 @@ This is a **required configuration**. There is no default value. ### Advanced: define and use custom snapshot strategy Behind the scenes, snapshot strategies are implemented as macros, named `snapshot__strategy` -* [Source code](https://github.com/dbt-labs/dbt-core/blob/HEAD/core/dbt/include/global_project/macros/materializations/snapshots/strategies.sql#L65) for the timestamp strategy -* [Source code](https://github.com/dbt-labs/dbt-core/blob/HEAD/core/dbt/include/global_project/macros/materializations/snapshots/strategies.sql#L131) for the check strategy +* [Source code](https://github.com/dbt-labs/dbt-core/blob/HEAD/core/dbt/adapters/include/global_project/macros/materializations/snapshots/strategies.sql#L52) for the timestamp strategy +* [Source code](https://github.com/dbt-labs/dbt-core/blob/HEAD/core/dbt/adapters/include/global_project/macros/materializations/snapshots/strategies.sql#L136) for the check strategy It's possible to implement your own snapshot strategy by adding a macro with the same naming pattern to your project. For example, you might choose to create a strategy which records hard deletes, named `timestamp_with_deletes`. diff --git a/website/docs/reference/resource-configs/where.md b/website/docs/reference/resource-configs/where.md index dbb3b66e901..fe83c22847d 100644 --- a/website/docs/reference/resource-configs/where.md +++ b/website/docs/reference/resource-configs/where.md @@ -122,7 +122,7 @@ tests: The rendering context for the `where` config is the same as for all configurations defined in `.yml` files. You have access to `{{ var() }}` and `{{ env_var() }}`, but you **do not** have access to custom macros for setting this config. If you do want to use custom macros to template out the `where` filter for certain tests, there is a workaround. -As of v0.21, dbt defines a [`get_where_subquery` macro](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/include/global_project/macros/materializations/tests/where_subquery.sql). +As of v0.21, dbt defines a [`get_where_subquery` macro](https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/adapters/include/global_project/macros/materializations/tests/where_subquery.sql). dbt replaces `{{ model }}` in generic test definitions with `{{ get_where_subquery(relation) }}`, where `relation` is a `ref()` or `source()` for the resource being tested. 
The default implementation of this macro returns: - `{{ relation }}` when the `where` config is not defined (`ref()` or `source()`) From ca6b6ca8da116f79f01bf62d9082074095ea5aba Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Fri, 5 Jan 2024 18:46:14 -0500 Subject: [PATCH 197/204] add dbt mesh faq page (#4702) this PR adds the [dbt mesh faqs](https://www.notion.so/dbtlabs/WIP-dbt-Mesh-Customer-FAQ-bdadabc9917a4be697db53142c6ff979#073b11b13ffd47c28c145e1ba386f4b8) that our team are seeing users frequently asked. having a public facing page will help users address those questions directly. --------- Co-authored-by: Leona B. Campbell <3880403+runleonarun@users.noreply.github.com> Co-authored-by: Matt Shaver <60105315+matthewshaver@users.noreply.github.com> Co-authored-by: azzam34 <86269359+azzam34@users.noreply.github.com> --- .../how-we-mesh/mesh-1-intro.md | 2 + .../best-practices/how-we-mesh/mesh-4-faqs.md | 317 ++++++++++++++++++ website/sidebars.js | 1 + 3 files changed, 320 insertions(+) create mode 100644 website/docs/best-practices/how-we-mesh/mesh-4-faqs.md diff --git a/website/docs/best-practices/how-we-mesh/mesh-1-intro.md b/website/docs/best-practices/how-we-mesh/mesh-1-intro.md index 0f27e64c447..fcd379de9cf 100644 --- a/website/docs/best-practices/how-we-mesh/mesh-1-intro.md +++ b/website/docs/best-practices/how-we-mesh/mesh-1-intro.md @@ -32,6 +32,8 @@ dbt Cloud is designed to coordinate the features above and simplify the complexi If you're just starting your dbt journey, don't worry about building a multi-project architecture right away. You can _incrementally_ adopt the features in this guide as you scale. The collection of features work effectively as independent tools. Familiarizing yourself with the tooling and features that make up a multi-project architecture, and how they can apply to your organization will help you make better decisions as you grow. +For additional information, refer to the [dbt Mesh FAQs](/best-practices/how-we-mesh/mesh-4-faqs). + ## Learning goals - Understand the **purpose and tradeoffs** of building a multi-project architecture. diff --git a/website/docs/best-practices/how-we-mesh/mesh-4-faqs.md b/website/docs/best-practices/how-we-mesh/mesh-4-faqs.md new file mode 100644 index 00000000000..7119a3d90bd --- /dev/null +++ b/website/docs/best-practices/how-we-mesh/mesh-4-faqs.md @@ -0,0 +1,317 @@ +--- +title: "dbt Mesh FAQs" +description: "Read the FAQs to learn more about dbt Mesh, how it works, compatibility, and more." +hoverSnippet: "dbt Mesh FAQs" +sidebar_label: "dbt Mesh FAQs" +--- + +dbt Mesh is a new architecture enabled by dbt Cloud. It allows you to better manage complexity by deploying multiple interconnected dbt projects instead of a single large, monolithic project. It’s designed to accelerate development, without compromising governance. + +## Overview of Mesh + + + +Here are some benefits of implementing dbt Mesh: + +* **Ship data products faster**: With a more modular architecture, teams can make changes rapidly and independently in specific areas without impacting the entire system, leading to faster development cycles. +* **Improve trust in data:** Adopting dbt Mesh helps ensure that changes in one domain's data models do not unexpectedly break dependencies in other domain areas, leading to a more secure and predictable data environment. 
+* **Reduce complexity**: By organizing transformation logic into distinct domains, dbt Mesh reduces the complexity inherent in large, monolithic projects, making them easier to manage and understand. +* **Improve collaboration**: Teams are able to share and build upon each other's work without duplicating efforts. + +Most importantly, all this can be accomplished without the central data team losing the ability to see lineage across the entire organization, or compromising on governance mechanisms. + + + + + +dbt [model contracts](/docs/collaborate/govern/model-contracts) serve as a governance tool enabling the definition and enforcement of data structure standards in your dbt models. They allow you to specify and uphold data model guarantees, including column data types, allowing for the stability of dependent models. Should a model fail to adhere to its established contracts, it will not build successfully. + + + + + +dbt [model versions](https://docs.getdbt.com/docs/collaborate/govern/model-versions) are iterations of your dbt models made over time. In many cases, you might knowingly choose to change a model’s structure in a way that “breaks” the previous model contract, and may break downstream queries depending on that model’s structure. When you do so, creating a new version of the model is useful to signify this change. + +You can use model versions to: + +- Test "prerelease" changes (in production, in downstream systems). +- Bump the latest version, to be used as the canonical "source of truth." +- Offer a migration window off the "old" version. + + + + + +A [model access modifier](/docs/collaborate/govern/model-access) in dbt determines if a model is accessible as an input to other dbt models and projects. It specifies where a model can be referenced using [the `ref` function](/reference/dbt-jinja-functions/ref). There are three types of access modifiers: + +1. **Private:** A model with a private access modifier is only referenceable by models within the same group. This is intended for models that are implementation details and are meant to be used only within a specific group of related models. +2. **Protected:** Models with a protected access modifier can be referenced by any other model within the same dbt project or when the project is installed as a package. This is the default setting for all models, ensuring backward compatibility, especially when groups are assigned to an existing set of models. +3. **Public:** A public model can be referenced across different groups, packages, or projects. This is suitable for stable and mature models that serve as interfaces for other teams or projects. + + + + + +A [model group](/docs/collaborate/govern/model-access#groups) in dbt is a concept used to organize models under a common category or ownership. This categorization can be based on various criteria, such as the team responsible for the models or the specific data source they model. + + + + + +This is a new way of working, and the intentionality required to build, and then maintain, cross-project interfaces and dependencies may feel like a slowdown versus what some developers are used to. The intentional friction introduced promotes thoughtful changes, fostering a mindset that values stability and systematic adjustments over rapid transformations. + +Orchestration across multiple projects is also likely to be slightly more challenging for many organizations, although we’re currently developing new functionality that will make this process simpler. 
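To make the access modifiers and groups described above concrete, here is a minimal sketch of how they fit together in a properties file. The group, owner, and model names are illustrative:

```yaml
groups:
  - name: finance
    owner:
      name: Finance team
      email: finance@example.com   # placeholder contact

models:
  - name: stg_payments
    access: private    # only models in the finance group can ref this
    group: finance
  - name: fct_revenue
    access: public     # models in other groups and projects can ref this
    group: finance
```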
dbt Mesh allows you to better _operationalize_ data mesh by enabling decentralized, domain-specific data ownership and collaboration.

In data mesh, each business domain is responsible for its data as a product. This is the same goal that dbt Mesh facilitates by enabling organizations to break down large, monolithic data projects into smaller, domain-specific dbt projects. Each team or domain can independently develop, maintain, and share its data models, fostering a decentralized data environment.

dbt Mesh also enhances the interoperability and reusability of data across different domains, a key aspect of the data mesh philosophy. By allowing cross-project references and shared governance through model contracts and access controls, dbt Mesh ensures that while data ownership is decentralized, there is still a governed structure to the overall data architecture.

## How dbt Mesh works

Like resource dependencies, project dependencies are acyclic, meaning they only move in one direction. This prevents `ref` cycles (or loops). For example, if project B depends on project A, a new model in project A could not import and use a public model from project B. Refer to [Project dependencies](/docs/collaborate/govern/project-dependencies#how-to-use-ref) for more information.

While it’s not currently possible to share sources across projects, it would be possible to have a shared foundational project, with staging models on top of those sources, exposed as “public” models to other teams/projects.

This would be a breaking change for downstream consumers of that model. If the maintainers of the upstream project wish to remove the model (or “downgrade” its access modifier, effectively the same thing), they should mark that model for deprecation (using [deprecation_date](/reference/resource-properties/deprecation_date)), which will deliver a warning to all downstream consumers of that model.

In the future, we plan for dbt Cloud to also be able to proactively flag this scenario in [continuous integration](/docs/deploy/continuous-integration) for the maintainers of the upstream public model.

No, unless downstream projects are installed as [packages](/docs/build/packages) (source code). In that case, the models in the project installed as a package become “your” models, and you can select or run them. There are cases in which this can be desirable; see docs on [project dependencies](/docs/collaborate/govern/project-dependencies).

Yes, as long as they’re in the same data platform (BigQuery, Databricks, Redshift, Snowflake, etc.) and you have configured permissions and sharing in that data platform provider to allow this.

Yes, because the cross-project collaboration is done using the `{{ ref() }}` macro, you can use those models from other teams in [singular tests](/docs/build/data-tests#singular-data-tests).

Each team defines their connection to the data warehouse, and the default schema names for dbt to use when materializing datasets.

By default, each project belonging to a team will create:

- One schema for production runs (for example, `finance`).
- One schema per developer (for example, `dev_jerco`).

Depending on each team’s needs, this can be customized with model-level [schema configurations](/docs/build/custom-schemas), including the ability to define different rules by environment.

No, contracts can only be applied at the [model level](/docs/collaborate/govern/model-contracts).
It is a recommended best practice to [define staging models](/best-practices/how-we-structure/2-staging) on top of sources, and it is possible to define contracts on top of those staging models.

No. A contract applies to an entire model, including all columns in the model’s output. This is the same set of columns that a consumer would see when viewing the model’s details in Explorer, or when querying the model in the data platform.

- If you wish to contract only a subset of columns, you can create a separate model (materialized as a view) selecting only that subset.
- If you wish to limit which rows or columns a downstream consumer can see when they query the model’s data, depending on who they are, some data platforms offer advanced capabilities around dynamic row-level access and column-level data masking.

No, a [group](/docs/collaborate/govern/model-access#groups) can only be assigned to a single owner. However, the assigned owner can be a _team_, rather than an individual.

Not directly, but contracts are [assigned to models](/docs/collaborate/govern/model-contracts) and models can be assigned to individual owners. You can use meta fields for this purpose.

This is not currently possible, but something we hope to enable in the near future. If you’re interested in this functionality, please reach out to your dbt Labs account team.

dbt Cloud will soon offer the capability to trigger jobs on the completion of another job, including a job in a different project. This offers one mechanism for executing a pipeline from start to finish across projects.

Yes. In addition to being viewable natively through [dbt Explorer](https://www.getdbt.com/product/dbt-explorer), it is possible to view cross-project lineage using partner integrations with data cataloging tools. For a list of available dbt Cloud integrations, refer to the [Integrations page](https://www.getdbt.com/product/integrations).

Tests and model contracts in dbt help eliminate the need to restate data in the first place. With these tools, you can incorporate checks at the source and output layers of your dbt projects to assess data quality in the most critical places. When there are changes in transformation logic (for example, the definition of a particular column is changed), restating the data is as easy as merging the updated code and running a dbt Cloud job.

If a data quality issue does slip through, you also have the option of simply rolling back the git commit, and then re-running the dbt Cloud job with the old code.

Yes, all of this metadata is accessible via the [dbt Cloud Admin API](/docs/dbt-cloud-apis/admin-cloud-api). This metadata can be fed into a monitoring tool, or used to create reports and dashboards.

We also expose some of this information in dbt Cloud itself in [jobs](/docs/deploy/jobs), [environments](/docs/environments-in-dbt) and in [dbt Explorer](https://www.getdbt.com/product/dbt-explorer).

## Permissions and access

The existence of projects that have at least one public model will be visible to everyone in the organization with [read-only access](/docs/cloud/manage-access/seats-and-users).

Private or protected models require a user to have read-only access on the specific project in order to see its existence.

There’s model-level access within dbt, role-based access for users and groups in dbt Cloud, and access to the underlying data within the data platform.
First things first: access to underlying data is always defined and enforced by the underlying data platform (for example, BigQuery, Databricks, Redshift, Snowflake, Starburst, etc.). This access is managed by executing “DCL statements” (namely `grant`). dbt makes it easy to [configure `grants` on models](/reference/resource-configs/grants), which provision data access for other roles/users/groups in the data warehouse. However, dbt does _not_ automatically define or coordinate those grants unless they are configured explicitly. Refer to your organization's system for managing data warehouse permissions.

[dbt Cloud Enterprise plans](https://www.getdbt.com/pricing) support [role-based access control (RBAC)](/docs/cloud/manage-access/enterprise-permissions#how-to-set-up-rbac-groups-in-dbt-cloud) that manages granular permissions for users and user groups. You can control which users can see or edit all aspects of a dbt Cloud project. A user’s access to dbt Cloud projects also determines whether they can “explore” that project in detail. Roles, users, and groups are defined within the dbt Cloud application via the UI or by integrating with an identity provider.

[Model access](/docs/collaborate/govern/model-access) defines where models can be referenced. It also informs the discoverability of those models within dbt Explorer. Model `access` is defined in code, just like any other model configuration (`materialized`, `tags`, etc.).

**Public:** Models with `public` access can be referenced everywhere. These are the “data products” of your organization.

**Protected:** Models with `protected` access can only be referenced within the same project. This is the default level of model access.
We are discussing a future extension to `protected` models to allow for their reference in _specific_ downstream projects. Please read [the GitHub issue](https://github.com/dbt-labs/dbt-core/issues/9340), and upvote/comment if you’re interested in this use case.

**Private:** Model `groups` enable more-granular control over where `private` models can be referenced. By defining a group, and configuring models to belong to that group, you can restrict other models (not in the same group) from referencing any `private` models the group contains. Groups also provide a standard mechanism for defining the `owner` of all resources a group contains.

Within dbt Explorer, `public` models are discoverable for every user in the dbt Cloud account — every public model is listed in the “multi-project” view. By contrast, `protected` and `private` models in a project are visible only to users who have access to that project (including read-only access).

Because dbt does not implicitly coordinate data warehouse `grants` with model-level `access`, it is possible for there to be a mismatch between them. For example, a `public` model’s metadata is viewable to all dbt Cloud users, and anyone can write a `ref` to that model, but when they actually run or preview, they realize they do not have access to the underlying data in the data warehouse. **This is intentional.** In this way, your organization can retain least-privileged access to underlying data, while providing visibility and discoverability for the wider organization. Armed with the knowledge of which other “data products” (public models) exist — their descriptions, their ownership, which columns they contain — an analyst on another team can prepare a well-informed request for access to the underlying data.

Not currently!
But this is something we may evaluate for the future. + + + + + +Yes! As long as a user has permissions (at least read-only access) on all projects in a dbt Cloud account, they can navigate across the entirety of the organization’s DAG in dbt Explorer, and see models at all levels of detail. + + + + + +By default, cross-project references resolve to the “Production” deployment environment of the upstream project. If your organization has genuinely different data in production versus non-production environments, this poses an issue. + +For this reason, we will soon roll out a new canonical type of deployment environment: “Staging.” If a project defines both a “Production” environment and a “Staging” environment, then cross-project references from development and “Staging” environments will resolve to “Staging,” whereas only references coming from “Production” environments will resolve to “Production.” In this way, you are guaranteed separation of data environments, without needing to duplicate project configurations. + +If you’re interested in beta access to “Staging” environments, let your dbt Labs account representative know! + + + +## Compatibility with other features + + + +The [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl) and dbt Mesh are complementary mechanisms enabled by dbt Cloud that work together to enhance the management, usability, and governance of data in large-scale data environments. + +The Semantic Layer in dbt Cloud allows teams to centrally define business metrics and dimensions. It ensures consistent and reliable metric definitions across various analytics tools and platforms. + +dbt Mesh enables organizations to split their data architecture into multiple domain-specific projects, while retaining the ability to reference “public” models across projects. It is also possible to reference a “public” model from another project for the purpose of defining semantic models and metrics. Your organization can have multiple dbt projects feed into a unified semantic layer, ensuring that metrics and dimensions are consistently defined and understood across these domains. + + + + + +**[dbt Explorer](/docs/collaborate/explore-projects)** is a tool within dbt Cloud that serves as a knowledge base and lineage visualization platform. It provides a comprehensive view of your dbt assets, including models, tests, sources, and their interdependencies. + +Used in conjunction with dbt Mesh, dbt Explorer becomes a powerful tool for visualizing and understanding the relationships and dependencies between models across multiple dbt projects. + + + + + +The [dbt Cloud CLI](/docs/cloud/cloud-cli-installation) allows users to develop and run dbt commands from their preferred development environments, like VS Code, Sublime Text, or terminal interfaces. This flexibility is particularly beneficial in a dbt Mesh setup, where managing multiple projects can be complex. Developers can work in their preferred tools while leveraging the centralized capabilities of dbt Cloud. + + + +## Availability + + + +Yes, your account must be on [at least dbt v1.6](/docs/dbt-versions/upgrade-core-in-cloud) to take advantage of [cross-project dependencies](/docs/collaborate/govern/project-dependencies), one of the most crucial underlying capabilities required to implement a dbt Mesh. 
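As a minimal sketch of how a cross-project dependency is declared in practice, the downstream project lists the upstream project in its `dependencies.yml` (the project name here is an illustrative assumption):

```yaml
projects:
  - name: jaffle_finance
```

A model in the downstream project can then reference a public model from that project with a two-argument `ref`, such as `{{ ref('jaffle_finance', 'fct_revenue') }}`.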
+
+
+
+
+While dbt Core defines several of the foundational elements for dbt Mesh, dbt Cloud offers an enhanced experience that leverages these elements for scaled collaboration across multiple teams, facilitated by multi-project discovery in dbt Explorer that’s tailored to each user’s access.
+
+Several key components that underpin the dbt Mesh pattern, including model contracts, versions, and access modifiers, are defined and implemented in dbt Core. We believe these are components of the core language, which is why their implementations are open source. We want to define a standard pattern that analytics engineers everywhere can adopt, extend, and help us improve.
+
+To reference models defined in another project, users can also leverage [packages](/docs/build/packages), a longstanding feature of dbt Core. When you import an upstream project as a package, dbt imports all models defined in that project, which enables the resolution of cross-project references to those models. Imported models can be [optionally restricted](/docs/collaborate/govern/model-access#how-do-i-restrict-access-to-models-defined-in-a-package) to just the models with `public` access.
+
+The major distinction comes with dbt Cloud's metadata service, which is unique to the dbt Cloud platform and allows references to resolve to only the public models in a project. This service enables users to take dependencies on upstream projects and reference just their `public` models, *without* needing to load the full complexity of those upstream projects into their local development environment.
+
+
+
+
+Yes, a [dbt Cloud Enterprise](https://www.getdbt.com/pricing) plan is required to set up multiple projects and reference models across them.
+
+
+## Tips on implementing dbt Mesh
+
+
+Refer to our developer guide on [How we structure our dbt Mesh projects](https://docs.getdbt.com/best-practices/how-we-mesh/mesh-1-intro). You may also be interested in watching the recording of this talk from Coalesce 2023: [Unlocking model governance and multi-project deployments with dbt-meshify](https://www.youtube.com/watch?v=FAsY0Qx8EyU).
+
+
+
+
+`dbt-meshify` is a [CLI tool](https://github.com/dbt-labs/dbt-meshify) that automates the creation of the model governance and cross-project lineage features introduced in dbt-core v1.5 and v1.6. The package leverages your dbt project metadata to create or edit the files in your project so that your models are properly configured with these features.
+
+
+
+Let’s say your organization has fewer than 500 models and fewer than a dozen regular contributors to dbt. You're operating at a scale well served by the monolith (a single project), and the larger pattern of dbt Mesh probably won't provide any immediate benefits.
+
+It’s never too early to think about how you’re organizing models _within_ that project. Use model `groups` to define clear ownership boundaries and `private` access to restrict purpose-built models from becoming load-bearing blocks in an unrelated section of the DAG. Your future selves will thank you for defining these interfaces, especially if you reach a scale where it makes sense to “graduate” the interfaces between `groups` into boundaries between projects.
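+
+As a brief, hypothetical sketch of that advice (the file path and all names are invented for illustration), groups, access levels, and the `grants` config discussed earlier are declared in ordinary YAML:
+
+```yml
+# models/finance/_finance__models.yml (hypothetical)
+groups:
+  - name: finance
+    owner:
+      name: Finance Team
+      email: finance@example.com
+
+models:
+  - name: fct_monthly_revenue
+    access: public # a "data product" other teams may reference
+    config:
+      grants:
+        select: ["bi_reporting"] # warehouse access is still granted explicitly
+  - name: int_payments_pivoted
+    group: finance
+    access: private # only models in the `finance` group can ref this
+```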
+ + diff --git a/website/sidebars.js b/website/sidebars.js index 6bb630037c1..27bcd1147a3 100644 --- a/website/sidebars.js +++ b/website/sidebars.js @@ -1046,6 +1046,7 @@ const sidebarSettings = { items: [ "best-practices/how-we-mesh/mesh-2-structures", "best-practices/how-we-mesh/mesh-3-implementation", + "best-practices/how-we-mesh/mesh-4-faqs", ], }, { From 084c9beea2510d4e44b86133f84d76e86f6845e8 Mon Sep 17 00:00:00 2001 From: Jeremy Yeo Date: Mon, 8 Jan 2024 11:45:26 +1300 Subject: [PATCH 198/204] Update lint-format.md --- website/docs/docs/cloud/dbt-cloud-ide/lint-format.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md b/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md index 733ec9dbcfe..39d08731814 100644 --- a/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md +++ b/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md @@ -233,6 +233,11 @@ Make sure you're on a development branch. Formatting or Linting isn't available — If your lint operation passes despite clear rule violations, confirm you're not linting models with ephemeral models. Linting doesn't support ephemeral models in dbt v1.5 and lower. +
+What are some other known limitations with dbt Cloud linting? +— Currently, the dbt Cloud IDE can only lint or fix files up to a certain size and complexity. If you attempt to lint or fix files that are too large (which would take more than 60 seconds for the dbt Cloud backend to process), you will see an 'Unable to complete linting this file' error. Please break up your model into smaller models (files) so that they are less complex to lint or fix. +
+ ## Related docs - [User interface](/docs/cloud/dbt-cloud-ide/ide-user-interface) From dfbd33abe147d2171c7eb57be0ec56e54df1d042 Mon Sep 17 00:00:00 2001 From: Jeremy Yeo Date: Mon, 8 Jan 2024 11:49:22 +1300 Subject: [PATCH 199/204] Update lint-format.md --- website/docs/docs/cloud/dbt-cloud-ide/lint-format.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md b/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md index 39d08731814..e6e7ffb510b 100644 --- a/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md +++ b/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md @@ -235,7 +235,7 @@ Make sure you're on a development branch. Formatting or Linting isn't available
What are some other known limitations with dbt Cloud linting? -— Currently, the dbt Cloud IDE can only lint or fix files up to a certain size and complexity. If you attempt to lint or fix files that are too large (which would take more than 60 seconds for the dbt Cloud backend to process), you will see an 'Unable to complete linting this file' error. Please break up your model into smaller models (files) so that they are less complex to lint or fix. +— Currently, the dbt Cloud IDE can only lint or fix files up to a certain size and complexity. If you attempt to lint or fix files that are too large (which would take more than 60 seconds for the dbt Cloud backend to process), you will see an 'Unable to complete linting this file' error. Please break up your model into smaller models (files) so that they are less complex to lint or fix. Note that linting is less complex than fixing so you may run into a scenario where a file can be linted but not fixed.
## Related docs From 08ec779537e4c25a01e1c8d068bd756a8817307f Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Mon, 8 Jan 2024 10:08:49 +0000 Subject: [PATCH 200/204] Update lint-format.md turn to toggle --- .../docs/cloud/dbt-cloud-ide/lint-format.md | 39 ++++++++----------- 1 file changed, 17 insertions(+), 22 deletions(-) diff --git a/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md b/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md index e6e7ffb510b..4c565596804 100644 --- a/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md +++ b/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md @@ -14,7 +14,7 @@ Linters analyze code for errors, bugs, and style issues, while formatters fix st -In the dbt Cloud IDE, you have the capability to perform linting, auto-fix, and formatting on five different file types: +In the dbt Cloud IDE, you can perform linting, auto-fix, and formatting on five different file types: - SQL — [Lint](#lint) and fix with SQLFluff, and [format](#format) with sqlfmt - YAML, Markdown, and JSON — Format with Prettier @@ -146,7 +146,7 @@ The Cloud IDE formatting integrations take care of manual tasks like code format To format your SQL code, dbt Cloud integrates with [sqlfmt](http://sqlfmt.com/), which is an uncompromising SQL query formatter that provides one way to format the SQL query and Jinja. -By default, the IDE uses sqlfmt rules to format your code, making the **Format** button available and convenient to use right away. However, if you have a file named .sqlfluff in the root directory of your dbt project, the IDE will default to SQLFluff rules instead. +By default, the IDE uses sqlfmt rules to format your code, making the **Format** button available and convenient to use immediately. However, if you have a file named .sqlfluff in the root directory of your dbt project, the IDE will default to SQLFluff rules instead. To enable sqlfmt: @@ -189,10 +189,8 @@ To format your Python code, dbt Cloud integrates with [Black](https://black.read ## FAQs -
-When should I use SQLFluff and when should I use sqlfmt? - -SQLFluff and sqlfmt are both tools used for formatting SQL code, but there are some differences that may make one preferable to the other depending on your use case.
+ +SQLFluff and sqlfmt are both tools used for formatting SQL code, but some differences may make one preferable to the other depending on your use case.
 SQLFluff is a SQL code linter and formatter. This means that it analyzes your code to identify potential issues and bugs, and checks that it follows coding standards. It also formats your code according to a set of rules, which are [customizable](#customize-linting), to ensure consistent coding practices. You can also use SQLFluff to keep your SQL code well-formatted and follow styling best practices.
@@ -204,19 +202,17 @@ You can use either SQLFluff or sqlfmt depending on your preference and what work - Use sqlfmt to only have your code well-formatted without analyzing it for errors and bugs. You can use sqlfmt out of the box, making it convenient to use right away without having to configure it. -
+ -
-Can I nest .sqlfluff files?
+
To ensure optimal code quality and consistent SQL styles, it's highly recommended that you have one main `.sqlfluff` configuration file in the root folder of your project. Having multiple files can result in inconsistent SQL styles across your project.

However, you can customize and include an additional child `.sqlfluff` configuration file within specific subfolders of your dbt project.

By nesting a `.sqlfluff` file in a subfolder, SQLFluff will apply the rules defined in that subfolder's configuration file to any files located within it. The rules specified in the parent `.sqlfluff` file will be used for all other files and folders outside of the subfolder. This hierarchical approach allows for tailored linting rules while maintaining consistency throughout your project. Refer to [SQLFluff documentation](https://docs.sqlfluff.com/en/stable/configuration.html#configuration-files) for more info. -
+ -
-Can I run SQLFluff commands from the terminal? + Currently, running SQLFluff commands from the terminal isn't supported.
@@ -225,18 +221,17 @@ Currently, running SQLFluff commands from the terminal isn't supported. Why am I unable to see the Lint or Format button? Make sure you're on a development branch. Formatting or Linting isn't available on "main" or "read-only" branches. - + -
-Why is there inconsistent SQLFluff behavior when running outside the dbt Cloud IDE (such as a GitHub Action)? -— Double-check your SQLFluff version matches the one in dbt Cloud IDE (found in the Code Quality tab after a lint operation).

-— If your lint operation passes despite clear rule violations, confirm you're not linting models with ephemeral models. Linting doesn't support ephemeral models in dbt v1.5 and lower. -
+
+- Double-check that your SQLFluff version matches the one in dbt Cloud IDE (found in the Code Quality tab after a lint operation); a sketch of pinning the version in CI follows this list.

+- If your lint operation passes despite clear rule violations, confirm you're not linting models that use ephemeral models. Linting doesn't support ephemeral models in dbt v1.5 and lower.
+
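+
+For example, if you also lint in CI, pinning the same SQLFluff version the IDE reports helps keep behavior consistent. Here is a hypothetical GitHub Actions sketch (the pinned version and the dialect are placeholders; substitute the version shown in the Code Quality tab and your warehouse's dialect):
+
+```yml
+# .github/workflows/lint.yml (illustrative sketch only)
+name: lint-sql
+on: pull_request
+jobs:
+  sqlfluff:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - uses: actions/setup-python@v5
+        with:
+          python-version: "3.11"
+      # Pin SQLFluff to the version reported by the dbt Cloud IDE
+      - run: pip install "sqlfluff==2.3.5"
+      - run: sqlfluff lint models --dialect snowflake
+```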
-
-What are some other known limitations with dbt Cloud linting? -— Currently, the dbt Cloud IDE can only lint or fix files up to a certain size and complexity. If you attempt to lint or fix files that are too large (which would take more than 60 seconds for the dbt Cloud backend to process), you will see an 'Unable to complete linting this file' error. Please break up your model into smaller models (files) so that they are less complex to lint or fix. Note that linting is less complex than fixing so you may run into a scenario where a file can be linted but not fixed. -
+ +Currently, the dbt Cloud IDE can lint or fix files up to a certain size and complexity. If you attempt to lint or fix files that are too large, taking more than 60 seconds for the dbt Cloud backend to process, you will see an 'Unable to complete linting this file' error. To avoid this, break up your model into smaller models (files) so that they are less complex to lint or fix. Note that linting is simpler than fixing so there may be cases where a file can be linted but not fixed. + + ## Related docs From e1ff8e2a2b0dc46ac1cb9b401e67ec87d4b780a4 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Mon, 8 Jan 2024 10:15:33 +0000 Subject: [PATCH 201/204] Update lint-format.md --- website/docs/docs/cloud/dbt-cloud-ide/lint-format.md | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md b/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md index 4c565596804..98ec266d05f 100644 --- a/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md +++ b/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md @@ -210,18 +210,17 @@ To ensure optimal code quality, consistent code, and styles — it's highly However, you can customize and include an additional child `.sqlfluff` configuration file within specific subfolders of your dbt project.

By nesting a `.sqlfluff` file in a subfolder, SQLFluff will apply the rules defined in that subfolder's configuration file to any files located within it. The rules specified in the parent `.sqlfluff` file will be used for all other files and folders outside of the subfolder. This hierarchical approach allows for tailored linting rules while maintaining consistency throughout your project. Refer to [SQLFluff documentation](https://docs.sqlfluff.com/en/stable/configuration.html#configuration-files) for more info. - + Currently, running SQLFluff commands from the terminal isn't supported. - + -
-Why am I unable to see the Lint or Format button? + Make sure you're on a development branch. Formatting or Linting isn't available on "main" or "read-only" branches. - + - Double-check that your SQLFluff version matches the one in dbt Cloud IDE (found in the Code Quality tab after a lint operation).

From 9704965ae8472f1e3ba1df6c1ab1130d259082cd Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Mon, 8 Jan 2024 10:24:44 +0000 Subject: [PATCH 202/204] Update lint-format.md --- website/docs/docs/cloud/dbt-cloud-ide/lint-format.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md b/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md index 98ec266d05f..28d4f4d6a1a 100644 --- a/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md +++ b/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md @@ -204,7 +204,7 @@ You can use either SQLFluff or sqlfmt depending on your preference and what work
- + To ensure optimal code quality, consistent code, and styles — it's highly recommended you have one main `.sqlfluff` configuration file in the root folder of your project. Having multiple files can result in various different SQL styles in your project.

@@ -217,12 +217,12 @@ However, you can customize and include an additional child `.sqlfluff` configura Currently, running SQLFluff commands from the terminal isn't supported.
- + Make sure you're on a development branch. Formatting or Linting isn't available on "main" or "read-only" branches. - + - Double-check that your SQLFluff version matches the one in dbt Cloud IDE (found in the Code Quality tab after a lint operation).

- If your lint operation passes despite clear rule violations, confirm you're not linting models with ephemeral models. Linting doesn't support ephemeral models in dbt v1.5 and lower.
From fd25f896f5df09059e6065320cb041107a935894 Mon Sep 17 00:00:00 2001 From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com> Date: Mon, 8 Jan 2024 10:25:13 +0000 Subject: [PATCH 203/204] Update website/docs/docs/cloud/dbt-cloud-ide/lint-format.md --- website/docs/docs/cloud/dbt-cloud-ide/lint-format.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md b/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md index 28d4f4d6a1a..0909e0d79ed 100644 --- a/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md +++ b/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md @@ -228,7 +228,9 @@ Make sure you're on a development branch. Formatting or Linting isn't available
-Currently, the dbt Cloud IDE can lint or fix files up to a certain size and complexity. If you attempt to lint or fix files that are too large, taking more than 60 seconds for the dbt Cloud backend to process, you will see an 'Unable to complete linting this file' error. To avoid this, break up your model into smaller models (files) so that they are less complex to lint or fix. Note that linting is simpler than fixing so there may be cases where a file can be linted but not fixed.
+Currently, the dbt Cloud IDE can lint or fix files up to a certain size and complexity. If you attempt to lint or fix files that are too large, taking more than 60 seconds for the dbt Cloud backend to process, you will see an 'Unable to complete linting this file' error.
+
+To avoid this, break up your model into smaller models (files) so that they are less complex to lint or fix. Note that linting is simpler than fixing, so there may be cases where a file can be linted but not fixed.



From c8b3a4f83b84a75f9250fb3129aabd6ab543ea6f Mon Sep 17 00:00:00 2001
From: mirnawong1 <89008547+mirnawong1@users.noreply.github.com>
Date: Mon, 8 Jan 2024 10:41:34 +0000
Subject: [PATCH 204/204] Update lint-format.md

---
 website/docs/docs/cloud/dbt-cloud-ide/lint-format.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md b/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md
index 0909e0d79ed..f6f2265a922 100644
--- a/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md
+++ b/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md
@@ -217,7 +217,7 @@ 
 Currently, running SQLFluff commands from the terminal isn't supported.
- + Make sure you're on a development branch. Formatting or Linting isn't available on "main" or "read-only" branches.