diff --git a/website/docs/docs/build/measures.md b/website/docs/docs/build/measures.md
index e06b5046976..74d37b70e94 100644
--- a/website/docs/docs/build/measures.md
+++ b/website/docs/docs/build/measures.md
@@ -6,19 +6,13 @@ sidebar_label: "Measures"
tags: [Metrics, Semantic Layer]
---
-Measures are aggregations performed on columns in your model. They can be used as final metrics or serve as building blocks for more complex metrics. Measures have several inputs, which are described in the following table along with their field types.
-
-| Parameter | Description | Type |
-| --------- | ----------- | ---- |
-| [`name`](#name) | Provide a name for the measure, which must be unique and can't be repeated across all semantic models in your dbt project. | Required |
-| [`description`](#description) | Describes the calculated measure. | Optional |
-| [`agg`](#aggregation) | dbt supports aggregations such as `sum`, `min`, `max`, and more. Refer to [Aggregation](/docs/build/measures#aggregation) for the full list of supported aggregation types. | Required |
-| [`expr`](#expr) | You can either reference an existing column in the table or use a SQL expression to create or derive a new one. | Optional |
-| [`non_additive_dimension`](#non-additive-dimensions) | Non-additive dimensions can be specified for measures that cannot be aggregated over certain dimensions, such as bank account balances, to avoid producing incorrect results. | Optional |
-| `agg_params` | specific aggregation properties such as a percentile. | Optional |
-| `agg_time_dimension` | The time field. Defaults to the default agg time dimension for the semantic model. | Optional |
-| `label` | How the metric appears in project docs and downstream integrations. | Required |
+Measures are aggregations performed on columns in your model. They can be used as final metrics or serve as building blocks for more complex metrics.
+Measures have several inputs, which are described in the following table along with their field types.
+
+import MeasuresParameters from '/snippets/_sl-measures-parameters.md';
+
+<MeasuresParameters />
## Measure spec
diff --git a/website/docs/docs/build/semantic-models.md b/website/docs/docs/build/semantic-models.md
index 99ccef237f9..09f808d7a17 100644
--- a/website/docs/docs/build/semantic-models.md
+++ b/website/docs/docs/build/semantic-models.md
@@ -40,17 +40,17 @@ The complete spec for semantic models is below:
```yaml
semantic_models:
- - name: the_name_of_the_semantic_model ## Required
- description: same as always ## Optional
- model: ref('some_model') ## Required
- defaults: ## Required
- agg_time_dimension: dimension_name ## Required if the model contains dimensions
- entities: ## Required
- - see more information in entities
- measures: ## Optional
- - see more information in measures section
- dimensions: ## Required
- - see more information in dimensions section
+ - name: the_name_of_the_semantic_model ## Required
+ description: same as always ## Optional
+ model: ref('some_model') ## Required
+ defaults: ## Required
+ agg_time_dimension: dimension_name ## Required if the model contains dimensions
+ entities: ## Required
+ - see more information in entities
+ measures: ## Optional
+ - see more information in measures section
+ dimensions: ## Required
+ - see more information in dimensions section
primary_entity: >-
if the semantic model has no primary entity, then this property is required. #Optional if a primary entity exists, otherwise Required
```
@@ -230,16 +230,14 @@ For semantic models with a measure, you must have a [primary time group](/docs/b
### Measures
-[Measures](/docs/build/measures) are aggregations applied to columns in your data model. They can be used as the foundational building blocks for more complex metrics, or be the final metric itself. Measures have various parameters which are listed in a table along with their descriptions and types.
+[Measures](/docs/build/measures) are aggregations applied to columns in your data model. They can be used as the foundational building blocks for more complex metrics, or be the final metric itself.
+
+Measures have various parameters, which are listed in the following table along with their descriptions and types.
+
+import MeasuresParameters from '/snippets/_sl-measures-parameters.md';
+
+<MeasuresParameters />
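+
+For example, a measure definition using several of these parameters might look like the following sketch (the names `order_total`, `order_amount`, and `ordered_at` are hypothetical):
+
+```yaml
+measures:
+  - name: order_total                 ## Must be unique across semantic models
+    description: The total value of an order.
+    agg: sum                          ## Required aggregation type
+    expr: order_amount                ## Column or SQL expression to aggregate
+    agg_time_dimension: ordered_at    ## Available on dbt v1.6 or higher
+    create_metric: true               ## Available on dbt v1.7 or higher
+```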
-| Parameter | Description | Field type |
-| --- | --- | --- |
-| `name`| Provide a name for the measure, which must be unique and can't be repeated across all semantic models in your dbt project. | Required |
-| `description` | Describes the calculated measure. | Optional |
-| `agg` | dbt supports the following aggregations: `sum`, `max`, `min`, `count_distinct`, and `sum_boolean`. | Required |
-| `expr` | You can either reference an existing column in the table or use a SQL expression to create or derive a new one. | Optional |
-| `non_additive_dimension` | Non-additive dimensions can be specified for measures that cannot be aggregated over certain dimensions, such as bank account balances, to avoid producing incorrect results. | Optional |
-| `create_metric` | You can create a metric directly from a measure with `create_metric: True` and specify its display name with create_metric_display_name. Default is false. | Optional |
import SetUpPages from '/snippets/_metrics-dependencies.md';
diff --git a/website/docs/docs/collaborate/govern/model-versions.md b/website/docs/docs/collaborate/govern/model-versions.md
index 49ed65f9a36..2a79e2f46e7 100644
--- a/website/docs/docs/collaborate/govern/model-versions.md
+++ b/website/docs/docs/collaborate/govern/model-versions.md
@@ -393,6 +393,32 @@ dbt.exceptions.AmbiguousAliasError: Compilation Error
We opted to use `generate_alias_name` for this functionality so that the logic remains accessible to end users, and could be reimplemented with custom logic.
:::
+### Run a model with multiple versions
+
+To run a model with multiple versions, you can use the [`--select` flag](/reference/node-selection/syntax). For example:
+
+- Run all versions of `dim_customers`:
+
+ ```bash
+ dbt run --select dim_customers # Run all versions of the model
+ ```
+- Run only version 2 of `dim_customers`:
+
+ You can use either of the following commands (both achieve the same result):
+
+ ```bash
+ dbt run --select dim_customers.v2 # Run a specific version of the model
+ dbt run --select dim_customers_v2 # Alternative syntax for the specific version
+ ```
+
+- Run the latest version of `dim_customers` using the `--select` flag shorthand:
+
+ ```bash
+ dbt run -s dim_customers,version:latest # Run the latest version of the model
+ ```
+
+These commands provide flexibility in managing and executing different versions of a dbt model.
+
### Optimizing model versions
How you define each model version is completely up to you. While it's easy to start by copy-pasting from one model's SQL definition into another, you should think about _what actually is changing_ from one version to another.
diff --git a/website/docs/docs/core/connect-data-platform/vertica-setup.md b/website/docs/docs/core/connect-data-platform/vertica-setup.md
index fbb8de6b301..9274c22ebbe 100644
--- a/website/docs/docs/core/connect-data-platform/vertica-setup.md
+++ b/website/docs/docs/core/connect-data-platform/vertica-setup.md
@@ -6,9 +6,9 @@ meta:
authors: 'Vertica (Former authors: Matthew Carter, Andy Regan, Andrew Hedengren)'
github_repo: 'vertica/dbt-vertica'
pypi_package: 'dbt-vertica'
- min_core_version: 'v1.4.0 and newer'
+ min_core_version: 'v1.6.0 and newer'
cloud_support: 'Not Supported'
- min_supported_version: 'Vertica 12.0.0'
+ min_supported_version: 'Vertica 23.4.0'
slack_channel_name: 'n/a'
slack_channel_link: 'https://www.getdbt.com/community/'
platform_name: 'Vertica'
diff --git a/website/docs/docs/dbt-cloud-apis/sl-graphql.md b/website/docs/docs/dbt-cloud-apis/sl-graphql.md
index f73007c9a02..b7d13d0d453 100644
--- a/website/docs/docs/dbt-cloud-apis/sl-graphql.md
+++ b/website/docs/docs/dbt-cloud-apis/sl-graphql.md
@@ -48,7 +48,7 @@ Authentication uses a dbt Cloud [service account tokens](/docs/dbt-cloud-apis/se
{"Authorization": "Bearer "}
```
-Each GQL request also requires a dbt Cloud `environmentId`. The API uses both the service token in the header and environmentId for authentication.
+Each GQL request also requires a dbt Cloud `environmentId`. The API uses both the service token in the header and `environmentId` for authentication.
### Metadata calls
@@ -150,6 +150,60 @@ metricsForDimensions(
): [Metric!]!
```
+**Metric Types**
+
+```graphql
+Metric {
+ name: String!
+ description: String
+ type: MetricType!
+ typeParams: MetricTypeParams!
+ filter: WhereFilter
+ dimensions: [Dimension!]!
+ queryableGranularities: [TimeGranularity!]!
+}
+```
+
+```
+MetricType = [SIMPLE, RATIO, CUMULATIVE, DERIVED]
+```
+
+**Metric Type parameters**
+
+```graphql
+MetricTypeParams {
+ measure: MetricInputMeasure
+ inputMeasures: [MetricInputMeasure!]!
+ numerator: MetricInput
+ denominator: MetricInput
+ expr: String
+ window: MetricTimeWindow
+ grainToDate: TimeGranularity
+ metrics: [MetricInput!]
+}
+```
+
+
+**Dimension Types**
+
+```graphql
+Dimension {
+ name: String!
+ description: String
+ type: DimensionType!
+ typeParams: DimensionTypeParams
+ isPartition: Boolean!
+ expr: String
+ queryableGranularities: [TimeGranularity!]!
+}
+```
+
+```
+DimensionType = [CATEGORICAL, TIME]
+```
+
+### Querying
+
**Create Dimension Values query**
```graphql
@@ -205,59 +259,128 @@ query(
): QueryResult!
```
-**Metric Types**
+The GraphQL API uses a polling process for querying because queries can be long-running in some cases. It works by first creating a query with a mutation, `createQuery`, which returns a query ID. This ID is then used to continuously check (poll) for the results and status of your query. The typical flow looks like this:
+1. Kick off a query
```graphql
-Metric {
- name: String!
- description: String
- type: MetricType!
- typeParams: MetricTypeParams!
- filter: WhereFilter
- dimensions: [Dimension!]!
- queryableGranularities: [TimeGranularity!]!
+mutation {
+ createQuery(
+ environmentId: 123456
+ metrics: [{name: "order_total"}]
+ groupBy: [{name: "metric_time"}]
+ ) {
+ queryId # => Returns 'QueryID_12345678'
+ }
}
```
-
-```
-MetricType = [SIMPLE, RATIO, CUMULATIVE, DERIVED]
+2. Poll for results
+```graphql
+{
+ query(environmentId: 123456, queryId: "QueryID_12345678") {
+ sql
+ status
+ error
+ totalPages
+ jsonResult
+ arrowResult
+ }
+}
```
+3. Keep polling step 2 at an appropriate interval until the status is `FAILED` or `SUCCESSFUL`.
+
+### Output format and pagination
+
+**Output format**
+
+By default, the output is in Arrow format. You can switch to JSON format using the following parameter. However, due to performance limitations, we recommend using the JSON format only for testing and validation. The JSON received is a base64-encoded string. To access it, decode it using a base64 decoder. The JSON is created from pandas, which means you can convert it back to a dataframe using `pandas.read_json(json, orient="table")`. Alternatively, you can work with the data directly using `json["data"]` and find the table schema using `json["schema"]["fields"]`. You can also pass `encoded:false` to the `jsonResult` field to get a raw JSON string directly.
-**Metric Type parameters**
```graphql
-MetricTypeParams {
- measure: MetricInputMeasure
- inputMeasures: [MetricInputMeasure!]!
- numerator: MetricInput
- denominator: MetricInput
- expr: String
- window: MetricTimeWindow
- grainToDate: TimeGranularity
- metrics: [MetricInput!]
+{
+ query(environmentId: BigInt!, queryId: Int!, pageNum: Int! = 1) {
+ sql
+ status
+ error
+ totalPages
+ arrowResult
+ jsonResult(orient: PandasJsonOrient! = TABLE, encoded: Boolean! = true)
+ }
}
```
+The results default to the `table` orientation, but you can change it to any orientation [pandas](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_json.html) supports.
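+
+As a minimal sketch (assuming `json_result` holds the base64-encoded string returned in the `jsonResult` field), decoding and loading the result could look like this:
+
+```python
+import base64
+import io
+import json
+
+import pandas as pd
+
+# Decode the base64-encoded `jsonResult` payload (hypothetical variable).
+decoded = base64.b64decode(json_result).decode("utf-8")
+
+# Rebuild a dataframe from the pandas "table" orientation.
+df = pd.read_json(io.StringIO(decoded), orient="table")
+
+# Or work with the raw structures directly.
+payload = json.loads(decoded)
+rows = payload["data"]
+schema_fields = payload["schema"]["fields"]
+```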
-**Dimension Types**
+**Pagination**
-```graphql
-Dimension {
- name: String!
- description: String
- type: DimensionType!
- typeParams: DimensionTypeParams
- isPartition: Boolean!
- expr: String
- queryableGranularities: [TimeGranularity!]!
+By default, we return 1024 rows per page. If your result set exceeds this, request the additional pages by incrementing the `pageNum` option.
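+
+As a sketch, you could page through a large result set like this (assuming `run_gql` is a hypothetical helper that posts a GraphQL document to the API and returns the parsed `query` payload):
+
+```python
+# Hypothetical pagination loop: fetch each page until totalPages is reached.
+page_num = 1
+pages = []
+while True:
+    result = run_gql(f"""
+    {{
+      query(environmentId: 123456, queryId: "QueryID_12345678", pageNum: {page_num}) {{
+        status
+        totalPages
+        jsonResult
+      }}
+    }}
+    """)
+    pages.append(result["jsonResult"])
+    if page_num >= result["totalPages"]:
+        break
+    page_num += 1
+```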
+
+### Run a Python query
+
+The `arrowResult` in the GraphQL query response is a byte dump, which isn't visually useful. You can convert this byte data into an Arrow table using any Arrow-supported language. The following Python example shows how to query and decode the Arrow result:
+
+
+```python
+import base64
+import time
+
+import pyarrow as pa
+import requests
+
+headers = {"Authorization":"Bearer "}
+query_result_request = """
+{
+ query(environmentId: 70, queryId: "12345678") {
+ sql
+ status
+ error
+ arrowResult
+ }
}
-```
+"""
-```
-DimensionType = [CATEGORICAL, TIME]
+while True:
+ gql_response = requests.post(
+ "https://semantic-layer.cloud.getdbt.com/api/graphql",
+ json={"query": query_result_request},
+ headers=headers,
+ )
+ if gql_response.json()["data"]["query"]["status"] in ["FAILED", "SUCCESSFUL"]:
+ break
+ # Set an appropriate interval between polling requests
+ time.sleep(1)
+
+"""
+gql_response.json() =>
+{
+ "data": {
+ "query": {
+ "sql": "SELECT\n ordered_at AS metric_time__day\n , SUM(order_total) AS order_total\nFROM semantic_layer.orders orders_src_1\nGROUP BY\n ordered_at",
+ "status": "SUCCESSFUL",
+ "error": null,
+ "arrowResult": "arrow-byte-data"
+ }
+ }
+}
+"""
+
+def to_arrow_table(byte_string: str) -> pa.Table:
+ """Get a raw base64 string and convert to an Arrow Table."""
+ with pa.ipc.open_stream(base64.b64decode(byte_string)) as reader:
+ return pa.Table.from_batches(reader, reader.schema)
+
+
+arrow_table = to_arrow_table(gql_response.json()["data"]["query"]["arrowResult"])
+
+# Perform whatever functionality is available, like convert to a pandas table.
+print(arrow_table.to_pandas())
+"""
+order_total ordered_at
+ 3 2023-08-07
+ 112 2023-08-08
+ 12 2023-08-09
+ 5123 2023-08-10
+"""
```
-### Create Query examples
+### Additional Create Query examples
The following section provides query examples for the GraphQL API, such as how to query metrics, dimensions, where filters, and more.
@@ -359,7 +482,7 @@ mutation {
}
```
-**Query with Explain**
+**Query that compiles SQL only**
This takes the same inputs as the `createQuery` mutation.
@@ -374,89 +497,3 @@ mutation {
}
}
```
-
-### Output format and pagination
-
-**Output format**
-
-By default, the output is in Arrow format. You can switch to JSON format using the following parameter. However, due to performance limitations, we recommend using the JSON parameter for testing and validation. The JSON received is a base64 encoded string. To access it, you can decode it using a base64 decoder. The JSON is created from pandas, which means you can change it back to a dataframe using `pandas.read_json(json, orient="table")`. Or you can work with the data directly using `json["data"]`, and find the table schema using `json["schema"]["fields"]`. Alternatively, you can pass `encoded:false` to the jsonResult field to get a raw JSON string directly.
-
-
-```graphql
-{
- query(environmentId: BigInt!, queryId: Int!, pageNum: Int! = 1) {
- sql
- status
- error
- totalPages
- arrowResult
- jsonResult(orient: PandasJsonOrient! = TABLE, encoded: Boolean! = true)
- }
-}
-```
-
-The results default to the table but you can change it to any [pandas](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_json.html) supported value.
-
-**Pagination**
-
-By default, we return 1024 rows per page. If your result set exceeds this, you need to increase the page number using the `pageNum` option.
-
-### Run a Python query
-
-The `arrowResult` in the GraphQL query response is a byte dump, which isn't visually useful. You can convert this byte data into an Arrow table using any Arrow-supported language. Refer to the following Python example explaining how to query and decode the arrow result:
-
-
-```python
-import base64
-import pyarrow as pa
-
-headers = {"Authorization":"Bearer "}
-query_result_request = """
-{
- query(environmentId: 70, queryId: "12345678") {
- sql
- status
- error
- arrowResult
- }
-}
-"""
-
-gql_response = requests.post(
- "https://semantic-layer.cloud.getdbt.com/api/graphql",
- json={"query": query_result_request},
- headers=headers,
-)
-
-"""
-gql_response.json() =>
-{
- "data": {
- "query": {
- "sql": "SELECT\n ordered_at AS metric_time__day\n , SUM(order_total) AS order_total\nFROM semantic_layer.orders orders_src_1\nGROUP BY\n ordered_at",
- "status": "SUCCESSFUL",
- "error": null,
- "arrowResult": "arrow-byte-data"
- }
- }
-}
-"""
-
-def to_arrow_table(byte_string: str) -> pa.Table:
- """Get a raw base64 string and convert to an Arrow Table."""
- with pa.ipc.open_stream(base64.b64decode(res)) as reader:
- return pa.Table.from_batches(reader, reader.schema)
-
-
-arrow_table = to_arrow_table(gql_response.json()["data"]["query"]["arrowResult"])
-
-# Perform whatever functionality is available, like convert to a pandas table.
-print(arrow_table.to_pandas())
-"""
-order_total ordered_at
- 3 2023-08-07
- 112 2023-08-08
- 12 2023-08-09
- 5123 2023-08-10
-"""
-```
diff --git a/website/docs/docs/supported-data-platforms.md b/website/docs/docs/supported-data-platforms.md
index c0c9a30db36..079e2018982 100644
--- a/website/docs/docs/supported-data-platforms.md
+++ b/website/docs/docs/supported-data-platforms.md
@@ -41,6 +41,3 @@ The following are **Trusted adapters** ✓ you can connect to in dbt Core:
import AdaptersTrusted from '/snippets/_adapters-trusted.md';
-
- * Install these adapters using dbt Core as they're not currently supported in dbt Cloud.
-
diff --git a/website/docs/docs/trusted-adapters.md b/website/docs/docs/trusted-adapters.md
index 20d61f69575..7b7af7d0790 100644
--- a/website/docs/docs/trusted-adapters.md
+++ b/website/docs/docs/trusted-adapters.md
@@ -25,12 +25,12 @@ Refer to the [Build, test, document, and promote adapters](/guides/adapter-creat
### Trusted vs Verified
-The Verification program exists to highlight adapters that meets both of the following criteria:
+The Verification program exists to highlight adapters that meet both of the following criteria:
- the guidelines given in the Trusted program,
- formal agreements required for integration with dbt Cloud
-For more information on the Verified Adapter program, reach out the [dbt Labs partnerships team](mailto:partnerships@dbtlabs.com)
+For more information on the Verified Adapter program, reach out to the [dbt Labs partnerships team](mailto:partnerships@dbtlabs.com).
### Trusted adapters
diff --git a/website/docs/reference/resource-properties/latest_version.md b/website/docs/reference/resource-properties/latest_version.md
index 4c531879598..567ea5e7e1f 100644
--- a/website/docs/reference/resource-properties/latest_version.md
+++ b/website/docs/reference/resource-properties/latest_version.md
@@ -25,6 +25,8 @@ The latest version of this model. The "latest" version is relevant for:
This value can be a string or a numeric (integer or float) value. It must be one of the [version identifiers](/reference/resource-properties/versions#v) specified in this model's list of `versions`.
+To run the latest version of a model, you can use the [`--select` flag](/reference/node-selection/syntax). Refer to [Model versions](/docs/collaborate/govern/model-versions#run-a-model-with-multiple-versions) for more information and syntax.
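+
+For example, a quick sketch using a hypothetical `dim_customers` model:
+
+```bash
+dbt run --select dim_customers,version:latest
+```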
+
## Default
If not specified for a versioned model, `latest_version` defaults to the largest [version identifier](/reference/resource-properties/versions#v): numerically greatest (if all version identifiers are numeric), otherwise the alphabetically last (if they are strings).
diff --git a/website/docs/reference/resource-properties/versions.md b/website/docs/reference/resource-properties/versions.md
index 86e9abf34a8..5dba70c6e6e 100644
--- a/website/docs/reference/resource-properties/versions.md
+++ b/website/docs/reference/resource-properties/versions.md
@@ -43,6 +43,9 @@ The value of the version identifier is used to order versions of a model relativ
In general, we recommend that you use a simple "major versioning" scheme for your models: `1`, `2`, `3`, and so on, where each version reflects a breaking change from previous versions. You are able to use other versioning schemes. dbt will sort your version identifiers alphabetically if the values are not all numeric. You should **not** include the letter `v` in the version identifier, as dbt will do that for you.
+To run a model with multiple versions, you can use the [`--select` flag](/reference/node-selection/syntax). Refer to [Model versions](/docs/collaborate/govern/model-versions#run-a-model-with-multiple-versions) for more information and syntax.
+
+
### `defined_in`
The name of the model file (excluding the file extension, e.g. `.sql` or `.py`) where the model version is defined.
diff --git a/website/snippets/_adapters-trusted.md b/website/snippets/_adapters-trusted.md
index 7747ce16dec..20984253c32 100644
--- a/website/snippets/_adapters-trusted.md
+++ b/website/snippets/_adapters-trusted.md
@@ -1,18 +1,23 @@
+
+
diff --git a/website/snippets/_adapters-verified.md b/website/snippets/_adapters-verified.md
index ebb91cb4544..b9a71c67c36 100644
--- a/website/snippets/_adapters-verified.md
+++ b/website/snippets/_adapters-verified.md
@@ -15,7 +15,7 @@
icon="databricks"/>
@@ -49,11 +49,11 @@
body="Set up in dbt Cloud Install with dbt Core
"
icon="fabric"/>
diff --git a/website/snippets/_sl-measures-parameters.md b/website/snippets/_sl-measures-parameters.md
new file mode 100644
index 00000000000..4bd32311fda
--- /dev/null
+++ b/website/snippets/_sl-measures-parameters.md
@@ -0,0 +1,12 @@
+| Parameter | Description | Field type |
+| --- | --- | --- |
+| [`name`](/docs/build/measures#name) | Provide a name for the measure, which must be unique and can't be repeated across all semantic models in your dbt project. | Required |
+| [`description`](/docs/build/measures#description) | Describes the calculated measure. | Optional |
+| [`agg`](/docs/build/measures#aggregation) | dbt supports aggregations such as `sum`, `max`, `min`, `count_distinct`, and `sum_boolean`. Refer to [Aggregation](/docs/build/measures#aggregation) for the full list of supported types. | Required |
+| [`expr`](/docs/build/measures#expr) | Either reference an existing column in the table or use a SQL expression to create or derive a new one. | Optional |
+| [`non_additive_dimension`](/docs/build/measures#non-additive-dimensions) | Non-additive dimensions can be specified for measures that cannot be aggregated over certain dimensions, such as bank account balances, to avoid producing incorrect results. | Optional |
+| `agg_params` | Specific aggregation properties such as a percentile. | Optional |
+| `agg_time_dimension` | The time field. Defaults to the default agg time dimension for the semantic model. Available on dbt version 1.6 and higher. | Optional |
+| `label`* | How the metric appears in project docs and downstream integrations. | Required |
+| `create_metric`* | You can create a metric directly from a measure with `create_metric: True` and specify its display name with `create_metric_display_name`. Defaults to `false`. | Optional |
+*Available on dbt version 1.7 or higher.
diff --git a/website/static/img/icons/glue.svg b/website/static/img/icons/glue.svg
new file mode 100644
index 00000000000..a120fc03b3b
--- /dev/null
+++ b/website/static/img/icons/glue.svg
@@ -0,0 +1,26 @@
+
\ No newline at end of file
diff --git a/website/static/img/icons/white/glue.svg b/website/static/img/icons/white/glue.svg
new file mode 100644
index 00000000000..a120fc03b3b
--- /dev/null
+++ b/website/static/img/icons/white/glue.svg
@@ -0,0 +1,26 @@
+
\ No newline at end of file