
Commit

github-actions[bot] authored Oct 13, 2023
2 parents 865814e + 6cccc16 commit 4fa050a
Showing 3 changed files with 24 additions and 17 deletions.
33 changes: 20 additions & 13 deletions website/docs/docs/dbt-cloud-apis/sl-jdbc.md
@@ -5,7 +5,6 @@ description: "Integrate and use the JDBC API to query your metrics."
tags: [Semantic Layer, API]
---


<VersionBlock lastVersion="1.5">

import LegacyInfo from '/snippets/_legacy-sl-callout.md';
@@ -59,11 +58,13 @@ jdbc:arrow-flight-sql://semantic-layer.cloud.getdbt.com:443?&environmentId=20233

## Querying the API for metric metadata

The Semantic Layer JDBC API has built-in metadata calls which can provide a user with information about their metrics and dimensions. Here are some metadata commands and examples:
The Semantic Layer JDBC API has built-in metadata calls which can provide a user with information about their metrics and dimensions.

Refer to the following tabs for metadata commands and examples:

<Tabs>

<TabItem value="allmetrics" label="Fetch all defined metrics">
<TabItem value="allmetrics" label="Fetch defined metrics">

Use this query to fetch all defined metrics in your dbt project:

@@ -74,7 +75,7 @@ select * from {{
```
</TabItem>
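For reference, a minimal sketch of the complete call, assuming the metadata function is `semantic_layer.metrics()` (the `metrics()` call referenced later on this page):

```bash
select * from {{
semantic_layer.metrics()
}}
```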
<TabItem value="alldimensions" label="Fetch all dimensions for a metric">
<TabItem value="alldimensions" label="Fetch dimensions for a metric">
Use this query to fetch all dimensions for a metric.
@@ -87,7 +88,7 @@ select * from {{
</TabItem>
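A minimal sketch of the full query, assuming a `semantic_layer.dimensions()` metadata call that accepts a `metrics` list (the function name follows the `dimensions()` call mentioned in the granularities tab; the metric name reuses an example from this page):

```bash
select * from {{
semantic_layer.dimensions(metrics=['food_order_amount'])
}}
```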
<TabItem value="dimensionvalueformetrics" label="Fetch dimension values metrics">
<TabItem value="dimensionvalueformetrics" label="Fetch dimension values">
Use this query to fetch dimension values for one or multiple metrics and a single dimension.
@@ -100,7 +101,7 @@ semantic_layer.dimension_values(metrics=['food_order_amount'], group_by=['custom
</TabItem>
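Put together, the full query would look roughly like this sketch; the `group_by` dimension name is an assumption reused from the `where` filter examples later on this page:

```bash
select * from {{
semantic_layer.dimension_values(metrics=['food_order_amount'], group_by=['customer__customer_type'])
}}
```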
<TabItem value="queryablegranularitiesformetrics" label="Fetch queryable primary time granularities for metrics">
<TabItem value="queryablegranularitiesformetrics" label="Fetch queryable granularities for metrics">
Use this query to fetch queryable granularities for a list of metrics. This API request returns only the time granularities that make sense for the primary time dimension of the metrics (such as `metric_time`). If you want queryable granularities for other time dimensions, use the `dimensions()` call and find the `queryable_granularities` column.
@@ -113,6 +114,9 @@ select * from {{
</TabItem>
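A minimal sketch, assuming the metadata call is named `semantic_layer.queryable_granularities()` and accepts a `metrics` list (the function name is an assumption; only the tab label is visible here):

```bash
select * from {{
semantic_layer.queryable_granularities(metrics=['food_order_amount', 'order_gross_profit'])
}}
```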
</Tabs>
<Tabs>
<TabItem value="metricsfordimensions" label="Fetch available metrics given dimensions">
@@ -144,9 +148,10 @@ select NAME, QUERYABLE_GRANULARITIES from {{
</TabItem>
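A minimal sketch, assuming the call is named `semantic_layer.metrics_for_dimensions()` and takes a `group_by` list of dimension names (both the function name and its argument are assumptions):

```bash
select * from {{
semantic_layer.metrics_for_dimensions(group_by=['customer__customer_type'])
}}
```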
<TabItem value="fetchprimarytimedimensionnames" label="Determine what time dimension(s) make up metric_time for your metric(s)">
<TabItem value="fetchprimarytimedimensionnames" label="Fetch primary time dimension names">
It may be useful in your application to expose the names of the time dimensions that represent `metric_time`, the common thread across all metrics.
You can first query the `metrics()` call to fetch a list of measures, then use the `measures()` call, which returns the name(s) of the time dimensions that make up metric time.
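A minimal sketch of the second step, assuming `semantic_layer.measures()` accepts a `metrics` list; the metric name is illustrative:

```bash
select * from {{
semantic_layer.measures(metrics=['food_order_amount'])
}}
```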
```bash
@@ -167,12 +172,13 @@ To query metric values, the following parameters are available:
| `metrics` | The metric name as defined in your dbt metric configuration | `metrics=['revenue']` | Required |
| `group_by` | Dimension names or entities to group by. We require a reference to the entity of the dimension (other than for the primary time dimension), which is prepended to the dimension name with a double underscore. | `group_by=['user__country', 'metric_time']` | Optional |
| `grain` | A parameter specific to time dimensions that changes the grain of the data from the default for the metric. | `group_by=[Dimension('metric_time')` <br/> `grain('week\|day\|month\|quarter\|year')]` | Optional |
| `where` | A where clause that allows you to filter on dimensions and entities using parameters - comes with `TimeDimension`, `Dimension`, and `Entity` objects. Granularity is required with `TimeDimension` | `"{{ where=Dimension('customer__country') }} = 'US')"` | Optional |
| `where` | A where clause that allows you to filter on dimensions and entities using parameters. This takes a filter list or string. Inputs use `Dimension` and `Entity` objects. Granularity is required if the `Dimension` is a time dimension. | `where="{{ Dimension('customer__country') }} = 'US'"` | Optional |
| `limit` | Limit the data returned | `limit=10` | Optional |
| `order_by` | Order the data returned | `order_by=['-order_gross_profit']` (remove `-` for ascending order) | Optional |
| `compile` | If true, returns generated SQL for the data platform but does not execute | `compile=True` | Optional |
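Putting several of these parameters together, a query might look like the following sketch; the metric and dimension names reuse examples from this page:

```bash
select * from {{
semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
group_by=[Dimension('metric_time').grain('month'), 'customer__customer_type'],
limit=10,
order_by=['-order_gross_profit'])
}}
```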
## Note on time dimensions and `metric_time`
You will notice that in the list of dimensions for all metrics, there is a dimension called `metric_time`. `metric_time` is a reserved keyword for the measure-specific aggregation time dimensions. For any time-series metric, the `metric_time` keyword should always be available for use in queries. This is a common dimension across *all* metrics in a semantic graph.
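For example, any time-series metric can be grouped by `metric_time` directly; a minimal sketch reusing a metric from this page:

```bash
select * from {{
semantic_layer.query(metrics=['food_order_amount'],
group_by=['metric_time'])
}}
```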
@@ -246,13 +252,13 @@ select * from {{
Where filters in the API allow for a filter list or string. We recommend using the filter list for production applications, as this format realizes all of the benefits from <Term id="predicate-pushdown" /> where possible.
Where filters have the following components that you can use:
Where filters have a few objects that you can use:
- `Dimension()` - This is used for any categorical or time dimensions. If used for a time dimension, granularity is required - `Dimension('metric_time').grain('week')` or `Dimension('customer__country')`
- `TimeDimension()` - This is used for all time dimensions and requires a granularity argument - `TimeDimension('metric_time', 'MONTH')`
- `Entity()` - Used for entities like primary and foreign keys - `Entity('order_id')`
- `Entity()` - This is used for entities like primary and foreign keys - `Entity('order_id')`
Note: If you prefer a more explicit path to create the `where` clause, you can optionally use the `TimeDimension` feature. This helps separate out categorical dimensions from time-related ones. The `TimeDimension` input takes the time dimension name and also requires granularity, like this: `TimeDimension('metric_time', 'MONTH')`.
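A minimal sketch of that more explicit form, reusing the metric and date from the examples that follow:

```bash
select * from {{
semantic_layer.query(metrics=['food_order_amount'],
group_by=[Dimension('metric_time').grain('month')],
where="{{ TimeDimension('metric_time', 'MONTH') }} >= '2017-03-09'")
}}
```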
Use the following example to query using a `where` filter with the string format:
Expand All @@ -261,7 +267,7 @@ Use the following example to query using a `where` filter with the string format
select * from {{
semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
group_by=[Dimension('metric_time').grain('month'),'customer__customer_type'],
where="{{ TimeDimension('metric_time', 'MONTH') }} >= '2017-03-09' AND {{ Dimension('customer__customer_type' }} in ('new') AND {{ Entity('order_id') }} = 10")
where="{{ Dimension('metric_time').grain('month') }} >= '2017-03-09' AND {{ Dimension('customer__customer_type' }} in ('new') AND {{ Entity('order_id') }} = 10")
}}
```
@@ -271,7 +277,7 @@ Use the following example to query using a `where` filter with a filter list for
select * from {{
semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
group_by=[Dimension('metric_time').grain('month'),'customer__customer_type'],
where=[{{ TimeDimension('metric_time', 'MONTH')}} >= '2017-03-09', {{ Dimension('customer__customer_type' }} in ('new'), {{ Entity('order_id') }} = 10])
where=[{{ Dimension('metric_time').grain('month') }} >= '2017-03-09', {{ Dimension('customer__customer_type') }} in ('new'), {{ Entity('order_id') }} = 10])
}}
```
@@ -287,6 +293,7 @@ semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
order_by=['order_gross_profit'])
}}
```
### Query with compile keyword
Use the following example to query using a `compile` keyword:
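A minimal sketch, assuming `compile=True` is passed to `semantic_layer.query()` as described in the parameter table above:

```bash
select * from {{
semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'],
group_by=['metric_time'],
compile=True)
}}
```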
@@ -26,11 +26,11 @@ import AvailIntegrations from '/snippets/_sl-partner-links.md';

## Custom integration

- You can create custom integrations using different languages and tools. We support connecting with JDBC, ADBC, and a GraphQL APIs. For more info, check out [our examples on GitHub](https://github.com/dbt-labs/example-semantic-layer-clients/).
- You can create custom integrations using different languages and tools. We support connecting with JDBC, ADBC, and GraphQL APIs. For more info, check out [our examples on GitHub](https://github.com/dbt-labs/example-semantic-layer-clients/).
- You can also connect to tools that allow you to write SQL. These tools must meet one of the two criteria:

- Supports a generic JDBC driver option (such as DataGrip) or
- Supports Dremio and uses ArrowFlightSQL driver version 12.0.0 or higher.
- Uses Arrow Flight SQL JDBC driver version 12.0.0 or higher.

## Related docs

4 changes: 2 additions & 2 deletions website/docs/guides/migration/sl-migration.md
@@ -65,8 +65,8 @@ This step is only relevant to users who want the legacy and new semantic layer t
1. Create a new deployment environment in dbt Cloud and set the dbt version to 1.6 or higher.
2. Choose `Only run on a custom branch` and point to the branch that has the updated metric definitions.
3. Set the deployment schema to a temporary migration schema, such as `tmp_sl_migration`. Optionally, you can create a new database for the migration.
4. Create a job to parse your project, such as `dbt parse`, and run it. Make sure this job succeeds, There needs to be a successful job in your environment in order to set up the semantic layer
5. In Account Settings > Projects > Project details click `Configure the Semantic Layer`. Under **Environment**select the deployment environment you created in the previous step. Save your configuration.
4. Create a job to parse your project, such as `dbt parse`, and run it. Make sure this job succeeds; there needs to be a successful job in your environment in order to set up the semantic layer.
5. In Account Settings > Projects > Project details, click `Configure the Semantic Layer`. Under **Environment**, select the deployment environment you created in the previous step. Save your configuration.
6. In the Project details page, click `Generate service token` and grant it `Semantic Layer Only` and `Metadata Only` permissions. Save this token securely - you will need it to connect to the semantic layer.
At this point, both the new semantic layer and the old semantic layer will be running. The new semantic layer will be pointing at your migration branch with the updated metrics definitions.
