Merge pull request #82 from fivetran/releases/v0.12.0
Release v0.12.0
fivetran-avinash authored Feb 8, 2023
2 parents 8c4a41d + c9c632a commit 1053006
Showing 15 changed files with 105 additions and 36 deletions.
14 changes: 14 additions & 0 deletions .buildkite/pipeline.yml
@@ -57,3 +57,17 @@ steps:
- "CI_REDSHIFT_DBT_USER"
commands: |
bash .buildkite/scripts/run_models.sh redshift
- label: ":databricks: Run Tests - Databricks"
key: "run_dbt_databricks"
plugins:
- docker#v3.13.0:
image: "python:3.8"
shell: [ "/bin/bash", "-e", "-c" ]
environment:
- "BASH_ENV=/tmp/.bashrc"
- "CI_DATABRICKS_DBT_HOST"
- "CI_DATABRICKS_DBT_HTTP_PATH"
- "CI_DATABRICKS_DBT_TOKEN"
commands: |
bash .buildkite/scripts/run_models.sh databricks
7 changes: 7 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,10 @@
# dbt_jira v0.12.0
## 🚨 Breaking Changes 🚨:
- Fixed the `jira__daily_issue_field_history` model to ensure component values are correctly joined into our issue models ([#81](https://github.com/fivetran/dbt_jira/pull/81)).
  - Please note, a `dbt run --full-refresh` will be required after upgrading to this version in order to capture the updates.
## 🎉 Feature Updates 🎉
- Databricks compatibility 🧱 ([#80](https://github.com/fivetran/dbt_jira/pull/80)).
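
For reference, the required rebuild is the standard `--full-refresh` flag on `dbt run`; a minimal invocation after upgrading (scoping flags such as `--select` are optional and project-specific) would be:

```bash
# Re-run the package's incremental models from scratch so the
# corrected component joins are captured in historical rows.
dbt run --full-refresh
```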

# dbt_jira v0.11.0
## 🚨 Breaking Changes 🚨:
[PR #74](https://github.com/fivetran/dbt_jira/pull/74) includes the following breaking changes:
16 changes: 14 additions & 2 deletions README.md
@@ -35,15 +35,23 @@ The following table provides a detailed list of all models materialized within this package
To use this dbt package, you must have the following:

- At least one Fivetran Jira connector syncing data into your destination.
- A **BigQuery**, **Snowflake**, **Redshift**, or **PostgreSQL** destination.
- A **BigQuery**, **Snowflake**, **Redshift**, **Databricks**, or **PostgreSQL** destination.

### Databricks Dispatch Configuration
If you are using a Databricks destination with this package, you will need to add the following (or a variation of it) dispatch configuration to your `dbt_project.yml`. This is required so that the package searches for macros in the `dbt-labs/spark_utils` package before falling back to `dbt-labs/dbt_utils`.
```yml
dispatch:
  - macro_namespace: dbt_utils
    search_order: ['spark_utils', 'dbt_utils']
```
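
For context, the `dispatch` block sits at the top level of `dbt_project.yml`, alongside keys like `name` and `config-version` — an illustrative skeleton (hypothetical project name):

```yml
name: 'my_project'  # hypothetical project name
config-version: 2

dispatch:
  - macro_namespace: dbt_utils
    search_order: ['spark_utils', 'dbt_utils']
```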
## Step 2: Install the package
Include the following jira package version in your `packages.yml` file:
> TIP: Check [dbt Hub](https://hub.getdbt.com/) for the latest installation instructions or [read the dbt docs](https://docs.getdbt.com/docs/package-management) for more information on installing packages.
```yaml
packages:
  - package: fivetran/jira
    version: [">=0.11.0", "<0.12.0"]
    version: [">=0.12.0", "<0.13.0"]
```
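
After adding the entry, `dbt deps` installs the pinned version:

```bash
# Resolve and download the packages listed in packages.yml.
dbt deps
```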
## Step 3: Define database and schema variables
@@ -121,7 +129,11 @@ packages:
  - package: dbt-labs/dbt_utils
    version: [">=1.0.0", "<2.0.0"]
  - package: dbt-labs/spark_utils
    version: [">=0.3.0", "<0.4.0"]
```

# 🙌 How is this package maintained and can I contribute?
## Package Maintenance
The Fivetran team maintaining this package _only_ maintains the latest version of the package. We highly recommend you stay consistent with the [latest version](https://hub.getdbt.com/fivetran/jira/latest/) of the package and refer to the [CHANGELOG](https://github.com/fivetran/dbt_jira/blob/main/CHANGELOG.md) and release notes for more information on changes across versions.
2 changes: 1 addition & 1 deletion dbt_project.yml
@@ -1,5 +1,5 @@
name: 'jira'
version: '0.11.0'
version: '0.12.0'
config-version: 2
require-dbt-version: [">=1.3.0", "<2.0.0"]
vars:
2 changes: 1 addition & 1 deletion docs/catalog.json

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/manifest.json

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/run_results.json

Large diffs are not rendered by default.

10 changes: 7 additions & 3 deletions integration_tests/dbt_project.yml
@@ -1,5 +1,5 @@
name: 'jira_integration_tests'
version: '0.11.0'
version: '0.12.0'
config-version: 2
profile: 'integration_tests'

@@ -29,7 +29,7 @@ vars:
    jira_user_identifier: "user"
    jira_version_identifier: "version"

    issue_field_history_columns: ['summary', 'components', 'story points']
    issue_field_history_columns: ['summary', 'components', 'story points']

seeds:
    jira_integration_tests:
@@ -77,4 +77,8 @@ seeds:
                start_date: timestamp
        field:
            +column_types:
                id: "{{ 'string' if target.name in ('bigquery', 'spark', 'databricks') else 'varchar' }}"
                id: "{{ 'string' if target.name in ('bigquery', 'spark', 'databricks') else 'varchar' }}"

dispatch:
  - macro_namespace: dbt_utils
    search_order: ['spark_utils', 'dbt_utils']
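
The Jinja in the seed config above compiles per target: it renders `string` when the target is named `bigquery`, `spark`, or `databricks`, and `varchar` otherwise. For a target named `snowflake`, for example, the rendered config (parent keys omitted) would be:

```yml
field:
  +column_types:
    id: "varchar"
```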
1 change: 1 addition & 0 deletions integration_tests/seeds/component.csv
@@ -2,3 +2,4 @@ id,_fivetran_synced,description,name,project_id
10004,2020-11-23 12:21:02.159,,Component 5,10001
10003,2020-11-23 12:21:02.159,,Component 4,10001
10002,2020-11-23 12:21:02.158,,Component 3,10001
10019,2020-11-23 12:21:02.157,,PI Portal (B2B),10001
1 change: 1 addition & 0 deletions integration_tests/seeds/field.csv
@@ -2,3 +2,4 @@ id,_fivetran_synced,is_array,is_custom,name
timeoriginalestimate,2020-11-23 22:20:39.685,false,false,Original Estimate
creator,2020-11-23 22:20:39.711,false,false,Creator
issuerestriction,2020-11-23 22:20:39.643,false,false,Restrict to
components,2020-11-23 22:20:40.643,false,false,Components
1 change: 1 addition & 0 deletions integration_tests/seeds/field_option.csv
@@ -2,3 +2,4 @@ id,_fivetran_synced,name
10104,2020-11-23 12:21:00.979,opt 2
10016,2020-11-17 12:20:52.552,To Do
10106,2020-11-17 12:20:52.227,opt 4
10019,2020-11-19 12:20:53.110,Impediment
1 change: 1 addition & 0 deletions integration_tests/seeds/issue_field_history.csv
@@ -2,3 +2,4 @@ field_id,issue_id,time,_fivetran_synced,value
created,10027,2020-11-10 19:19:41.472,2020-11-12 12:20:53.478,2020-11-10T19:19:41.472Z
customfield_10104,10027,2020-11-10 19:19:41.472,2020-11-12 12:20:53.472,3.0
summary,10027,2020-11-10 19:19:41.472,2020-11-12 12:20:53.500,"As a developer, I'd like to update story status during the sprint >> Click the Active sprints link at the top right of the screen to go to the Active sprints where the current Sprint's items can be updated"
components,10018,2020-11-10 19:19:41.472,2020-11-12 12:20:53.500,10019
1 change: 1 addition & 0 deletions integration_tests/seeds/issue_multiselect_history.csv
@@ -2,3 +2,4 @@ _fivetran_id,time,_fivetran_synced,field_id,issue_id,value
w4UiT+hPMxJp3RhL/YFJm3uWL5U=,2020-11-10 19:19:41.472,2020-11-12 12:20:53.506,subtasks,10027,0
4pVgGn0qSqR2hCmMdo4wWHXmgew=,2020-11-10 19:19:41.472,2020-11-12 12:20:53.479,customfield_10021,10027,0
/zrY8m6q0VMW6ia1jGIerXqLIgQ=,2020-11-10 19:19:41.472,2020-11-12 12:20:53.479,customfield_10020,10027,0
3p3gGn0qSqR2hCmMdo4wWHXa32m=,2020-11-10 19:19:41.472,2020-11-12 12:20:53.479,components,10027,0
@@ -19,7 +19,7 @@ joined as (
fields.field_name

from issue_multiselect_history
join fields using (field_id)
join fields on issue_multiselect_history.field_id = fields.field_id

)

79 changes: 53 additions & 26 deletions models/jira__daily_issue_field_history.sql
@@ -47,6 +47,18 @@ statuses as (
from {{ var('status') }}
),


{% if var('jira_using_components', True) %}

component_data as (

select *
from {{ var('component') }}
),

{% endif %}


-- in intermediate/field_history/
calendar as (

@@ -63,17 +75,22 @@ joined as (
select
calendar.date_day,
calendar.issue_id

{% if is_incremental() %}
{% for col in pivot_data_columns if col.name|lower not in ['issue_day_id','issue_id','valid_starting_on'] %}
, coalesce(pivoted_daily_history.{{ col.name }}, most_recent_data.{{ col.name }}) as {{ col.name }}
{% endfor %}

{% else %}
{% for col in pivot_data_columns if col.name|lower not in ['issue_day_id','issue_id','valid_starting_on'] %}
, {{ col.name }}
{% endfor %}
{% endif %}

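-- incremental runs prefer the day's newly pivoted value and fall back to the most recent stored value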
{% if is_incremental() %}
{% for col in pivot_data_columns if col.name|lower == 'components' and var('jira_using_components', True) %}
, coalesce(pivoted_daily_history.components, most_recent_data.component) as components
{% endfor %}
{% for col in pivot_data_columns if col.name|lower not in ['issue_day_id', 'issue_id', 'valid_starting_on', 'components'] %}
, coalesce(pivoted_daily_history.{{ col.name }}, most_recent_data.{{ col.name }}) as {{ col.name }}
{% endfor %}
{% else %}
{% for col in pivot_data_columns if col.name|lower == 'components' and var('jira_using_components', True) %}
, pivoted_daily_history.components
{% endfor %}
{% for col in pivot_data_columns if col.name|lower not in ['issue_day_id','issue_id','valid_starting_on','components'] %}
, {{ col.name }}
{% endfor %}
{% endif %}

from calendar
left join pivoted_daily_history
@@ -93,23 +110,31 @@ set_values as (
date_day,
issue_id,
statuses.status_name as status,
sum( case when statuses.status_name is null then 0 else 1 end) over ( partition by issue_id
order by date_day rows unbounded preceding) as status_field_partition
sum(case when statuses.status_name is null then 0 else 1 end) over ( partition by issue_id order by date_day rows unbounded preceding) as status_field_partition

{% for col in pivot_data_columns if col.name|lower == 'components' and var('jira_using_components', True) %}
, component_data.component_name as component
, sum(case when component_data.component_name is null then 0 else 1 end) over (partition by issue_id order by date_day rows unbounded preceding) as component_field_partition
{% endfor %}

{% for col in pivot_data_columns if col.name|lower not in ['issue_id','issue_day_id','valid_starting_on','status'] %}
{% for col in pivot_data_columns if col.name|lower not in ['issue_id', 'issue_day_id', 'valid_starting_on', 'status', 'components'] %}
, coalesce(field_option_{{ col.name }}.field_option_name, {{ col.name }}) as {{ col.name }}
-- create a batch/partition once a new value is provided
, sum( case when {{ col.name }} is null then 0 else 1 end) over ( partition by issue_id
order by date_day rows unbounded preceding) as {{ col.name }}_field_partition

order by date_day rows unbounded preceding) as {{ col.name }}_field_partition
{% endfor %}

from joined

left join statuses
on cast(statuses.status_id as {{ dbt.type_string() }}) = joined.status

{% if var('jira_using_components', True) %}
left join component_data
on cast(component_data.component_id as {{ dbt.type_string() }}) = joined.components
{% endif %}

{% for col in pivot_data_columns if col.name|lower not in ['issue_id','issue_day_id','valid_starting_on','status'] %}
{% for col in pivot_data_columns if col.name|lower not in ['issue_id', 'issue_day_id', 'valid_starting_on', 'status', 'components'] %}
left join field_option as field_option_{{ col.name }}
on cast(field_option_{{ col.name }}.field_id as {{ dbt.type_string() }}) = {{ col.name }}
{% endfor %}
@@ -120,15 +145,15 @@ fill_values as (
select
date_day,
issue_id,
first_value( status ) over (
partition by issue_id, status_field_partition
order by date_day asc rows between unbounded preceding and current row) as status
first_value(status) over (partition by issue_id, status_field_partition order by date_day asc rows between unbounded preceding and current row) as status

{% for col in pivot_data_columns if col.name|lower not in ['issue_id','issue_day_id','valid_starting_on','status'] %}
{% if var('jira_using_components', True) %}
, first_value(component) over (partition by issue_id, component_field_partition order by date_day asc rows between unbounded preceding and current row) as component
{% endif %}

{% for col in pivot_data_columns if col.name|lower not in ['issue_id', 'issue_day_id', 'valid_starting_on', 'status', 'components'] %}
-- grab the value that started this batch/partition
, first_value( {{ col.name }} ) over (
partition by issue_id, {{ col.name }}_field_partition
order by date_day asc rows between unbounded preceding and current row) as {{ col.name }}
, first_value( {{ col.name }} ) over ( partition by issue_id, {{ col.name }}_field_partition order by date_day asc rows between unbounded preceding and current row) as {{ col.name }}
{% endfor %}

from set_values
@@ -140,8 +165,11 @@ fix_null_values as (
date_day,
issue_id,
case when status = 'is_null' then null else status end as status
{% for col in pivot_data_columns if col.name|lower not in ['issue_id','issue_day_id','valid_starting_on','status'] %}

{% if var('jira_using_components', True) %}
, case when component = 'is_null' then null else component end as component
{% endif %}
{% for col in pivot_data_columns if col.name|lower not in ['issue_id','issue_day_id','valid_starting_on', 'status', 'components'] %}
-- we de-nulled the true null values earlier in order to differentiate them from nulls that just needed to be backfilled
, case when {{ col.name }} = 'is_null' then null else {{ col.name }} end as {{ col.name }}
{% endfor %}
@@ -155,7 +183,6 @@ surrogate_key as (
select
*,
{{ dbt_utils.generate_surrogate_key(['date_day','issue_id']) }} as issue_day_id

from fix_null_values
)
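
Taken together, `set_values` and `fill_values` implement the classic gaps-and-islands fill-down: a running `sum` over a null indicator stamps each row with a partition id that only increments when a new value arrives, and `first_value` then propagates that opening value across the null days that follow. A condensed, self-contained sketch of the technique against a hypothetical `daily_history(issue_id, date_day, status)` table:

```sql
-- hypothetical input: daily_history(issue_id, date_day, status),
-- where status is null on days the field did not change
with set_values as (

    select
        issue_id,
        date_day,
        status,
        -- a new partition begins each time a non-null value appears
        sum(case when status is null then 0 else 1 end)
            over (partition by issue_id
                order by date_day
                rows unbounded preceding) as status_field_partition
    from daily_history
),

fill_values as (

    select
        issue_id,
        date_day,
        -- every row inherits the value that opened its partition
        first_value(status) over (
            partition by issue_id, status_field_partition
            order by date_day asc
            rows between unbounded preceding and current row) as status
    from set_values
)

select * from fill_values
```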

