Releases: fivetran/dbt_jira
v0.19.0 dbt_jira
PR #133 contains the following updates:
Breaking Changes
- This change is marked as breaking due to its impact on Redshift configurations.
- For Redshift users, comment data aggregated under the `conversations` field in the `jira__issue_enhanced` table is now disabled by default to prevent consistent errors related to Redshift's varchar length limits.
  - If you wish to re-enable `conversations` on Redshift, set the `jira_include_conversations` variable to `true` in your `dbt_project.yml` (see the example below).
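For illustration, a minimal `dbt_project.yml` sketch of that re-enable step might look like the following; the variable name comes from the note above, and placing it under a top-level `vars:` block is one common pattern.

```yaml
# dbt_project.yml -- illustrative sketch only
vars:
  jira_include_conversations: true  # re-enables the conversations field on Redshift
```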
Under the Hood
- Updated the `comment` seed data to ensure conversations are correctly disabled for Redshift by default.
- Renamed the `jira_is_databricks_sql_warehouse` macro to `jira_is_incremental_compatible`, which was updated to return `true` if the Databricks runtime is an all-purpose cluster (previously it checked only for a SQL warehouse runtime) or if the target is any other supported non-Databricks destination.
  - This update addresses Databricks runtimes (e.g., endpoints and external runtimes) that do not support the `insert_overwrite` incremental strategy used in the `jira__daily_issue_field_history` and `int_jira__pivot_daily_field_history` models.
- For Databricks users, the `jira__daily_issue_field_history` and `int_jira__pivot_daily_field_history` models will now apply the incremental strategy only if running on an all-purpose cluster. All other Databricks runtimes will not utilize an incremental strategy.
- Added consistency tests for the `jira__project_enhanced` and `jira__user_enhanced` models.
Full Changelog: v0.18.0...v0.19.0
v0.18.0 dbt_jira
PR #131 contains the following updates:
Breaking Changes
Since the following changes are breaking, a `--full-refresh` after upgrading will be required.
- Changed the partitioning from days to weeks in the following models for BigQuery and Databricks All Purpose Cluster destinations (see the sketch after this list):
  - `int_jira__pivot_daily_field_history`
    - Added field `valid_starting_at_week` for use with the new weekly partition logic.
  - `jira__daily_issue_field_history`
    - Added field `date_week` for use with the new weekly partition logic.
  - This adjustment reduces the total number of partitions, helping avoid partition limit issues in certain warehouses.
- For Databricks All Purpose Cluster destinations, updated the `file_format` to `delta` for improved performance.
- Updated the default materialization of `int_jira__issue_calendar_spine` from incremental to ephemeral to improve performance and maintainability.
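As a rough, hypothetical sketch of the day-to-week shift (not the package's actual configuration; the project and model names below are placeholders), weekly partitioning is commonly achieved by partitioning on a week-truncated date column such as the new `date_week`, so each partition covers seven days and the total partition count drops accordingly.

```yaml
# dbt_project.yml -- hypothetical sketch only, not the package's real config.
# Partitioning on a week-truncated date column yields one partition per week
# rather than one per day, reducing the number of partitions roughly sevenfold.
models:
  your_project:              # placeholder project name
    your_incremental_model:  # placeholder model name
      +materialized: incremental
      +partition_by:
        field: date_week     # week-start date column, per the notes above
        data_type: date
        granularity: day     # each distinct week-start date is a single partition
```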
Documentation Update
- Updated README with the new default of 1 week for the `lookback_window` variable.
Under the Hood
- Replaced the deprecated `dbt.current_timestamp_backcompat()` function with `dbt.current_timestamp()` to ensure all timestamps are captured in UTC for the following models:
  - `int_jira__issue_calendar_spine`
  - `int_jira__issue_join`
  - `jira__issue_enhanced`
- Updated model `int_jira__issue_calendar_spine` to prevent errors during compilation.
- Added consistency tests for the `jira__daily_issue_field_history` and `jira__issue_enhanced` models.
Full Changelog: v0.17.0...v0.18.0
v0.17.0 dbt_jira
PR #127 contains the following updates:
🚨 Breaking Changes 🚨
⚠️ Since the following changes are breaking, a `--full-refresh` after upgrading will be required.
- To reduce storage, updated the default materialization of the upstream staging models to views. (See the dbt_jira_source CHANGELOG for more details.)
Performance improvements (🚨 Breaking Changes 🚨)
- Updated the incremental strategy of the following models to `insert_overwrite` for BigQuery and Databricks All Purpose Cluster destinations and `delete+insert` for all other supported destinations (see the sketch after this list):
  - `int_jira__issue_calendar_spine`
  - `int_jira__pivot_daily_field_history`
  - `jira__daily_issue_field_history`

  At this time, models for Databricks SQL Warehouse destinations are materialized as tables without support for incremental runs.
- Removed intermediate models `int_jira__agg_multiselect_history`, `int_jira__combine_field_histories`, and `int_jira__daily_field_history` by combining them with `int_jira__pivot_daily_field_history`. This reduces the redundancy of the data stored in tables, the number of full scans, and the volume of write operations.
  - Note that if you have previously run this package, these models may still exist in your destination schema; however, they will no longer be updated.
- Updated the default materialization of `int_jira__issue_type_parents` from a table to a view. This model is called only in `int_jira__issue_users`, so a view will reduce storage requirements while not significantly hindering performance.
- For Snowflake and BigQuery destinations, added the following `cluster_by` columns to the configs for incremental models:
  - `int_jira__issue_calendar_spine` clustering on columns `['date_day', 'issue_id']`
  - `int_jira__pivot_daily_field_history` clustering on columns `['valid_starting_on', 'issue_id']`
  - `jira__daily_issue_field_history` clustering on columns `['date_day', 'issue_id']`
- For Databricks All Purpose Cluster destinations, updated incremental model file formats to `parquet` for compatibility with the `insert_overwrite` strategy.
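To make the combination above concrete, here is a hypothetical `dbt_project.yml` sketch of how these configs are typically expressed in dbt; it is illustrative only and not the package's actual configuration (the project and model names are placeholders, while the strategy, file format, and clustering columns come from the notes above).

```yaml
# Hypothetical sketch only -- not the package's actual config.
models:
  your_project:              # placeholder project name
    your_incremental_model:  # placeholder model name
      +materialized: incremental
      +incremental_strategy: insert_overwrite  # BigQuery / Databricks All Purpose Cluster
                                               # (delete+insert on other supported destinations)
      +file_format: parquet                    # Databricks file format, for insert_overwrite compatibility
      +cluster_by: ['date_day', 'issue_id']    # Snowflake / BigQuery clustering keys
```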
Features
- Added a default 3-day look-back to incremental models to accommodate late arriving records. The number of days can be changed by setting the var `lookback_window` in your dbt_project.yml (see the example below). See the Lookback Window section of the README for more details.
- Added macro `jira_lookback` to streamline the lookback window calculation.
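For illustration, overriding the default look-back in `dbt_project.yml` might look like the sketch below; the variable name comes from the note above, and the value shown is an arbitrary example.

```yaml
# dbt_project.yml -- illustrative sketch only; 7 is an arbitrary example value
vars:
  lookback_window: 7  # days of late-arriving records to reprocess (default is 3 in this release)
```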
Under the Hood:
- Added integration testing pipeline for Databricks SQL Warehouse.
- Added macro `jira_is_databricks_sql_warehouse` for detecting if a Databricks target is an All Purpose Cluster or a SQL Warehouse.
- Updated the maintainer pull request template.
Full Changelog: v0.16.0...v0.17.0
v0.16.0 dbt_jira
PR #122 contains the following updates:
🚨 Breaking Changes: Bug Fixes 🚨
- The following fields in the below-mentioned models have been converted to a string datatype (previously integer) to ensure classic Jira projects may link issues to epics. In classic Jira projects the epic reference is in hyperlink form (i.e., "https://url-here/epic-key") as opposed to an ID, so a string datatype is needed to successfully link issues to epics. If you are referencing these fields downstream, be sure to make any changes to account for the new datatype.
  - `revised_parent_issue_id` field within the `int_jira__issue_type_parents` model
  - `parent_issue_id` field within the `jira__issue_enhanced` model
Documentation updates
- Updated README to highlight requirements for using custom fields with the `issue_field_history_columns` variable (a hypothetical example is shown below).
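As a hypothetical illustration of that variable (the field names below are placeholders, not required values), custom fields are passed in as a list in `dbt_project.yml` roughly like this:

```yaml
# dbt_project.yml -- hypothetical sketch; 'Sprint' and 'Story Points' are
# example field names only. See the README for the exact requirements.
vars:
  issue_field_history_columns: ['Sprint', 'Story Points']
```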
Under the Hood
- Included auto-releaser GitHub Actions workflow to automate future releases.
- Updated the maintainer PR template to resemble the most up-to-date format.
- Updated the `field` and `issue_field_history` seed files to ensure we have an updated test case to capture the epic-link scenario for classic Jira environments.
Full Changelog: v0.15.0...v0.16.0
v0.15.0 dbt_jira
PR #108 contains the following updates:
🚨 Breaking Changes 🚨
- Updated the `jira__daily_issue_field_history` model to make sure `issue_type` values are correctly joined into the downstream issue models. This applies only if `issue type` is leveraged within the `issue_field_history_columns` variable.
Note: Please be aware that a `dbt run --full-refresh` will be required after upgrading to this version in order to capture the updates.
Full Changelog: v0.14.0...v0.15.0
v0.14.0 dbt_jira
🚨 Breaking Changes 🚨
- Fixed the `jira__daily_issue_field_history` model to make sure `component` values are correctly joined into the downstream issue models. This applies only if `components` are leveraged within the `issue_field_history_columns` variable. (PR #99)
Note: Please be aware that a `dbt run --full-refresh` will be required after upgrading to this version in order to capture the updates.
Bug Fixes
- Updated the `int_jira__issue_calendar_spine` logic, which now references the `int_jira__field_history_scd` model as an upstream dependency. (PR #104)
- Modified the `open_until` field within the `int_jira__issue_calendar_spine` model to be dependent on the `int_jira__field_history_scd` model's `valid_starting_on` column as opposed to the `issue` table's `updated_at` field. (PR #104)
  - This is required because some resolved issues (outside of the 30-day or `jira_issue_history_buffer` variable window; see the example after this list) were having faulty incremental loads due to untracked fields (fields not tracked via the `issue_field_history_columns` variable, or other fields not identified in the history tables such as Links, Comments, etc.). These untracked changes caused the `updated_at` column to update even though no tracked fields had changed, resulting in a faulty incremental load.
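For reference, a hedged sketch of widening that buffer window in `dbt_project.yml` is shown below; the variable name comes from the note above, the value is an arbitrary example, and the README should be consulted for the variable's default and units.

```yaml
# dbt_project.yml -- illustrative sketch only; 2 is an arbitrary example value.
# Consult the package README for this variable's default and units before changing it.
vars:
  jira_issue_history_buffer: 2
```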
Under the Hood
- Added additional seed rows to ensure the new configuration for components properly runs for all edge cases and compares against normal issue field history fields like `summary`. (PR #104)
- Incorporated the new `fivetran_utils.drop_schemas_automation` macro into the end of each Buildkite integration test job. (PR #98)
- Updated the pull request templates. (PR #98)
Contributors
Full Changelog: v0.13.0...v0.14.0
v0.13.0 dbt_jira
🚨 Breaking Changes 🚨:
PR #95 applies the following changes:
- Added the `status_id` column as a default field for the `jira__daily_issue_field_history` model. This is required to perform an accurate join for the `status` field in incremental runs.
  - Please be aware a `dbt run --full-refresh` will be required following this upgrade.
🎉 Feature Updates 🎉
PR #93 applies the following changes:
- Adds the option to use `field_name` instead of `field_id` as the field-grain for issue field history transformations. Previously, the package would strictly partition and join issue field data using `field_id`. However, this assumed that it was impossible to have fields with the same name in Jira. For instance, it is very easy to create another `Sprint` field, and different Jira users across your organization may choose the wrong or inconsistent version of the field.
  - Thus, to treat these as the same field, set the new `jira_field_grain` variable to `'field_name'` in your `dbt_project.yml` file (see the example after this list). You must run a full refresh to accurately fold this change in.
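Per the note above, the corresponding `dbt_project.yml` entry looks roughly like this (the value `'field_name'` is taken directly from these release notes; a top-level `vars:` block is one common placement):

```yaml
# dbt_project.yml -- switches the field-grain of the issue field history
# transformations from field_id to field_name, per the release notes above.
vars:
  jira_field_grain: 'field_name'
```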
Under the Hood
PR #95 applies the following changes:
- With the addition of the default `status_id` field in the `jira__daily_issue_field_history` model, there is no longer a need to do the extra partitioning to fill values for the `status` field. As such, the `status` partitions were removed in favor of `status_id`. However, in the final CTE of the model we join in the status staging model to populate the appropriate status per the accurate `status_id` for the given day.
Contributors
Full Changelog: v0.12.2...v0.13.0
v0.12.2 dbt_jira
Bug Fixes
- Reverted the changes introduced within v0.12.1, except for Databricks compatibility. Please stay tuned for a future release that will integrate the v0.12.1 changes in a bug-free release. (#88)
v0.12.1 dbt_jira
🚨 There has been an identified bug from this release. 🚨
While we investigate this bug, please downgrade your version to v0.11.0. You can expect an update from our team shortly.
Breaking Changes
- Fixed the `jira__daily_issue_field_history` model to make sure component values are correctly joined into our issue models (#81).
- Please note, a `dbt run --full-refresh` will be required after upgrading to this version in order to capture the updates.
🎉 Feature Updates 🎉
- Databricks compatibility 🧱 (#80).
🌊 Changes 🌊
Full Changelog: v0.12.0...v0.12.1
v0.12.0 dbt_jira
☕ Accidental Release ☕
This was an accidental release with no changes applied. Please see the full changelog below for evidence of no changes. The subsequent v0.12.1 release contains the intended changes that were committed from this release.