I am using this tap to extract s3 inventory reports from an s3 bucket and trying to fine tune the replication method. I have set the replication in the meltano.yml as follows:
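The actual config snippet was not preserved in this page capture. As a hedged illustration only (the extractor name, stream glob, and key column are assumptions, not taken from the original), a Meltano replication override for a tap stream normally lives under `metadata` in `meltano.yml`, along the lines of:

```yaml
plugins:
  extractors:
    - name: tap-s3-csv
      metadata:
        # "*" applies to all streams; a specific stream name can be used instead
        "*":
          replication-method: INCREMENTAL
          replication-key: last_modified_date
```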
However, because the S3 inventory reports are written daily and carry their own timestamp at the file level, the tap seems to default to the overall file modified date rather than the column-level modified date. My desired outcome is to load only the records from the latest report whose `last_modified_date` is greater than the last ELT run, but right now it loads all records/rows from the latest report whenever the report's modified date is greater than the last ELT run. Has anyone run into this? Am I missing a setting to override the file-level timestamp and use the more granular row-level modified date for incremental replication?
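The row-level behavior being asked for can be sketched in plain Python. This is not the tap's actual code, just an illustration of the desired filter: given a bookmark from the previous ELT run, keep only rows whose own `last_modified_date` is newer, instead of taking every row from any file newer than the bookmark.

```python
from datetime import datetime, timezone


def filter_new_rows(rows, bookmark):
    """Yield only rows whose row-level last_modified_date is newer than the bookmark.

    rows: iterable of dicts with an ISO-8601 'last_modified_date' column
    bookmark: timezone-aware datetime of the last successful ELT run
    """
    for row in rows:
        row_ts = datetime.fromisoformat(row["last_modified_date"])
        if row_ts > bookmark:
            yield row


# Example: two rows from the latest daily report, bookmark between them,
# so only the newer row should survive the filter.
rows = [
    {"key": "a", "last_modified_date": "2024-06-01T00:00:00+00:00"},
    {"key": "b", "last_modified_date": "2024-06-03T00:00:00+00:00"},
]
bookmark = datetime(2024, 6, 2, tzinfo=timezone.utc)
new_rows = list(filter_new_rows(rows, bookmark))
```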
@nicholasvk from what I can tell from reading the source code, metadata configuration values are simply ignored by the tap. I am attempting to use a FULL_REFRESH replication method, but it is not honored.