docs(spark): fix incorrect config option (datahub-project#11119)
steffengr authored Oct 31, 2024
1 parent 8b72076 commit 4634bbc
Showing 1 changed file with 1 addition and 1 deletion.
metadata-integration/java/acryl-spark-lineage/README.md (2 changes: 1 addition & 1 deletion)
@@ -184,7 +184,7 @@ information like tokens.
| spark.datahub.platform.s3.path_spec_list | | | List of pathspec per platform |
| spark.datahub.metadata.dataset.include_schema_metadata | false | | Emit dataset schema metadata based on the Spark execution. It is recommended to get schema information from platform-specific DataHub sources instead, as this option is less reliable |
| spark.datahub.flow_name | | | If set, it is used as the DataFlow name; otherwise, the Spark app name is used as the flow name |
- | spark.datahub.partition_regexp_pattern | | | Strip the partition part from the path if the end of the path matches the specified regexp. Example: `year=.*/month=.*/day=.*` |
+ | spark.datahub.file_partition_regexp | | | Strip the partition part from the path if the end of the path matches the specified regexp. Example: `year=.*/month=.*/day=.*` |
| spark.datahub.tags | | | Comma-separated list of tags to attach to the DataFlow |
| spark.datahub.domains | | | Comma-separated list of domain URNs to attach to the DataFlow |
| spark.datahub.stage_metadata_coalescing | | | Normally, metadata is coalesced and sent at the onApplicationEnd event, which is never called on Databricks or on Glue. Enable this on Databricks if you want a coalesced run. |
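For context, here is a minimal sketch of how a few of the options from the table above (including the corrected `spark.datahub.file_partition_regexp` key) might be set from a job, using PySpark's `SparkSession` builder; the same keys can also be passed via `spark-submit --conf`. The app name, flow name, tags, and domain URN are made-up illustration values, and the snippet assumes the DataHub Spark listener and its connection settings are already configured as described earlier in the README.

```python
from pyspark.sql import SparkSession

# Illustrative values only; the DataHub Spark listener and its connection
# settings (covered earlier in the README) are assumed to be configured already.
spark = (
    SparkSession.builder
    .appName("nightly_orders_etl")  # used as the DataFlow name if flow_name is unset
    .config("spark.datahub.flow_name", "nightly_orders_etl")
    # Strip trailing partition directories such as year=2024/month=10/day=31
    .config("spark.datahub.file_partition_regexp", "year=.*/month=.*/day=.*")
    .config("spark.datahub.tags", "etl,nightly")
    .config("spark.datahub.domains", "urn:li:domain:sales")  # hypothetical domain URN
    .getOrCreate()
)
```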
