Default to ZSTD compression when writing Parquet #981

Open · wants to merge 1 commit into base: main
18 changes: 13 additions & 5 deletions python/datafusion/dataframe.py
@@ -620,16 +620,24 @@ def write_csv(self, path: str | pathlib.Path, with_header: bool = False) -> None
     def write_parquet(
         self,
         path: str | pathlib.Path,
-        compression: str = "uncompressed",
+        compression: str = "ZSTD",
         compression_level: int | None = None,
     ) -> None:
         """Execute the :py:class:`DataFrame` and write the results to a Parquet file.

         Args:
-            path: Path of the Parquet file to write.
-            compression: Compression type to use.
-            compression_level: Compression level to use.
-        """
+            path (str | pathlib.Path): The file path to write the Parquet file.
+            compression (str): The compression algorithm to use. Default is "ZSTD".
+            compression_level (int | None): The compression level to use. For ZSTD, the
Contributor

We should document that the compression level is different per algorithm. It's only zstd that has a 1-22 range IIRC.

Contributor Author

Do you mean like

    compression_level (int | None): The compression level to use. For ZSTD, the
                recommended range is 1 to 22, with the default being 3. Higher levels
                provide better compression but slower speed.

+                recommended range is 1 to 22, with the default being 3. Higher levels
+                provide better compression but slower speed.
+        """
+        # default compression level to 3 for ZSTD
+        if compression == "ZSTD":
+            if compression_level is None:
+                compression_level = 3
Contributor

3 seems like an awfully low compression default. We should evaluate what other libraries use as the default compression setting.

Contributor

It might be nice to dig into what DuckDB's defaults are: https://duckdb.org/docs/data/parquet/overview.html#writing-to-parquet-files

Contributor Author (@kosiew, Dec 27, 2024)

> 3 seems like an awfully low compression default. We should evaluate what other libraries use as the default compression setting.

I used the default compression level from the manual by Facebook (the author of zstd): https://facebook.github.io/zstd/zstd_manual.html

I could not find a default in DuckDB's documentation.

Contributor Author

Hi @kylebarron,

Shall we adopt delta-rs' default, and use 4 as the default ZSTD compression level?

+        elif not (1 <= compression_level <= 22):
+            raise ValueError("Compression level for ZSTD must be between 1 and 22")
         self.df.write_parquet(str(path), compression, compression_level)

     def write_json(self, path: str | pathlib.Path) -> None: