Start to use explicit memory limits in the parquet chunked reader #9991
Conversation
Signed-off-by: Robert (Bobby) Evans <[email protected]>
val passReadLimit = if (useSubPageChunked) {
  4 * chunkSizeByteLimit
Can we set a multiplier constant, or a configurable constant, for this instead of hard-coding 4x like this? In the (near) future the chunked ORC reader may benefit from it. A sketch of what that could look like is below.
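As a rough sketch of the configurable-constant idea, following the same `RapidsConf` builder pattern quoted later in this thread (the config key, the `conf(...)` helper placement, and the field name are assumptions for illustration, not part of this PR):

```scala
// Hypothetical RapidsConf entry: makes the 4x pass-read multiplier tunable
// instead of hard-coded, so a future chunked ORC reader could share it.
val CHUNKED_READER_MEMORY_MULTIPLIER =
  conf("spark.rapids.sql.reader.chunked.memoryMultiplier")
    .doc("Multiplier applied to the chunk size byte limit to compute the " +
      "amount of memory a chunked reader is allowed to use per pass.")
    .integerConf
    .createWithDefault(4)

// Hypothetical use at the call site:
// val passReadLimit = rapidsConf.chunkedReaderMemoryMultiplier * chunkSizeByteLimit
```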
I'm not sure where a good place to put this would be. It is a magic number based on our estimate that we will take 4x the target batch size as the amount of memory we are allowed to use. GpuDataProducer is the only place that might be common between them, but that is not a proper place for it. I could create a static object to hold it somewhere, GpuConventionMagicNumbers or something.
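A minimal illustration of the static-object idea described above (the object and field names are only illustrative; this PR keeps the literal `4` inline):

```scala
// Hypothetical holder for cross-reader conventions, so the parquet reader and
// a future chunked ORC reader could share the same pass-read memory estimate.
object GpuConventionMagicNumbers {
  // Estimated memory a chunked reader may use, as a multiple of the
  // target batch size.
  val PASS_READ_LIMIT_MULTIPLIER: Int = 4
}

// Hypothetical use at the call site:
// val passReadLimit =
//   GpuConventionMagicNumbers.PASS_READ_LIMIT_MULTIPLIER * chunkSizeByteLimit
```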
Okay, then let's see what we can do next time with the ORC chunked reader. I'm fine with leaving this as-is for now.
sql-plugin/src/main/scala/com/nvidia/spark/rapids/RapidsConf.scala — outdated comment, resolved
Signed-off-by: Robert (Bobby) Evans <[email protected]>
build
@ttnghia could you please take another look?
.doc("Enable a chunked reader where possible for reading data that is smaller " + | ||
"than the typical row group/page limit. Currently this only works for parquet.") | ||
.booleanConf | ||
.createWithDefault(false) |
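For context, a sketch of the full definition this fragment plausibly belongs to, in the usual `RapidsConf` builder style (the config key and the `conf(...)` helper are assumptions, not quoted from the diff):

```scala
// Hypothetical reconstruction of the sub-page chunked reader flag,
// off by default in this PR.
val CHUNKED_SUBPAGE_READER =
  conf("spark.rapids.sql.reader.chunked.subPage")
    .doc("Enable a chunked reader where possible for reading data that is smaller " +
      "than the typical row group/page limit. Currently this only works for parquet.")
    .booleanConf
    .createWithDefault(false)
```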
We should have a follow-up issue to turn this on by default in 24.04.
This depends on rapidsai/cudf#14360