Remove an unused config shuffle.spillThreads (NVIDIA#11595)
* Remove an unused config shuffle.spillThreads

Signed-off-by: Alessandro Bellina <[email protected]>

* update configs.md

---------

Signed-off-by: Alessandro Bellina <[email protected]>
abellina authored Oct 14, 2024
1 parent 2d3e0ec commit 11964ae
Showing 2 changed files with 0 additions and 7 deletions.
1 change: 0 additions & 1 deletion docs/configs.md
@@ -45,7 +45,6 @@ Name | Description | Default Value | Applicable at
<a name="sql.multiThreadedRead.numThreads"></a>spark.rapids.sql.multiThreadedRead.numThreads|The maximum number of threads on each executor to use for reading small files in parallel. This can not be changed at runtime after the executor has started. Used with COALESCING and MULTITHREADED readers, see spark.rapids.sql.format.parquet.reader.type, spark.rapids.sql.format.orc.reader.type, or spark.rapids.sql.format.avro.reader.type for a discussion of reader types. If it is not set explicitly and spark.executor.cores is set, it will be tried to assign value of `max(MULTITHREAD_READ_NUM_THREADS_DEFAULT, spark.executor.cores)`, where MULTITHREAD_READ_NUM_THREADS_DEFAULT = 20.|20|Startup
<a name="sql.reader.batchSizeBytes"></a>spark.rapids.sql.reader.batchSizeBytes|Soft limit on the maximum number of bytes the reader reads per batch. The readers will read chunks of data until this limit is met or exceeded. Note that the reader may estimate the number of bytes that will be used on the GPU in some cases based on the schema and number of rows in each batch.|2147483647|Runtime
<a name="sql.reader.batchSizeRows"></a>spark.rapids.sql.reader.batchSizeRows|Soft limit on the maximum number of rows the reader will read per batch. The orc and parquet readers will read row groups until this limit is met or exceeded. The limit is respected by the csv reader.|2147483647|Runtime
<a name="sql.shuffle.spillThreads"></a>spark.rapids.sql.shuffle.spillThreads|Number of threads used to spill shuffle data to disk in the background.|6|Runtime
<a name="sql.udfCompiler.enabled"></a>spark.rapids.sql.udfCompiler.enabled|When set to true, Scala UDFs will be considered for compilation as Catalyst expressions|false|Runtime

For more advanced configs, please refer to the [RAPIDS Accelerator for Apache Spark Advanced Configuration](./additional-functionality/advanced_configs.md) page.
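
As an aside, the default-resolution rule documented above for `spark.rapids.sql.multiThreadedRead.numThreads` can be sketched in a few lines of Scala. `resolveNumThreads` below is a hypothetical helper written for illustration, not the plugin's actual code:

```scala
// Hypothetical sketch of the documented fallback for
// spark.rapids.sql.multiThreadedRead.numThreads (not the plugin's real code).
object NumThreadsSketch {
  val MULTITHREAD_READ_NUM_THREADS_DEFAULT = 20

  // explicitSetting: the user-provided value, if any
  // executorCores:   spark.executor.cores, if set
  def resolveNumThreads(explicitSetting: Option[Int], executorCores: Option[Int]): Int =
    explicitSetting.getOrElse {
      executorCores match {
        // Unset, but cores are known: take the larger of the default and the core count.
        case Some(cores) => math.max(MULTITHREAD_READ_NUM_THREADS_DEFAULT, cores)
        // Neither is set: fall back to the plain default.
        case None => MULTITHREAD_READ_NUM_THREADS_DEFAULT
      }
    }
}
```

Under these assumptions, `resolveNumThreads(None, Some(32))` yields 32, while `resolveNumThreads(None, Some(8))` stays at the default of 20.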
6 changes: 0 additions & 6 deletions sql-plugin/src/main/scala/com/nvidia/spark/rapids/RapidsConf.scala
@@ -551,12 +551,6 @@ val GPU_COREDUMP_PIPE_PATTERN = conf("spark.rapids.gpu.coreDump.pipePattern")
    .integerConf
    .createWithDefault(2)

-  val SHUFFLE_SPILL_THREADS = conf("spark.rapids.sql.shuffle.spillThreads")
-    .doc("Number of threads used to spill shuffle data to disk in the background.")
-    .commonlyUsed()
-    .integerConf
-    .createWithDefault(6)
-
  val GPU_BATCH_SIZE_BYTES = conf("spark.rapids.sql.batchSizeBytes")
    .doc("Set the target number of bytes for a GPU batch. Split sizes for input data " +
      "are covered by separate configs. The maximum setting is 2 GB to avoid exceeding the " +
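
For context on the builder chain in the deleted lines: entries in this file pair a key with a doc string, a type, and a default. Below is a minimal, self-contained sketch of that style; `ConfBuilder`, `TypedConfBuilder`, and `ConfEntry` are simplified stand-ins for the plugin's real classes, shown only to make the deleted definition readable on its own:

```scala
// Simplified stand-ins for the plugin's config-builder classes (illustration only).
object ConfSketch {
  case class ConfEntry[T](key: String, doc: String, default: T)

  class TypedConfBuilder[T](key: String, docText: String) {
    def createWithDefault(default: T): ConfEntry[T] = ConfEntry(key, docText, default)
  }

  class ConfBuilder(key: String) {
    private var docText: String = ""
    def doc(d: String): ConfBuilder = { docText = d; this }
    def commonlyUsed(): ConfBuilder = this // no-op marker in this sketch
    def integerConf: TypedConfBuilder[Int] = new TypedConfBuilder[Int](key, docText)
  }

  def conf(key: String): ConfBuilder = new ConfBuilder(key)

  // The entry removed by this commit, rebuilt with the stand-in classes:
  val SHUFFLE_SPILL_THREADS: ConfEntry[Int] =
    conf("spark.rapids.sql.shuffle.spillThreads")
      .doc("Number of threads used to spill shuffle data to disk in the background.")
      .commonlyUsed()
      .integerConf
      .createWithDefault(6)
}
```

Because the config was unused, deleting this definition and its row in docs/configs.md changes no runtime behavior.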
