[DOC] Removing User Guide pages that will be source of truth on docs.nvidia.com (#9362)

* Removing User Guide pages that will be source of truth on docs.nvidia.com

Signed-off-by: mattahrens <[email protected]>

* Updating links in README to point to docs.nvidia.com User Guide

Signed-off-by: mattahrens <[email protected]>

* Fixing broken links in other pages for User Guide updates

Signed-off-by: mattahrens <[email protected]>

* Fixing broken links in supported ops page

Signed-off-by: mattahrens <[email protected]>

* Updating TypeChecks to keep in line with supported_ops.md

Signed-off-by: mattahrens <[email protected]>

---------

Signed-off-by: mattahrens <[email protected]>
mattahrens authored Oct 3, 2023
1 parent 84b3a62 commit d340f2e
Showing 31 changed files with 20 additions and 7,061 deletions.
10 changes: 5 additions & 5 deletions README.md
@@ -7,7 +7,7 @@ via the [RAPIDS](https://rapids.ai) libraries.

Documentation on the current release can be found [here](https://nvidia.github.io/spark-rapids/).

-To get started and try the plugin out use the [getting started guide](./docs/get-started/getting-started.md).
+To get started and try the plugin out use the [getting started guide](https://docs.nvidia.com/spark-rapids/user-guide/latest/getting-started/overview.html).

## Compatibility

@@ -17,7 +17,7 @@ Operator compatibility is documented [here](./docs/compatibility.md)
## Tuning

To get started tuning your job and get the most performance out of it please start with the
-[tuning guide](./docs/tuning-guide.md).
+[tuning guide](https://docs.nvidia.com/spark-rapids/user-guide/latest/tuning-guide.html).

## Configuration

@@ -46,7 +46,7 @@ Tests are described [here](tests/README.md).
## Integration
The RAPIDS Accelerator For Apache Spark does provide some APIs for doing zero copy data
transfer into other GPU enabled applications. It is described
-[here](docs/additional-functionality/ml-integration.md).
+[here](https://docs.nvidia.com/spark-rapids/user-guide/latest/additional-functionality/ml-integration.html).

Currently, we are working with XGBoost to try to provide this integration out of the box.

@@ -59,8 +59,8 @@ access to any of the memory that RMM is holding.
The Qualification and Profiling tools have been moved to
[nvidia/spark-rapids-tools](https://github.com/NVIDIA/spark-rapids-tools) repo.

-Please refer to [Qualification tool documentation](docs/spark-qualification-tool.md)
-and [Profiling tool documentation](docs/spark-profiling-tool.md)
+Please refer to [Qualification tool documentation](https://docs.nvidia.com/spark-rapids/user-guide/latest/spark-qualification-tool.html)
+and [Profiling tool documentation](https://docs.nvidia.com/spark-rapids/user-guide/latest/spark-profiling-tool.html)
for more details on how to use the tools.

## Dependency for External Projects
654 changes: 0 additions & 654 deletions docs/FAQ.md

This file was deleted.

2 changes: 1 addition & 1 deletion docs/additional-functionality/advanced_configs.md
@@ -46,7 +46,7 @@ Name | Description | Default Value | Applicable at
<a name="python.memory.gpu.allocFraction"></a>spark.rapids.python.memory.gpu.allocFraction|The fraction of total GPU memory that should be initially allocated for pooled memory for all the Python workers. It supposes to be less than (1 - $(spark.rapids.memory.gpu.allocFraction)), since the executor will share the GPU with its owning Python workers. Half of the rest will be used if not specified|None|Runtime
<a name="python.memory.gpu.maxAllocFraction"></a>spark.rapids.python.memory.gpu.maxAllocFraction|The fraction of total GPU memory that limits the maximum size of the RMM pool for all the Python workers. It supposes to be less than (1 - $(spark.rapids.memory.gpu.maxAllocFraction)), since the executor will share the GPU with its owning Python workers. when setting to 0 it means no limit.|0.0|Runtime
<a name="python.memory.gpu.pooling.enabled"></a>spark.rapids.python.memory.gpu.pooling.enabled|Should RMM in Python workers act as a pooling allocator for GPU memory, or should it just pass through to CUDA memory allocation directly. When not specified, It will honor the value of config 'spark.rapids.memory.gpu.pooling.enabled'|None|Runtime
<a name="shuffle.enabled"></a>spark.rapids.shuffle.enabled|Enable or disable the RAPIDS Shuffle Manager at runtime. The [RAPIDS Shuffle Manager](rapids-shuffle.md) must already be configured. When set to `false`, the built-in Spark shuffle will be used. |true|Runtime
<a name="shuffle.enabled"></a>spark.rapids.shuffle.enabled|Enable or disable the RAPIDS Shuffle Manager at runtime. The [RAPIDS Shuffle Manager](https://docs.nvidia.com/spark-rapids/user-guide/latest/additional-functionality/rapids-shuffle.html) must already be configured. When set to `false`, the built-in Spark shuffle will be used. |true|Runtime
<a name="shuffle.mode"></a>spark.rapids.shuffle.mode|RAPIDS Shuffle Manager mode. "MULTITHREADED": shuffle file writes and reads are parallelized using a thread pool. "UCX": (requires UCX installation) uses accelerated transports for transferring shuffle blocks. "CACHE_ONLY": use when running a single executor, for short-circuit cached shuffle (for testing purposes).|MULTITHREADED|Startup
<a name="shuffle.multiThreaded.maxBytesInFlight"></a>spark.rapids.shuffle.multiThreaded.maxBytesInFlight|The size limit, in bytes, that the RAPIDS shuffle manager configured in "MULTITHREADED" mode will allow to be deserialized concurrently per task. This is also the maximum amount of memory that will be used per task. This should be set larger than Spark's default maxBytesInFlight (48MB). The larger this setting is, the more compressed shuffle chunks are processed concurrently. In practice, care needs to be taken to not go over the amount of off-heap memory that Netty has available. See https://github.com/NVIDIA/spark-rapids/issues/9153.|134217728|Startup
<a name="shuffle.multiThreaded.reader.threads"></a>spark.rapids.shuffle.multiThreaded.reader.threads|The number of threads to use for reading shuffle blocks per executor in the RAPIDS shuffle manager configured in "MULTITHREADED" mode. There are two special values: 0 = feature is disabled, falls back to Spark built-in shuffle reader; 1 = our implementation of Spark's built-in shuffle reader with extra metrics.|20|Startup
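For context on how the shuffle settings documented above fit together, here is a minimal sketch (not part of this commit) of supplying them when building a SparkSession. The plugin class `com.nvidia.spark.SQLPlugin` is the documented RAPIDS entry point; the app name is illustrative, and the values shown are the documented defaults, not tuning recommendations. Note that `MULTITHREADED` mode assumes the RAPIDS Shuffle Manager itself is already configured per the linked docs.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: wiring the shuffle-related configs from the table above
// into a SparkSession at startup. The RAPIDS Shuffle Manager must already
// be configured (its class name is Spark-version specific; see the docs).
val spark = SparkSession.builder()
  .appName("rapids-shuffle-config-sketch") // illustrative name
  .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
  // Startup-only: selects how shuffle blocks are written and read.
  .config("spark.rapids.shuffle.mode", "MULTITHREADED")
  // Runtime toggle: set to "false" to fall back to Spark's built-in shuffle.
  .config("spark.rapids.shuffle.enabled", "true")
  // Threads per executor for reading shuffle blocks (0 disables the feature).
  .config("spark.rapids.shuffle.multiThreaded.reader.threads", "20")
  .getOrCreate()
```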
149 changes: 0 additions & 149 deletions docs/additional-functionality/delta-lake-support.md

This file was deleted.

45 changes: 0 additions & 45 deletions docs/additional-functionality/filecache.md

This file was deleted.

78 changes: 0 additions & 78 deletions docs/additional-functionality/iceberg-support.md

This file was deleted.
