update versions for 2304 hot fix (#8248)
Signed-off-by: liyuan <[email protected]>
nvliyuan authored May 9, 2023
1 parent c963192 commit 6078f0e
Showing 6 changed files with 21 additions and 20 deletions.
2 changes: 1 addition & 1 deletion docs/demo/Databricks/generate-init-script.ipynb
@@ -3,7 +3,7 @@
{
"cell_type":"code",
"source":[
-"dbutils.fs.mkdirs(\"dbfs:/databricks/init_scripts/\")\n \ndbutils.fs.put(\"/databricks/init_scripts/init.sh\",\"\"\"\n#!/bin/bash\nsudo wget -O /databricks/jars/rapids-4-spark_2.12-23.04.0.jar https://repo1.maven.org/maven2/com/nvidia/rapids-4-spark_2.12/23.04.0/rapids-4-spark_2.12-23.04.0.jar\n\"\"\", True)"
+"dbutils.fs.mkdirs(\"dbfs:/databricks/init_scripts/\")\n \ndbutils.fs.put(\"/databricks/init_scripts/init.sh\",\"\"\"\n#!/bin/bash\nsudo wget -O /databricks/jars/rapids-4-spark_2.12-23.04.1.jar https://repo1.maven.org/maven2/com/nvidia/rapids-4-spark_2.12/23.04.1/rapids-4-spark_2.12-23.04.1.jar\n\"\"\", True)"
],
"metadata":{

12 changes: 6 additions & 6 deletions docs/dev/shims.md
@@ -68,17 +68,17 @@ Using JarURLConnection URLs we create a Parallel World of the current version wi
Spark 3.0.2's URLs:

```text
-jar:file:/home/spark/rapids-4-spark_2.12-23.04.0.jar!/
-jar:file:/home/spark/rapids-4-spark_2.12-23.04.0.jar!/spark3xx-common/
-jar:file:/home/spark/rapids-4-spark_2.12-23.04.0.jar!/spark302/
+jar:file:/home/spark/rapids-4-spark_2.12-23.04.1.jar!/
+jar:file:/home/spark/rapids-4-spark_2.12-23.04.1.jar!/spark3xx-common/
+jar:file:/home/spark/rapids-4-spark_2.12-23.04.1.jar!/spark302/
```

Spark 3.2.0's URLs:

```text
-jar:file:/home/spark/rapids-4-spark_2.12-23.04.0.jar!/
-jar:file:/home/spark/rapids-4-spark_2.12-23.04.0.jar!/spark3xx-common/
-jar:file:/home/spark/rapids-4-spark_2.12-23.04.0.jar!/spark320/
+jar:file:/home/spark/rapids-4-spark_2.12-23.04.1.jar!/
+jar:file:/home/spark/rapids-4-spark_2.12-23.04.1.jar!/spark3xx-common/
+jar:file:/home/spark/rapids-4-spark_2.12-23.04.1.jar!/spark320/
```
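The per-shim URLs above all follow one pattern: the jar root, a shared `spark3xx-common/` world, and one shim-specific world. As a rough sketch (the jar path and shim name are placeholders, not values the plugin requires), the URLs for a given shim can be derived like this:

```shell
# Sketch: derive the Parallel World jar: URLs for a given shim, assuming the
# layout shown above. PLUGIN_JAR and SHIM are illustrative placeholders.
PLUGIN_JAR=/home/spark/rapids-4-spark_2.12-23.04.1.jar
SHIM=spark320

ROOT_URL="jar:file:${PLUGIN_JAR}!/"
COMMON_URL="jar:file:${PLUGIN_JAR}!/spark3xx-common/"
SHIM_URL="jar:file:${PLUGIN_JAR}!/${SHIM}/"

# Print the search order the shim classloader would use.
printf '%s\n' "$ROOT_URL" "$COMMON_URL" "$SHIM_URL"
```

Swapping `SHIM=spark302` reproduces the Spark 3.0.2 list above.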

### Late Inheritance in Public Classes
13 changes: 7 additions & 6 deletions docs/download.md
@@ -18,7 +18,7 @@ cuDF jar, that is either preinstalled in the Spark classpath on all nodes or sub
that uses the RAPIDS Accelerator For Apache Spark. See the [getting-started
guide](https://nvidia.github.io/spark-rapids/Getting-Started/) for more details.

-## Release v23.04.0
+## Release v23.04.1
Hardware Requirements:

The plugin is tested on the following architectures:
@@ -41,9 +41,9 @@ for your hardware's minimum driver version.
*For Cloudera and EMR support, please refer to the
[Distributions](./FAQ.md#which-distributions-are-supported) section of the FAQ.

-### Download v23.04.0
+### Download v23.04.1
* Download the [RAPIDS
-Accelerator for Apache Spark 23.04.0 jar](https://repo1.maven.org/maven2/com/nvidia/rapids-4-spark_2.12/23.04.0/rapids-4-spark_2.12-23.04.0.jar)
+Accelerator for Apache Spark 23.04.1 jar](https://repo1.maven.org/maven2/com/nvidia/rapids-4-spark_2.12/23.04.1/rapids-4-spark_2.12-23.04.1.jar)

This package is built against CUDA 11.8 and all CUDA 11.x versions are supported through [CUDA forward
compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/index.html). It is tested
@@ -52,17 +52,18 @@ do not have CUDA forward compatibility (for example, GeForce), CUDA 11.5 or late
need to ensure the minimum driver (450.80.02) and CUDA toolkit are installed on each Spark node.

### Verify signature
-* Download the [RAPIDS Accelerator for Apache Spark 23.04.0 jar](https://repo1.maven.org/maven2/com/nvidia/rapids-4-spark_2.12/23.04.0/rapids-4-spark_2.12-23.04.0.jar)
-  and [RAPIDS Accelerator for Apache Spark 23.04.0 jars.asc](https://repo1.maven.org/maven2/com/nvidia/rapids-4-spark_2.12/23.04.0/rapids-4-spark_2.12-23.04.0.jar.asc)
+* Download the [RAPIDS Accelerator for Apache Spark 23.04.1 jar](https://repo1.maven.org/maven2/com/nvidia/rapids-4-spark_2.12/23.04.1/rapids-4-spark_2.12-23.04.1.jar)
+  and [RAPIDS Accelerator for Apache Spark 23.04.1 jars.asc](https://repo1.maven.org/maven2/com/nvidia/rapids-4-spark_2.12/23.04.1/rapids-4-spark_2.12-23.04.1.jar.asc)
* Download the [PUB_KEY](https://keys.openpgp.org/[email protected]).
* Import the public key: `gpg --import PUB_KEY`
-* Verify the signature: `gpg --verify rapids-4-spark_2.12-23.04.0.jar.asc rapids-4-spark_2.12-23.04.0.jar`
+* Verify the signature: `gpg --verify rapids-4-spark_2.12-23.04.1.jar.asc rapids-4-spark_2.12-23.04.1.jar`

The expected output of the signature verification:

gpg: Good signature from "NVIDIA Spark (For the signature of spark-rapids release jars) <[email protected]>"
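Put together, the download-and-verify steps above can be scripted roughly as follows. This is a sketch: the Maven URL layout is taken from the links above, and the actual fetch and verify commands are left commented out because they require network access and `gpg`.

```shell
# Sketch of the download-and-verify flow for a release jar. VERSION is the
# only input; the URL layout mirrors the Maven Central links above.
VERSION=23.04.1
BASE_URL="https://repo1.maven.org/maven2/com/nvidia/rapids-4-spark_2.12/${VERSION}"
JAR="rapids-4-spark_2.12-${VERSION}.jar"

# Show the artifact and signature URLs that would be fetched.
echo "jar: ${BASE_URL}/${JAR}"
echo "sig: ${BASE_URL}/${JAR}.asc"

# Uncomment to actually fetch and verify (requires network, wget, and gpg):
# wget "${BASE_URL}/${JAR}" "${BASE_URL}/${JAR}.asc"
# gpg --import PUB_KEY
# gpg --verify "${JAR}.asc" "${JAR}"
```

Changing `VERSION` reproduces the URLs for any other release published with the same layout.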

### Release Notes
+The 23.04.1 release patches a possible driver OOM which can occur on an executor broadcast.
New functionality and performance improvements for this release include:
* Introduces an OOM retry framework for automatic OOM handling in memory-intensive operators, such as joins, aggregates and windows, coalescing, projections, and filters
* Support dynamic repartitioning in large/skewed hash joins
2 changes: 1 addition & 1 deletion docs/get-started/getting-started-databricks.md
@@ -139,7 +139,7 @@ cluster.
```bash
spark.rapids.sql.python.gpu.enabled true
spark.python.daemon.module rapids.daemon_databricks
-spark.executorEnv.PYTHONPATH /databricks/jars/rapids-4-spark_2.12-23.04.0.jar:/databricks/spark/python
+spark.executorEnv.PYTHONPATH /databricks/jars/rapids-4-spark_2.12-23.04.1.jar:/databricks/spark/python
```
Note that since the Python memory pool requires the cudf library, you need to install cudf on
each worker node (`pip install cudf-cu11 --extra-index-url=https://pypi.nvidia.com`) or disable the Python memory pool
4 changes: 2 additions & 2 deletions docs/get-started/getting-started-on-prem.md
@@ -53,13 +53,13 @@ CUDA and will not run on other versions. The jars use a classifier to keep them
- CUDA 11.x => classifier cuda11

For example, here is a sample version of the jar with CUDA 11.x support:
-- rapids-4-spark_2.12-23.04.0-SNAPSHOT-cuda11.jar
+- rapids-4-spark_2.12-23.04.1-SNAPSHOT-cuda11.jar

For simplicity export the location to this jar. This example assumes the sample jar above has
been placed in the `/opt/sparkRapidsPlugin` directory:
```shell
export SPARK_RAPIDS_DIR=/opt/sparkRapidsPlugin
export SPARK_RAPIDS_PLUGIN_JAR=${SPARK_RAPIDS_DIR}/rapids-4-spark_2.12-23.04.0-SNAPSHOT-cuda11.jar
export SPARK_RAPIDS_PLUGIN_JAR=${SPARK_RAPIDS_DIR}/rapids-4-spark_2.12-23.04.1-SNAPSHOT-cuda11.jar
```
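For context, the exported variable is typically handed to `spark-submit` together with the plugin class. This is a hedged sketch based on the getting-started guide: `your-app.jar` is a placeholder, and the launch command itself is commented out since it needs a Spark installation.

```shell
# Sketch: wire the exported plugin jar into a spark-submit invocation.
# com.nvidia.spark.SQLPlugin is the plugin entry point; your-app.jar is
# a placeholder for the application being accelerated.
export SPARK_RAPIDS_DIR=/opt/sparkRapidsPlugin
export SPARK_RAPIDS_PLUGIN_JAR=${SPARK_RAPIDS_DIR}/rapids-4-spark_2.12-23.04.1-SNAPSHOT-cuda11.jar

# spark-submit \
#   --conf spark.plugins=com.nvidia.spark.SQLPlugin \
#   --jars "${SPARK_RAPIDS_PLUGIN_JAR}" \
#   your-app.jar
echo "${SPARK_RAPIDS_PLUGIN_JAR}"
```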

## Install the GPU Discovery Script
@@ -58,13 +58,13 @@ import org.apache.spark.util.MutableURLClassLoader
E.g., Spark 3.2.0 Shim will use
-jar:file:/home/spark/rapids-4-spark_2.12-23.04.0.jar!/spark3xx-common/
-jar:file:/home/spark/rapids-4-spark_2.12-23.04.0.jar!/spark320/
+jar:file:/home/spark/rapids-4-spark_2.12-23.04.1.jar!/spark3xx-common/
+jar:file:/home/spark/rapids-4-spark_2.12-23.04.1.jar!/spark320/
Spark 3.1.1 will use
-jar:file:/home/spark/rapids-4-spark_2.12-23.04.0.jar!/spark3xx-common/
-jar:file:/home/spark/rapids-4-spark_2.12-23.04.0.jar!/spark311/
+jar:file:/home/spark/rapids-4-spark_2.12-23.04.1.jar!/spark3xx-common/
+jar:file:/home/spark/rapids-4-spark_2.12-23.04.1.jar!/spark311/
Using these Jar URLs allows referencing different bytecode produced from identical sources
by incompatible Scala / Spark dependencies.
