Merge branch-24.08 into main #115

Closed
wants to merge 98 commits into main from branch-24.08
98 commits
f9076a0
Init version 24.08.0-SNAPSHOT
NvTimLiu May 22, 2024
02a70d4
Merge pull request #10879 from NVIDIA/branch-24.06
nvauto May 23, 2024
0df3d05
Merge pull request #10883 from NVIDIA/branch-24.06
nvauto May 23, 2024
800ca6b
Merge pull request #10885 from NVIDIA/branch-24.06
nvauto May 24, 2024
ec9221f
Merge pull request #10888 from NVIDIA/branch-24.06
nvauto May 24, 2024
8a13793
Merge pull request #10926 from NVIDIA/branch-24.06
nvauto May 28, 2024
4e4be54
Merge pull request #10927 from NVIDIA/branch-24.06
nvauto May 28, 2024
02f4595
append zpuller to authorized user of blossom-ci (#10929)
zpuller May 28, 2024
2e8d43f
Merge pull request #10932 from NVIDIA/branch-24.06
nvauto May 28, 2024
2dce03d
Merge pull request #10935 from NVIDIA/branch-24.06
nvauto May 28, 2024
69cca07
Merge pull request #10936 from NVIDIA/branch-24.06
nvauto May 28, 2024
6086cac
Merge pull request #10937 from NVIDIA/branch-24.06
nvauto May 29, 2024
35b1575
Merge pull request #10939 from NVIDIA/branch-24.06
nvauto May 29, 2024
f0b13ed
Fixed Databricks build [databricks] (#10933)
razajafri May 29, 2024
dfcde72
Add classloader diagnostics to initShuffleManager error message (#10871)
zpuller May 29, 2024
fe69470
Add Support for Multiple Filtering Keys for Subquery Broadcast [datab…
razajafri May 30, 2024
e23bf38
Unarchive Spark test jar for spark.read(ability) (#10946)
gerashegalov May 30, 2024
a7cdaa9
Added Shim for BatchScanExec to Support Spark 4.0 [databricks] (#10944)
razajafri May 30, 2024
499a45b
[Spark 4.0] Account for `CommandUtils.uncacheTableOrView` signature c…
mythrocks May 30, 2024
4024ef6
GpuInsertIntoHiveTable supports parquet format (#10912)
firestarman May 31, 2024
822ad9b
[Spark 4.0] Account for `PartitionedFileUtil.splitFiles` signature ch…
mythrocks May 31, 2024
2a86bb5
Change dependency version to 24.08.0-SNAPSHOT (#10949)
NvTimLiu May 31, 2024
bbdcac0
Merge pull request #10954 from NVIDIA/branch-24.06
nvauto May 31, 2024
2977c14
Add Support for Renaming of PythonMapInArrow [databricks] (#10931)
razajafri May 31, 2024
1be42d4
fix build errors for 4.0 shim (#10952)
firestarman Jun 1, 2024
5750ace
Add new blossom-ci allowed user (#10959)
pxLi Jun 3, 2024
8d3a8ce
Add default value for REF of premerge jenkinsfile to avoid bad overwr…
pxLi Jun 4, 2024
4707406
Use ErrorClass to Throw AnalysisException [databricks] (#10830)
razajafri Jun 4, 2024
3111e2b
Move Support for `RaiseError` to a Shim Excluding Spark 4.0.0 [databr…
razajafri Jun 4, 2024
149e0d5
Fix a hive write test failure (#10958)
firestarman Jun 5, 2024
d514999
Speed up the integration tests by running them in parallel on the Dat…
NvTimLiu Jun 5, 2024
4d3b346
More compilation fixes for Spark 4.0.0 [databricks] (#10978)
razajafri Jun 5, 2024
c7129f5
Add rapids configs to enable GPU running (#10963)
GaryShen2008 Jun 7, 2024
18c2579
Fix Spark UT issues in RapidsDataFrameAggregateSuite (#10943)
thirtiseven Jun 8, 2024
9030b13
Addressing the Spark change of renaming the named parameter (#10992)
razajafri Jun 8, 2024
d7b6f55
Merge branch-24.06 into branch-24.08
NvTimLiu Jun 10, 2024
586e6f4
Increase the console output of buildall upon build failures (#10998)
AdvaitChandorkar07 Jun 10, 2024
416fa60
Merge branch branch-24.06 into branch-24.08
NvTimLiu Jun 11, 2024
f47c205
Allow ProjectExec fall fallback to CPU for 350 (#11032)
firestarman Jun 11, 2024
1ca4c44
Update blossom-ci ACL to secure format (#11036)
pxLi Jun 11, 2024
4e9b961
Append new authorized user to blossom-ci whitelist [skip ci] (#11040)
Feng-Jiang28 Jun 11, 2024
9f73672
Merge pull request #11035 from NvTimLiu/fix-auto-merge-conflict-11034
jlowe Jun 11, 2024
2cf5934
Rewrite multiple literal choice regex to multiple contains in rlike (…
thirtiseven Jun 12, 2024
d9686d4
Add in the ability to fingerprint JSON columns (#11002)
revans2 Jun 12, 2024
73d76cf
Concat() Exception bug fix (#11039)
Feng-Jiang28 Jun 13, 2024
f355af5
Merge pull request #11055 from NVIDIA/branch-24.06
nvauto Jun 13, 2024
05187aa
Merge pull request #11057 from NVIDIA/branch-24.06
nvauto Jun 13, 2024
cfd8f00
Revert "Add in the ability to fingerprint JSON columns (#11002)"
revans2 Jun 13, 2024
900ae6f
Merge pull request #11059 from revans2/revert_json_datagen
revans2 Jun 13, 2024
531a9f5
Add in the ability to fingerprint JSON columns [databricks] (#11060)
revans2 Jun 13, 2024
eb1549c
`binary-dedupe` changes for Spark 4.0.0 [databricks] (#10993)
razajafri Jun 13, 2024
356d5a1
[FEA] Increase parallelism of deltalake test on databricks (#11051)
liurenjie1024 Jun 14, 2024
599ae17
fix flaky array_item test failures (#11054)
binmahone Jun 14, 2024
2f3c0c2
Calculate parallelism to speed up pre-merge CI (#11046)
NvTimLiu Jun 14, 2024
6eb854d
WAR numpy2 failed fastparquet compatibility issue (#11072)
pxLi Jun 17, 2024
0952dea
Fallback non-UTC TimeZoneAwareExpression with zoneId [databricks] (#1…
thirtiseven Jun 18, 2024
7bac3a6
[FEA] Introduce low shuffle merge. (#10979)
liurenjie1024 Jun 19, 2024
4b44903
Support bucketing write for GPU (#10957)
firestarman Jun 24, 2024
18ec4b2
upgrade actions version (#11086)
YanxuanLiu Jun 25, 2024
86a905a
Fixed Failing tests in arithmetic_ops_tests for Spark 4.0.0 [databric…
razajafri Jun 25, 2024
7a8690f
fix duplicate counted metrics like op time for GpuCoalesceBatches (#1…
binmahone Jun 25, 2024
b3b5b5e
Add GpuBucketingUtils shim to Spark 4.0.0 (#11092)
razajafri Jun 25, 2024
6455396
Improve the diagnostics for 'conv' fallback explain (#11076)
jihoonson Jun 25, 2024
34e6bc8
Disable ANSI mode for window function tests [databricks] (#11073)
mythrocks Jun 26, 2024
3cb54c4
Fix some test issues in Spark UT and keep RapidsTestSettings update-t…
thirtiseven Jun 27, 2024
9dafc54
exclude a case based on JDK version (#11083)
thirtiseven Jun 27, 2024
3b6c5cd
Replaced spark3xx-common references to spark-shared [databricks] (#11…
razajafri Jun 28, 2024
7dc52bc
Fixed some cast_tests (#11049)
razajafri Jun 28, 2024
dd62000
Fixed array_tests for Spark 4.0.0 [databricks] (#11048)
razajafri Jun 28, 2024
f954026
Add a heuristic to skip second or third agg pass (#10950)
binmahone Jun 29, 2024
2498204
Support regex patterns with brackets when rewriting to PrefixRange pa…
thirtiseven Jun 29, 2024
f56fe2c
Fix match error in RapidsShuffleIterator.scala [scala2.13] (#11115)
xieshuaihu Jul 1, 2024
850365c
Spark 4: Handle ANSI mode in sort_test.py (#11099)
mythrocks Jul 1, 2024
9bb295a
Introduce LORE framework. (#11084)
liurenjie1024 Jul 2, 2024
ba64999
Update Scala2.13 premerge CI against JDK17 (#11117)
NvTimLiu Jul 2, 2024
e92cbd2
Profiler: Disable collecting async allocation events by default (#10965)
jlowe Jul 2, 2024
6cd094e
Dataproc serverless test fixes (#11043)
NVnavkumar Jul 3, 2024
5635fd4
Fix issue with DPP and AQE on reused broadcast exchanges [databricks]…
revans2 Jul 3, 2024
6fa51ba
Fix miscellaneous integ tests for Spark 4 [databricks] (#11097)
mythrocks Jul 3, 2024
b98b03f
Fix test_window_group_limits_fallback. (#11133)
mythrocks Jul 3, 2024
c592d73
Improve MetricsSuite to allow more gc jitter [databricks] (#11139)
binmahone Jul 4, 2024
2ffaf94
Add `HiveHash` support on GPU (#11094)
firestarman Jul 5, 2024
c49d693
Handle the change for UnaryPositive now extending RuntimeReplaceable …
razajafri Jul 5, 2024
78f0403
Update fastparquet to 2024.5.0 for numpy2 compatibility [databricks] …
pxLi Jul 8, 2024
7894d51
upgrade ucx to 1.17.0 (#11147)
zpuller Jul 8, 2024
d804188
Fix the test error of bucketed write for non-utc (#11151)
firestarman Jul 8, 2024
18babed
Fix ANSI mode failures in subquery_test.py [databricks] (#11102)
mythrocks Jul 8, 2024
a056f16
Fix LORE dump oom. (#11153)
liurenjie1024 Jul 9, 2024
6f36d35
Fix batch splitting for partition column size on row-count-only batch…
jlowe Jul 9, 2024
29904a3
Add deletion vector metrics for low shuffle merge. (#11132)
liurenjie1024 Jul 9, 2024
befb3a5
fix the bucketed write error for non-utc cases (#11164)
firestarman Jul 10, 2024
aede72f
Coalesce batches after a logical coalesce operation (#11126)
revans2 Jul 10, 2024
e9d097f
Fix some GpuBroadcastToRowExec by not dropping columns [databricks] (…
revans2 Jul 11, 2024
451463f
Case when performance improvement: reduce the `copy_if_else` [databri…
res-life Jul 12, 2024
be34c6a
Drop spark31x shims [databricks] (#11159)
NvTimLiu Jul 12, 2024
3c89a31
Avoid listFiles or inputFiles on relations with static partitioning (…
jlowe Jul 12, 2024
44b2b92
Merge branch-24.08 into main
nvauto Jul 14, 2024
98917eb
Change version to 24.08.0
nvauto Jul 14, 2024
85 changes: 45 additions & 40 deletions .github/workflows/blossom-ci.yml
@@ -33,44 +33,49 @@ jobs:
args: ${{ env.args }}

# This job only runs for pull request comments
if: contains( '\
abellina,\
anfeng,\
firestarman,\
GaryShen2008,\
jlowe,\
kuhushukla,\
mythrocks,\
nartal1,\
nvdbaranec,\
NvTimLiu,\
razajafri,\
revans2,\
rwlee,\
sameerz,\
tgravescs,\
wbo4958,\
wjxiz1992,\
sperlingxx,\
hyperbolic2346,\
gerashegalov,\
ttnghia,\
nvliyuan,\
res-life,\
HaoYang670,\
NVnavkumar,\
amahussein,\
mattahrens,\
YanxuanLiu,\
cindyyuanjiang,\
thirtiseven,\
winningsix,\
viadea,\
yinqingh,\
parthosa,\
liurenjie1024,\
binmahone,\
', format('{0},', github.actor)) && github.event.comment.body == 'build'
if: |
github.event.comment.body == 'build' &&
(
github.actor == 'abellina' ||
github.actor == 'anfeng' ||
github.actor == 'firestarman' ||
github.actor == 'GaryShen2008' ||
github.actor == 'jlowe' ||
github.actor == 'kuhushukla' ||
github.actor == 'mythrocks' ||
github.actor == 'nartal1' ||
github.actor == 'nvdbaranec' ||
github.actor == 'NvTimLiu' ||
github.actor == 'razajafri' ||
github.actor == 'revans2' ||
github.actor == 'rwlee' ||
github.actor == 'sameerz' ||
github.actor == 'tgravescs' ||
github.actor == 'wbo4958' ||
github.actor == 'wjxiz1992' ||
github.actor == 'sperlingxx' ||
github.actor == 'hyperbolic2346' ||
github.actor == 'gerashegalov' ||
github.actor == 'ttnghia' ||
github.actor == 'nvliyuan' ||
github.actor == 'res-life' ||
github.actor == 'HaoYang670' ||
github.actor == 'NVnavkumar' ||
github.actor == 'amahussein' ||
github.actor == 'mattahrens' ||
github.actor == 'YanxuanLiu' ||
github.actor == 'cindyyuanjiang' ||
github.actor == 'thirtiseven' ||
github.actor == 'winningsix' ||
github.actor == 'viadea' ||
github.actor == 'yinqingh' ||
github.actor == 'parthosa' ||
github.actor == 'liurenjie1024' ||
github.actor == 'binmahone' ||
github.actor == 'zpuller' ||
github.actor == 'pxLi' ||
github.actor == 'Feng-Jiang28'
)
steps:
- name: Check if comment is issued by authorized person
run: blossom-ci
@@ -85,15 +90,15 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
repository: ${{ fromJson(needs.Authorization.outputs.args).repo }}
ref: ${{ fromJson(needs.Authorization.outputs.args).ref }}
lfs: 'true'

# repo specific steps
- name: Setup java
uses: actions/setup-java@v3
uses: actions/setup-java@v4
with:
distribution: adopt
java-version: 8
29 changes: 13 additions & 16 deletions CONTRIBUTING.md
@@ -50,11 +50,11 @@ mvn verify

After a successful build, the RAPIDS Accelerator jar will be in the `dist/target/` directory.
This will build the plugin for a single version of Spark. By default, this is Apache Spark
3.1.1. To build against other versions of Spark you use the `-Dbuildver=XXX` command line option
to Maven. For instance to build Spark 3.1.1 you would use:
3.2.0. To build against other versions of Spark you use the `-Dbuildver=XXX` command line option
to Maven. For instance to build Spark 3.2.0 you would use:

```shell script
mvn -Dbuildver=311 verify
mvn -Dbuildver=320 verify
```
You can find all available build versions in the top level pom.xml file. If you are building
for Databricks then you should use the `jenkins/databricks/build.sh` script and modify it for
@@ -110,17 +110,14 @@ If you want to create a jar with multiple versions we have the following options
3. Build for all Apache Spark versions, CDH and Databricks with no SNAPSHOT versions of Spark, only released. Use `-PnoSnaphsotsWithDatabricks`.
4. Build for all Apache Spark versions, CDH and Databricks including SNAPSHOT versions of Spark we have supported for. Use `-PsnapshotsWithDatabricks`
5. Build for an arbitrary combination of comma-separated build versions using `-Dincluded_buildvers=<CSV list of build versions>`.
E.g., `-Dincluded_buildvers=312,330`
E.g., `-Dincluded_buildvers=320,330`

You must first build each of the versions of Spark and then build one final time using the profile for the option you want.

You can also install some manually and build a combined jar. For instance to build non-snapshot versions:

```shell script
mvn clean
mvn -Dbuildver=311 install -Drat.skip=true -DskipTests
mvn -Dbuildver=312 install -Drat.skip=true -DskipTests
mvn -Dbuildver=313 install -Drat.skip=true -DskipTests
mvn -Dbuildver=320 install -Drat.skip=true -DskipTests
mvn -Dbuildver=321 install -Drat.skip=true -DskipTests
mvn -Dbuildver=321cdh install -Drat.skip=true -DskipTests
@@ -130,15 +127,15 @@ mvn -pl dist -PnoSnapshots package -DskipTests
Verify that shim-specific classes are hidden from a conventional classloader.

```bash
$ javap -cp dist/target/rapids-4-spark_2.12-24.06.0-cuda11.jar com.nvidia.spark.rapids.shims.SparkShimImpl
$ javap -cp dist/target/rapids-4-spark_2.12-24.08.0-cuda11.jar com.nvidia.spark.rapids.shims.SparkShimImpl
Error: class not found: com.nvidia.spark.rapids.shims.SparkShimImpl
```

However, its bytecode can be loaded if prefixed with `spark3XY` not contained in the package name

```bash
$ javap -cp dist/target/rapids-4-spark_2.12-24.06.0-cuda11.jar spark320.com.nvidia.spark.rapids.shims.SparkShimImpl | head -2
Warning: File dist/target/rapids-4-spark_2.12-24.06.0-cuda11.jar(/spark320/com/nvidia/spark/rapids/shims/SparkShimImpl.class) does not contain class spark320.com.nvidia.spark.rapids.shims.SparkShimImpl
$ javap -cp dist/target/rapids-4-spark_2.12-24.08.0-cuda11.jar spark320.com.nvidia.spark.rapids.shims.SparkShimImpl | head -2
Warning: File dist/target/rapids-4-spark_2.12-24.08.0-cuda11.jar(/spark320/com/nvidia/spark/rapids/shims/SparkShimImpl.class) does not contain class spark320.com.nvidia.spark.rapids.shims.SparkShimImpl
Compiled from "SparkShims.scala"
public final class com.nvidia.spark.rapids.shims.SparkShimImpl {
```
@@ -150,9 +147,9 @@ There is a build script `build/buildall` that automates the local build process.

By default, it builds everything that is needed to create a distribution jar for all released (noSnapshots) Spark versions except for Databricks. Other profiles that you can pass using `--profile=<distribution profile>` include
- `snapshots` that includes all released (noSnapshots) and snapshots Spark versions except for Databricks
- `minimumFeatureVersionMix` that currently includes 321cdh, 312, 320, 330 is recommended for catching incompatibilities already in the local development cycle
- `minimumFeatureVersionMix` that currently includes 321cdh, 320, 330 is recommended for catching incompatibilities already in the local development cycle

For initial quick iterations we can use `--profile=<buildver>` to build a single-shim version. e.g., `--profile=311` for Spark 3.1.1.
For initial quick iterations we can use `--profile=<buildver>` to build a single-shim version. e.g., `--profile=320` for Spark 3.2.0.

The option `--module=<module>` allows to limit the number of build steps. When iterating, we often don't have the need for the entire build. We may be interested in building everything necessary just to run integration tests (`--module=integration_tests`), or we may want to just rebuild the distribution jar (`--module=dist`)
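
As a quick illustration of how these flags combine, here is a minimal iteration sketch using only the options described above (the `330` shim is just an example; substitute whatever build version you are targeting):

```bash
# First pass: build only what is needed to run the integration tests against a single shim
./build/buildall --profile=330 --module=integration_tests

# Later iterations: rebuild just the distribution jar for the same shim
./build/buildall --profile=330 --module=dist
```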

Expand Down Expand Up @@ -181,7 +178,7 @@ mvn package -pl dist -am -Dbuildver=340 -DallowConventionalDistJar=true
Verify `com.nvidia.spark.rapids.shims.SparkShimImpl` is conventionally loadable:

```bash
$ javap -cp dist/target/rapids-4-spark_2.12-24.06.0-cuda11.jar com.nvidia.spark.rapids.shims.SparkShimImpl | head -2
$ javap -cp dist/target/rapids-4-spark_2.12-24.08.0-cuda11.jar com.nvidia.spark.rapids.shims.SparkShimImpl | head -2
Compiled from "SparkShims.scala"
public final class com.nvidia.spark.rapids.shims.SparkShimImpl {
```
@@ -201,7 +198,7 @@ NOTE: Build process does not require an ARM machine, so if you want to build the
on X86 machine, please also add `-DskipTests` in commands.

```bash
mvn clean verify -Dbuildver=311 -Parm64
mvn clean verify -Dbuildver=320 -Parm64
```
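
For instance, when cross-building the arm64 profile on an x86 host, the same command with tests skipped (a sketch based on the note above) would be:

```bash
# arm64 build on an x86 machine; tests are skipped because they cannot run here
mvn clean verify -Dbuildver=320 -Parm64 -DskipTests
```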

### Iterative development during local testing
@@ -377,7 +374,7 @@ the symlink `.bloop` to point to the corresponding directory `.bloop-spark3XY`

Example usage:
```Bash
./build/buildall --generate-bloop --profile=311,330
./build/buildall --generate-bloop --profile=320,330
rm -vf .bloop
ln -s .bloop-spark330 .bloop
```
@@ -414,7 +411,7 @@ Install [Scala Metals extension](https://scalameta.org/metals/docs/editors/vscod
either locally or into a Remote-SSH extension destination depending on your target environment.
When your project folder is open in VS Code, it may prompt you to import Maven project.
IMPORTANT: always decline with "Don't ask again", otherwise it will overwrite the Bloop projects
generated with the default `311` profile. If you need to use a different profile, always rerun the
generated with the default `320` profile. If you need to use a different profile, always rerun the
command above manually. When regenerating projects it's recommended to proceed to Metals
"Build commands" View, and click:
1. "Restart build server"
2 changes: 1 addition & 1 deletion README.md
@@ -73,7 +73,7 @@ as a `provided` dependency.
<dependency>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark_2.12</artifactId>
<version>24.06.0</version>
<version>24.08.0</version>
<scope>provided</scope>
</dependency>
```
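
At run time the plugin jar is normally passed to Spark directly rather than bundled with the application. A minimal launch sketch (jar name taken from the build output shown earlier; the config names come from the plugin's standard setup docs, not from this diff) might look like:

```bash
# Hypothetical local run; adjust the jar path and Spark settings for your cluster
spark-shell \
  --jars dist/target/rapids-4-spark_2.12-24.08.0-cuda11.jar \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.rapids.sql.enabled=true
```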
78 changes: 7 additions & 71 deletions aggregator/pom.xml
@@ -22,13 +22,13 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-jdk-profiles_2.12</artifactId>
<version>24.06.0</version>
<version>24.08.0</version>
<relativePath>../jdk-profiles/pom.xml</relativePath>
</parent>
<artifactId>rapids-4-spark-aggregator_2.12</artifactId>
<name>RAPIDS Accelerator for Apache Spark Aggregator</name>
<description>Creates an aggregated shaded package of the RAPIDS plugin for Apache Spark</description>
<version>24.06.0</version>
<version>24.08.0</version>

<properties>
<rapids.module>aggregator</rapids.module>
@@ -94,6 +94,10 @@
<pattern>com.google.flatbuffers</pattern>
<shadedPattern>${rapids.shade.package}.com.google.flatbuffers</shadedPattern>
</relocation>
<relocation>
<pattern>org.roaringbitmap</pattern>
<shadedPattern>${rapids.shade.package}.org.roaringbitmap</shadedPattern>
</relocation>
</relocations>
<filters>
<filter>
@@ -248,79 +252,11 @@

<profiles>
<profile>
<id>release311</id>
<id>release320</id>
<activation>
<!-- #if scala-2.12 -->
<activeByDefault>true</activeByDefault>
<!-- #endif scala-2.12 -->
<property>
<name>buildver</name>
<value>311</value>
</property>
</activation>
<dependencies>
<dependency>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-delta-stub_${scala.binary.version}</artifactId>
<version>${project.version}</version>
<classifier>${spark.version.classifier}</classifier>
</dependency>
</dependencies>
</profile>
<profile>
<id>release312</id>
<activation>
<property>
<name>buildver</name>
<value>312</value>
</property>
</activation>
<dependencies>
<dependency>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-delta-stub_${scala.binary.version}</artifactId>
<version>${project.version}</version>
<classifier>${spark.version.classifier}</classifier>
</dependency>
</dependencies>
</profile>
<profile>
<id>release313</id>
<activation>
<property>
<name>buildver</name>
<value>313</value>
</property>
</activation>
<dependencies>
<dependency>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-delta-stub_${scala.binary.version}</artifactId>
<version>${project.version}</version>
<classifier>${spark.version.classifier}</classifier>
</dependency>
</dependencies>
</profile>
<profile>
<id>release314</id>
<activation>
<property>
<name>buildver</name>
<value>314</value>
</property>
</activation>
<dependencies>
<dependency>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-delta-stub_${scala.binary.version}</artifactId>
<version>${project.version}</version>
<classifier>${spark.version.classifier}</classifier>
</dependency>
</dependencies>
</profile>
<profile>
<id>release320</id>
<activation>
<property>
<name>buildver</name>
<value>320</value>
2 changes: 1 addition & 1 deletion api_validation/README.md
@@ -21,7 +21,7 @@ cd api_validation
sh auditAllVersions.sh

// To run script on particular version we can use profile
mvn scala:run -P spark311
mvn scala:run -P spark320
```

# Output
4 changes: 2 additions & 2 deletions api_validation/auditAllVersions.sh
@@ -1,5 +1,5 @@
#!/bin/bash
# Copyright (c) 2020-2022, NVIDIA CORPORATION.
# Copyright (c) 2020-2024, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -14,4 +14,4 @@
# limitations under the License.
set -ex

mvn scala:run -P spark311
mvn scala:run -P spark320
4 changes: 2 additions & 2 deletions api_validation/pom.xml
@@ -22,11 +22,11 @@
<parent>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-shim-deps-parent_2.12</artifactId>
<version>24.06.0</version>
<version>24.08.0</version>
<relativePath>../shim-deps/pom.xml</relativePath>
</parent>
<artifactId>rapids-4-spark-api-validation_2.12</artifactId>
<version>24.06.0</version>
<version>24.08.0</version>

<properties>
<rapids.module>api_validation</rapids.module>
@@ -1,5 +1,5 @@
/*
* Copyright (c) 2020-2023, NVIDIA CORPORATION.
* Copyright (c) 2020-2024, NVIDIA CORPORATION.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -69,7 +69,7 @@ object ApiValidation extends Logging {
val gpuKeys = gpuExecs.keys
var printNewline = false

val sparkToShimMap = Map("3.1.1" -> "spark311")
val sparkToShimMap = Map("3.2.0" -> "spark320")
val sparkVersion = ShimLoader.getShimVersion.toString
val shimVersion = sparkToShimMap(sparkVersion)
