forked from NVIDIA/spark-rapids
Merge branch-24.04 into main [skip ci] #55
Merged
Conversation
[auto-merge] branch-24.02 to branch-24.04 [skip ci] [bot] (×5)
Signed-off-by: Tim Liu <[email protected]>
[auto-merge] branch-24.02 to branch-24.04 [skip ci] [bot] (×17)
Signed-off-by: Robert (Bobby) Evans <[email protected]>
Fixes NVIDIA#10256: bump the dependency version to 24.04.0-SNAPSHOT.
Signed-off-by: Tim Liu <[email protected]>
[auto-merge] branch-24.02 to branch-24.04 [skip ci] [bot] (×2)
Signed-off-by: Jason Lowe <[email protected]>
[auto-merge] branch-24.02 to branch-24.04 [skip ci] [bot] (×2)
Signed-off-by: Tim Liu <[email protected]>
* Distinct inner join
  Signed-off-by: Jason Lowe <[email protected]>
* Distinct left join
  Signed-off-by: Jason Lowe <[email protected]>
* Update to new API
* Fix test
---------
Signed-off-by: Jason Lowe <[email protected]>
Signed-off-by: Partho Sarthi <[email protected]>
…ment (NVIDIA#10564)

* WIP
  Signed-off-by: Gera Shegalov <[email protected]>
* WIP
  Signed-off-by: Gera Shegalov <[email protected]>
* Enable specifying the pytest using file_or_dir args

  ```bash
  TEST_PARALLEL=0 \
  SPARK_HOME=~/dist/spark-3.1.1-bin-hadoop3.2 \
  TEST_FILE_OR_DIR=~/gits/NVIDIA/spark-rapids/integration_tests/src/main/python/arithmetic_ops_test.py::test_addition \
  ./integration_tests/run_pyspark_from_build.sh --collect-only

  <Module src/main/python/arithmetic_ops_test.py>
    <Function test_addition[Byte]>
    <Function test_addition[Short]>
    <Function test_addition[Integer]>
    <Function test_addition[Long]>
    <Function test_addition[Float]>
    <Function test_addition[Double]>
    <Function test_addition[Decimal(7,3)]>
    <Function test_addition[Decimal(12,2)]>
    <Function test_addition[Decimal(18,0)]>
    <Function test_addition[Decimal(20,2)]>
    <Function test_addition[Decimal(30,2)]>
    <Function test_addition[Decimal(36,5)]>
    <Function test_addition[Decimal(38,10)]>
    <Function test_addition[Decimal(38,0)]>
    <Function test_addition[Decimal(7,7)]>
    <Function test_addition[Decimal(7,-3)]>
    <Function test_addition[Decimal(36,-5)]>
    <Function test_addition[Decimal(38,-10)]>
  ```

  Signed-off-by: Gera Shegalov <[email protected]>
  Co-authored-by: Raza Jafri <[email protected]>
* Changing to TESTS=module::method
  Signed-off-by: Gera Shegalov <[email protected]>
---------
Signed-off-by: Gera Shegalov <[email protected]>
Co-authored-by: Raza Jafri <[email protected]>
…VIDIA#10562)

* Fix test_spark_from_json_date_with_format when run in a non-UTC TZ
  Signed-off-by: Robert (Bobby) Evans <[email protected]>
* Copyright year
---------
Signed-off-by: Robert (Bobby) Evans <[email protected]>
…0542)

Signed-off-by: Andy Grove <[email protected]>
Signed-off-by: Robert (Bobby) Evans <[email protected]>
Co-authored-by: Andy Grove <[email protected]>
* WindowGroupLimit support for [databricks]. Fixes NVIDIA#10531.

  This is a followup to NVIDIA#10500, which added support to push down window-group-limit filters before the shuffle phase. NVIDIA#10500 inadvertently neglected to ensure that the optimization works on Databricks. (It turns out that window-group-limit was cherry-picked into Databricks 13.3, despite the nominal Spark version being `3.4.1`.) This change ensures that the same optimization is available on Databricks 13.3 (and beyond).
---------
Signed-off-by: MithunR <[email protected]>
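For context, a minimal PySpark sketch of the top-k-per-group query shape that window-group-limit pushdown targets; the DataFrame and column names here are invented for illustration:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("a", 1), ("a", 2), ("b", 3), ("b", 4)], ["grp", "val"])

# A top-k-per-group query: the rank filter below is the pattern Spark can
# rewrite into a window-group-limit and evaluate before the shuffle.
w = Window.partitionBy("grp").orderBy(F.col("val").desc())
df.withColumn("rn", F.row_number().over(w)).where(F.col("rn") <= 1).show()
```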
Signed-off-by: Alessandro Bellina <[email protected]>
…DIA#10572) Signed-off-by: Robert (Bobby) Evans <[email protected]>
Signed-off-by: Jason Lowe <[email protected]>
* Add in small optimization for instr comparison
  Signed-off-by: Robert (Bobby) Evans <[email protected]>
* Review Comments
---------
Signed-off-by: Robert (Bobby) Evans <[email protected]>
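My reading of the commit title (not confirmed beyond it): the optimization targets comparisons of `instr()` against a position, as in the hypothetical PySpark filter below.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("hello",), ("world",)], ["s"])

# instr(s, 'lo') > 0 is semantically a substring-containment test, the sort
# of comparison a small instr optimization could evaluate more cheaply.
df.where(F.instr("s", "lo") > 0).show()
```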
Fixes NVIDIA#10430.

This PR ensures that Spark RAPIDS jobs are executed on supported GPU architectures without relying on manual configuration.

### Changes:
1. Processes the `gpu_architectures` property from the `*version-info.properties` file generated by the native builds.
2. Verifies that the user is running the job on an architecture supported by the cuDF and JNI libraries, and throws an exception if the architecture is unsupported.

### Testing
Tested on a Dataproc VM running on an NVIDIA P4 (GPU architecture 6.1):
```
24/03/06 17:44:58 WARN RapidsPluginUtils: spark.rapids.sql.explain is set to `NOT_ON_GPU`. Set it to 'NONE' to suppress the diagnostics logging about the query placement on the GPU.
24/03/06 17:45:10 ERROR RapidsExecutorPlugin: Exception in the executor plugin, shutting down!
java.lang.RuntimeException: Device architecture 61 is unsupported. Minimum supported architecture: 75.
    at com.nvidia.spark.rapids.RapidsPluginUtils$.checkGpuArchitectureInternal(Plugin.scala:366)
    at com.nvidia.spark.rapids.RapidsPluginUtils$.checkGpuArchitecture(Plugin.scala:375)
    at com.nvidia.spark.rapids.RapidsExecutorPlugin.init(Plugin.scala:461)
```

### Related PR
* NVIDIA/spark-rapids-jni#1840

* Add conf for minimum supported CUDA and error handling
  Signed-off-by: Partho Sarthi <[email protected]>
* Revert "Add conf for minimum supported CUDA and error handling"
  This reverts commit 7b8eaea.
* Verify the GPU architecture is supported by the plugin libraries
  Signed-off-by: Partho Sarthi <[email protected]>
* Use semi-colon as delimiter and use intersection of supported gpu architectures
  Signed-off-by: Partho Sarthi <[email protected]>
* Allow for compatibility with major architectures
  Signed-off-by: Partho Sarthi <[email protected]>
* Check for version as integers
  Signed-off-by: Partho Sarthi <[email protected]>
* Modify compatibility check for same major version and same or higher minor version
  Signed-off-by: Partho Sarthi <[email protected]>
* Add a config to skip verification and refactor checking
  Signed-off-by: Partho Sarthi <[email protected]>
* Update RapidsConf.scala
  Co-authored-by: Jason Lowe <[email protected]>
* Update verification logic
  Signed-off-by: Partho Sarthi <[email protected]>
* Update warning message
  Signed-off-by: Partho Sarthi <[email protected]>
* Add unit tests and update warning message.
  Signed-off-by: Partho Sarthi <[email protected]>
* Update exception class
  Signed-off-by: Partho Sarthi <[email protected]>
* Address review comments
  Signed-off-by: Partho Sarthi <[email protected]>
---------
Signed-off-by: Partho Sarthi <[email protected]>
Co-authored-by: Jason Lowe <[email protected]>
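As an aside, a small Python sketch of the compatibility rule the commit list states ("same major version and same or higher minor version"); this is illustrative only, with hypothetical function and variable names, not the plugin's actual Scala code in Plugin.scala:

```python
# A device arch (e.g. 61 for 6.1) is accepted when some arch the libraries
# were built for has the same major version and a minor version no greater
# than the device's.
def is_arch_supported(device_arch: int, supported: set[int]) -> bool:
    dev_major, dev_minor = divmod(device_arch, 10)
    return any(
        major == dev_major and minor <= dev_minor
        for major, minor in (divmod(a, 10) for a in supported)
    )

# Example: a P4 (6.1) against libraries built for {75, 80, 86, 90} fails,
# while an 8.6 device can run 8.0-built code within the same major version.
assert not is_arch_supported(61, {75, 80, 86, 90})
assert is_arch_supported(86, {75, 80, 86, 90})
```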
Signed-off-by: YanxuanLiu <[email protected]>
…VIDIA#10575) Signed-off-by: Robert (Bobby) Evans <[email protected]>
Update CI script to support building and deploying using the same CUDA classifier.
Signed-off-by: Tim Liu <[email protected]>
Signed-off-by: Yinqing Hao <[email protected]>
…DIA#10602)

* Disable InMemoryTableScanExec support by default for Spark 3.5+
* Signing off
  Signed-off-by: Raza Jafri <[email protected]>
* Revert "Remove InMemoryTableScanExec support for Spark 3.5+"
  This reverts commit 8d38f8a.
* Disable InMemoryTableScanExec by default for Spark versions 3.5+
---------
Signed-off-by: Raza Jafri <[email protected]>
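Since this only changes the default, users who still want the exec on Spark 3.5+ should be able to opt back in. A sketch, assuming the plugin's usual `spark.rapids.sql.exec.<ExecName>` per-exec config pattern applies here (verify the exact key against the generated config docs):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# Assumed config key following the per-exec enablement pattern; this would
# re-enable GPU InMemoryTableScanExec despite the new 3.5+ default.
spark.conf.set("spark.rapids.sql.exec.InMemoryTableScanExec", "true")
```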
…p command (NVIDIA#10599)

* Call globStatus directly via PY4J instead of 'hadoop fs'
  Signed-off-by: Yinqing Hao <[email protected]>
* Use pathlib.Path to handle local path only
  Signed-off-by: Yinqing Hao <[email protected]>
* Use os.path.join to concatenate hdfs path and pattern
  Signed-off-by: Yinqing Hao <[email protected]>
---------
Signed-off-by: Yinqing Hao <[email protected]>
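A rough Python sketch of the Py4J approach the first commit describes, assuming a running PySpark session; the HDFS path and glob pattern below are placeholders:

```python
import os
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
jvm = spark.sparkContext._jvm
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()

# Build a glob Path and resolve it with Hadoop's FileSystem.globStatus,
# avoiding the overhead of forking a 'hadoop fs' subprocess.
pattern = jvm.org.apache.hadoop.fs.Path(os.path.join("hdfs:///tmp/data", "part-*"))
fs = pattern.getFileSystem(hadoop_conf)
statuses = fs.globStatus(pattern)  # returns null when nothing matches
paths = [s.getPath().toString() for s in statuses] if statuses else []
print(paths)
```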
* Do not replace TableCacheQueryStageExec
* signoff
  Signed-off-by: Andy Grove <[email protected]>
---------
Signed-off-by: Andy Grove <[email protected]>
Co-authored-by: Raza Jafri <[email protected]>
Signed-off-by: Alessandro Bellina <[email protected]>
"shared-libs" had not been used since we switched to blossom Jenkins. We had ever used "shared-libs" before, but now only need "blossom-lib". Signed-off-by: Tim Liu <[email protected]>
* Pass metadata extractors to FileScanRDD
* Signing off
  Signed-off-by: Raza Jafri <[email protected]>
* addressed review comments
* updated copyrights manually
---------
Signed-off-by: Raza Jafri <[email protected]>
) Signed-off-by: Tim Liu <[email protected]>
* Add fix for removing internal metadata information from 350 shim
  Signed-off-by: Partho Sarthi <[email protected]>
* Create a shim helper class with the relevant change instead of duplicating code
  Signed-off-by: Partho Sarthi <[email protected]>
* Remove extra whitespace
  Signed-off-by: Partho Sarthi <[email protected]>
---------
Signed-off-by: Partho Sarthi <[email protected]>
* Use new kernel for getJsonObject
  Signed-off-by: Haoyang Li <[email protected]>
* Use table to pass parsed path
  Signed-off-by: Haoyang Li <[email protected]>
* use list/vector of instruction objects
  Signed-off-by: Haoyang Li <[email protected]>
* fallback when nested too long
  Signed-off-by: Haoyang Li <[email protected]>
* cancel xfail cases
  Signed-off-by: Haoyang Li <[email protected]>
* cancel xfail cases
  Signed-off-by: Haoyang Li <[email protected]>
* generated and modified docs
  Signed-off-by: Haoyang Li <[email protected]>
* wip
  Signed-off-by: Haoyang Li <[email protected]>
* wip
  Signed-off-by: Haoyang Li <[email protected]>
* apply jni change and remove xpass
  Signed-off-by: Haoyang Li <[email protected]>
* Adds test cases
  Signed-off-by: Haoyang Li <[email protected]>
---------
Signed-off-by: Haoyang Li <[email protected]>
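A hypothetical Python sketch of the "parsed path as a list of instruction objects" idea: pre-parsing a JSONPath like `$.store.book[0]` host-side into (instruction, argument) tuples that a kernel could consume. The instruction names and function are made up; the real kernel lives in spark-rapids-jni.

```python
import re

# Matches either a named field segment (.name) or an array index ([n]).
TOKEN = re.compile(r"\.([A-Za-z_][A-Za-z0-9_]*)|\[(\d+)\]")

def parse_json_path(path: str) -> list[tuple[str, object]]:
    if not path.startswith("$"):
        raise ValueError("path must start with '$'")
    instructions = []
    for name, index in TOKEN.findall(path[1:]):
        if name:
            instructions.append(("named", name))        # object field lookup
        else:
            instructions.append(("index", int(index)))  # array element lookup
    return instructions

print(parse_json_path("$.store.book[0].title"))
# [('named', 'store'), ('named', 'book'), ('index', 0), ('named', 'title')]
```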
Signed-off-by: jenkins <jenkins@localhost>
[skip ci] since branch-24.04 already passes the build.
Change version to 24.04.0
Note: merge this PR using the "Create a merge commit" option.