Use cuda::proclaim_return_type on device lambdas #1662
Conversation
Signed-off-by: Mike Wilson <[email protected]>
build
src/main/cpp/src/parse_uri.cu

```diff
@@ -164,11 +166,11 @@ bool __device__ validate_ipv6(string_view s)
   int address_char_count{0};
   bool address_has_hex{false};

-  auto const leading_double_colon = [&]() {
+  auto const leading_double_colon = cuda::proclaim_return_type<bool>([&]() {
```
This is not a device lambda, so it doesn't need to change here.
Wait, I just realized that this lambda is unused. The only place that refers to it is
// if (colon_count == max_colons && !leading_double_colon) { return false; }
Yeah. I wrote it all to match the RFC, but Spark/Java don't follow it exactly, and I need to match them. I documented the differences by commenting out the original path. I'm not sure what the preference is here, or whether I should remove it.
If you commented out the caller then you should also comment out the function too, since it is now unused.
Signed-off-by: Mike Wilson <[email protected]>
FYI, CCCL 2 has been merged into cudf (rapidsai/cudf#14576). You can test this PR locally with the latest cudf commit to make sure it compiles successfully.
For the record, we'll also need to change the CMake configuration:

```diff
diff --git a/src/main/cpp/CMakeLists.txt b/src/main/cpp/CMakeLists.txt
index 18c0cd1..a905297 100644
--- a/src/main/cpp/CMakeLists.txt
+++ b/src/main/cpp/CMakeLists.txt
@@ -94,11 +94,8 @@ include(cmake/Modules/ConfigureCUDA.cmake) # set other CUDA compilation flags
 # ##################################################################################################
 # * dependencies ----------------------------------------------------------------------------------

-# find libcu++
-include(${rapids-cmake-dir}/cpm/libcudacxx.cmake)
-
-# find thrust/cub
-include(${CUDF_DIR}/cpp/cmake/thirdparty/get_thrust.cmake)
+# find CCCL
+include(${CUDF_DIR}/cpp/cmake/thirdparty/get_cccl.cmake)

 # JNI
 find_package(JNI REQUIRED)
```
Signed-off-by: Mike Wilson <[email protected]>
I think the changes in this PR should be good to go. They now include the changes I had made independently (i.e. changes to …). The changes in this PR are more comprehensive, if not yet strictly required. I do see that we have trouble running …
Yes, the issue I was trying to solve (#1567) now shows up: my previous fix doesn't work anymore. We now need to fix it some other way.
My conservative vote would be not to change the …. But if we need to, there might be value in commenting out ….
I am confused @mythrocks, why do you want to hold? |
I'm 👍 on this change. We might need to rebase it for the latest in ….
@abellina: Don't mind me. I'm irrationally apprehensive about disruptive changes near the holidays. |
I feel the same way. I don't see the value in checking in something and then going on holiday. So many people are out until the new year already; if we run across some major issue, we may not be around to fix or triage it. I'd prefer to leave things running until January rather than check this in and have unknown issues crop up during that reduced-productivity phase. I'm all for fast-following cudf, but I don't see that making this change to CCCL 2.2 now vs. in 2 weeks makes a huge difference, and it comes with the potential for extended disruptions. This ship may have already sailed, though, since even with a pinned cudf we are pulling in the latest rmm and our builds are broken.
…claim_return_type
Done
build
Signed-off-by: Mike Wilson <[email protected]>
We need to wait until tomorrow so the changes in cudf can propagate here to build. |
All the changes thus far look good to me. One comment on `size_type` vs. `int32_t`.
LGTM, save for what @bdice suggested regarding `size_type` vs. `int32_t`.
Signed-off-by: Mike Wilson <[email protected]>
Co-authored-by: Bradley Dice <[email protected]>
build
Depends on #1668.
Signed-off-by: Nghia Truong <[email protected]>
Signed-off-by: Nghia Truong <[email protected]>
Signed-off-by: Mike Wilson <[email protected]>
Signed-off-by: Nghia Truong <[email protected]>
Signed-off-by: Nghia Truong <[email protected]>
# Conflicts:
#	src/main/cpp/src/row_conversion.cu
build
The CCCL 2.2.0 C++ and CMake changes look good to me.
This reverts commit 763406c.

# Conflicts:
#	src/main/cpp/benchmarks/row_conversion.cpp
#	src/main/cpp/src/bloom_filter.cu
#	src/main/cpp/src/map_utils.cu
#	src/main/cpp/src/xxhash64.cu
#	thirdparty/cudf

Signed-off-by: Nghia Truong <[email protected]>
* Revert "Use cuda::proclaim_return_type on device lambdas (#1662)"

  This reverts commit 763406c.

  # Conflicts:
  #	src/main/cpp/benchmarks/row_conversion.cpp
  #	src/main/cpp/src/bloom_filter.cu
  #	src/main/cpp/src/map_utils.cu
  #	src/main/cpp/src/xxhash64.cu
  #	thirdparty/cudf

  Signed-off-by: Nghia Truong <[email protected]>

* Remove redundant header

  Signed-off-by: Nghia Truong <[email protected]>

* Update copyright year

  Signed-off-by: Nghia Truong <[email protected]>

---------

Signed-off-by: Nghia Truong <[email protected]>
This PR makes spark-rapids-jni compatible with Thrust 2 by adding `cuda::proclaim_return_type` to device lambdas. This follows suit with cudf, which recently did the same work in rapidsai/cudf#14577.

Closes #1639.