
Fix test failures in orc_cast_test.py #11021

Closed
Tracked by #11004
razajafri opened this issue Jun 8, 2024 · 3 comments
Labels: bug (Something isn't working), Spark 4.0+ (Spark 4.0+ issues)

Comments
@razajafri (Collaborator)

FAILED ../../../../integration_tests/src/main/python/orc_cast_test.py::test_casting_among_integer_types
@razajafri added the bug and ? - Needs Triage labels on Jun 8, 2024
@razajafri added the Spark 4.0+ label on Jun 8, 2024
@mattahrens removed the ? - Needs Triage label on Jun 11, 2024
@mythrocks self-assigned this on Jun 12, 2024
@mythrocks (Collaborator)

This one is particularly interesting. The failure indicates a missing GPU metric:

2024-06-12 18:30:10,128 [Executor task launch worker for task 0.0 in stage 41.0 (TID 164)] ERROR org.apache.spark.executor.Executor - Exception in task 0.0 in stage 41.0 (TID 164)
java.util.NoSuchElementException: key not found: gpuDecodeTime
        at scala.collection.immutable.Map$EmptyMap$.apply(Map.scala:243) ~[scala-library-2.13.14.jar:?]
        at scala.collection.immutable.Map$EmptyMap$.apply(Map.scala:239) ~[scala-library-2.13.14.jar:?]
        at com.nvidia.spark.rapids.OrcTableReader.next(GpuOrcScan.scala:2884) ~[spark-shared/:?]
        at com.nvidia.spark.rapids.OrcTableReader.next(GpuOrcScan.scala:2859) ~[spark-shared/:?]
        at com.nvidia.spark.rapids.CachedGpuBatchIterator$.$anonfun$apply$1(GpuDataProducer.scala:159) ~[spark-shared/:?]

I'm double checking to see if this fails in non-Spark-4 builds.
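
For context, the exception above comes from calling `apply` on a metrics map that never had a `gpuDecodeTime` entry registered. Below is a minimal, hypothetical Scala sketch (not the plugin's actual code; the map contents and metric name handling here are made up for illustration) showing why that lookup pattern throws and what a defensive `getOrElse` lookup looks like by comparison:

```scala
// Sketch only: reproduces the "key not found" failure mode seen in the stack trace.
object MetricLookupSketch {
  // Hypothetical metric map; in the real reader these would be Spark SQL metrics.
  val metrics: Map[String, Long] = Map("bufferTime" -> 0L)

  def main(args: Array[String]): Unit = {
    // Failing pattern: Map.apply on a key that was never registered throws
    // java.util.NoSuchElementException: key not found: gpuDecodeTime
    try {
      val decodeTime = metrics("gpuDecodeTime")
      println(s"gpuDecodeTime = $decodeTime")
    } catch {
      case e: NoSuchElementException =>
        println(s"Lookup failed: ${e.getMessage}")
    }

    // Defensive alternative: fall back to a default when the metric is absent.
    val decodeTimeOrZero = metrics.getOrElse("gpuDecodeTime", 0L)
    println(s"gpuDecodeTime (with default) = $decodeTimeOrZero")
  }
}
```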

@mythrocks removed their assignment on Jul 24, 2024
@mythrocks (Collaborator)

I've unassigned myself from this bug. This should be sorted out after the missing GPU metrics are addressed.

@rwlee self-assigned this on Oct 14, 2024
@rwlee (Contributor) commented on Oct 17, 2024

This passes now that the gpuDecodeTime errors have been resolved; see #11429.

@rwlee closed this as completed on Oct 17, 2024