[GLUTEN-6920][CORE] Redesign and move trait GlutenPlan to gluten-core #3224

Triggered via pull request (synchronize on #8036), November 26, 2024 09:09, by @zhztheplayer
Status: Success
Total duration: 11s
labeler.yml

on: pull_request_target
Label pull requests (4s)

Annotations

14 errors
ScalarFunctionsValidateSuiteRasOn.Test input_file_name function: org/apache/gluten/execution/ScalarFunctionsValidateSuiteRasOn#L1
executedPlan.exists(((plan: org.apache.spark.sql.execution.SparkPlan) => tag.runtimeClass.isInstance(plan))) was false
Expect ProjectExecTransformer exists in executedPlan:
CollectLimit 100
+- *(1) Project [input_file_name#66779 AS input_file_name()#66775, l_orderkey#64322L]
   +- VeloxColumnarToRow
      +- ^(3442) BatchScanExecTransformer[l_orderkey#64322L, input_file_name#66779] ParquetScan DataFilters: [], Format: parquet, Location: InMemoryFileIndex(1 paths)[file:/__w/incubator-gluten/incubator-gluten/backends-velox/target/scal..., PartitionFilters: [], PushedFilters: [], ReadSchema: struct<l_orderkey:bigint>, PushedFilters: [] RuntimeFilters: []
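
The "exists ... was false" annotations in this run all come from the same style of plan assertion: the suite runs a query, takes the executed physical plan, and checks that a node of the expected class (here ProjectExecTransformer) appears somewhere in the tree. A minimal sketch of such a check, assuming only Spark's DataFrame/SparkPlan APIs; assertOperatorExists is a hypothetical name for illustration, not the actual Gluten test helper:

    import scala.reflect.ClassTag
    import org.apache.spark.sql.DataFrame
    import org.apache.spark.sql.execution.SparkPlan

    object PlanChecks {
      // Hypothetical helper mirroring the failed assertion above: require that
      // at least one node of the expected class T appears in the executed plan.
      def assertOperatorExists[T <: SparkPlan](df: DataFrame)(implicit tag: ClassTag[T]): Unit = {
        val executedPlan = df.queryExecution.executedPlan
        // TreeNode.find traverses the whole plan tree, so nested operators match too.
        val found = executedPlan.find(plan => tag.runtimeClass.isInstance(plan))
        assert(
          found.isDefined,
          s"Expect ${tag.runtimeClass.getSimpleName} exists in executedPlan:\n$executedPlan")
      }
    }

In the plan dumped above, the input_file_name() projection stayed as vanilla Spark's *(1) Project instead of being offloaded to ProjectExecTransformer, so a check along the lines of assertOperatorExists[ProjectExecTransformer](df) fails even though the query itself ran.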
VeloxTPCHV1BhjRasSuite.TPC-H q17: org/apache/gluten/execution/VeloxTPCHV1BhjRasSuite#L1
Mismatch for query 17
Actual Plan path: /tmp/tpch-approved-plan/v1-bhj-ras/spark32/17.txt
Golden Plan path: /__w/incubator-gluten/incubator-gluten/backends-velox/target/scala-2.12/test-classes/tpch-approved-plan/v1-bhj-ras/spark32/17.txt
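
The two "TPC-H q17" annotations (this one for Spark 3.2, another below for Spark 3.3) are golden-file mismatches rather than in-plan assertions: the suite formats the plan produced by the current build, writes it under /tmp, and compares it against the approved plan checked into the test resources. A rough sketch of that kind of comparison, with hypothetical names (checkGoldenPlan and its parameters are illustrative, not the actual Gluten API):

    import java.nio.charset.StandardCharsets
    import java.nio.file.{Files, Paths}

    object GoldenPlanCheck {
      // Hypothetical reconstruction of a golden-plan comparison like the
      // "Mismatch for query 17" failure: persist the plan the current build
      // produced, then compare it against the checked-in approved plan.
      def checkGoldenPlan(queryId: Int, actualPlan: String, goldenDir: String, tmpDir: String): Unit = {
        val goldenPath = Paths.get(goldenDir, s"$queryId.txt")
        val actualPath = Paths.get(tmpDir, s"$queryId.txt")
        Files.createDirectories(actualPath.getParent)
        Files.write(actualPath, actualPlan.getBytes(StandardCharsets.UTF_8))
        val golden = new String(Files.readAllBytes(goldenPath), StandardCharsets.UTF_8)
        assert(
          golden == actualPlan,
          s"Mismatch for query $queryId Actual Plan path: $actualPath Golden Plan path: $goldenPath")
      }
    }

If the plan change is intentional, the usual remedy for a test like this is to regenerate the approved 17.txt under tpch-approved-plan/ from the new output.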
UDFPartialProjectSuiteRasOn.test plus_one with many columns: org/apache/gluten/expression/UDFPartialProjectSuiteRasOn#L1
executedPlan.exists(((plan: org.apache.spark.sql.execution.SparkPlan) => tag.runtimeClass.isInstance(plan))) was false
Expect ColumnarPartialProjectExec exists in executedPlan:
HashAggregate(keys=[], functions=[sum((if (isnull(l_orderkey#75549L)) null else plus_one(knownnotnull(l_orderkey#75549L)) + cast(hash(l_partkey#75550L, 42) as bigint)))], output=[sum((plus_one(cast(l_orderkey as bigint)) + hash(l_partkey)))#75602L])
+- VeloxColumnarToRow
   +- ^(4362) FilterExecTransformer ((l_orderkey#75549L < cast(3 as bigint)) AND isnotnull(l_orderkey#75549L))
      +- ^(4362) BatchScanExecTransformer[l_orderkey#75549L, l_partkey#75550L] ParquetScan DataFilters: [(l_orderkey#75549L < cast(3 as bigint)), isnotnull(l_orderkey#75549L)], Format: parquet, Location: InMemoryFileIndex(1 paths)[file:/__w/incubator-gluten/incubator-gluten/backends-velox/target/scal..., PartitionFilters: [], PushedFilters: [IsNotNull(l_orderkey)], ReadSchema: struct<l_orderkey:bigint,l_partkey:bigint>, PushedFilters: [IsNotNull(l_orderkey)] RuntimeFilters: []
UDFPartialProjectSuiteRasOn.udf in agg simple: org/apache/gluten/expression/UDFPartialProjectSuiteRasOn#L1
executedPlan.exists(((plan: org.apache.spark.sql.execution.SparkPlan) => tag.runtimeClass.isInstance(plan))) was false
Expect ColumnarPartialProjectExec exists in executedPlan:
HashAggregate(keys=[], functions=[sum((hash(if (isnull(cast(l_extendedprice#75554 as bigint))) null else plus_one(knownnotnull(cast(l_extendedprice#75554 as bigint))), 42) + hash(l_orderkey#75549L, 42)))], output=[revenue#75710L])
+- VeloxColumnarToRow
   +- ^(4373) BatchScanExecTransformer[l_orderkey#75549L, l_extendedprice#75554] ParquetScan DataFilters: [], Format: parquet, Location: InMemoryFileIndex(1 paths)[file:/__w/incubator-gluten/incubator-gluten/backends-velox/target/scal..., PartitionFilters: [], PushedFilters: [], ReadSchema: struct<l_orderkey:bigint,l_extendedprice:decimal(12,2)>, PushedFilters: [] RuntimeFilters: []
UDFPartialProjectSuiteRasOn.udf in agg: org/apache/gluten/expression/UDFPartialProjectSuiteRasOn#L1
executedPlan.exists(((plan: org.apache.spark.sql.execution.SparkPlan) => tag.runtimeClass.isInstance(plan))) was false
Expect ColumnarPartialProjectExec exists in executedPlan:
HashAggregate(keys=[], functions=[sum(CheckOverflow((promote_precision(cast(CheckOverflow((promote_precision(cast(CheckOverflow((promote_precision(cast(cast(hash(if (isnull(cast(l_extendedprice#75554 as bigint))) null else plus_one(knownnotnull(cast(l_extendedprice#75554 as bigint))), 42) as decimal(10,0)) as decimal(12,2))) * promote_precision(l_discount#75555)), DecimalType(23,2), true) as decimal(24,2))) + promote_precision(cast(cast(hash(l_orderkey#75549L, 42) as decimal(10,0)) as decimal(24,2)))), DecimalType(24,2), true) as decimal(25,2))) + promote_precision(cast(cast(hash(l_comment#75564, 42) as decimal(10,0)) as decimal(25,2)))), DecimalType(25,2), true))], output=[revenue#75755])
+- VeloxColumnarToRow
   +- ^(4375) BatchScanExecTransformer[l_orderkey#75549L, l_extendedprice#75554, l_discount#75555, l_comment#75564] ParquetScan DataFilters: [], Format: parquet, Location: InMemoryFileIndex(1 paths)[file:/__w/incubator-gluten/incubator-gluten/backends-velox/target/scal..., PartitionFilters: [], PushedFilters: [], ReadSchema: struct<l_orderkey:bigint,l_extendedprice:decimal(12,2),l_discount:decimal(12,2),l_comment:string>, PushedFilters: [] RuntimeFilters: []
ScalarFunctionsValidateSuiteRasOn.Test input_file_name function: org/apache/gluten/execution/ScalarFunctionsValidateSuiteRasOn#L1
executedPlan.exists(((plan: org.apache.spark.sql.execution.SparkPlan) => tag.runtimeClass.isInstance(plan))) was false
Expect ProjectExecTransformer exists in executedPlan:
CollectLimit 100
+- *(1) Project [input_file_name#67634 AS input_file_name()#67630, l_orderkey#65140L]
   +- VeloxColumnarToRow
      +- ^(3546) BatchScanExecTransformer[l_orderkey#65140L, input_file_name#67634] ParquetScan DataFilters: [], Format: parquet, Location: InMemoryFileIndex(1 paths)[file:/__w/incubator-gluten/incubator-gluten/backends-velox/target/scal..., PartitionFilters: [], PushedAggregation: [], PushedFilters: [], PushedGroupBy: [], ReadSchema: struct<l_orderkey:bigint>, PushedFilters: [], PushedAggregation: [], PushedGroupBy: [] RuntimeFilters: []
VeloxTPCHV1BhjRasSuite.TPC-H q17: org/apache/gluten/execution/VeloxTPCHV1BhjRasSuite#L1
Mismatch for query 17
Actual Plan path: /tmp/tpch-approved-plan/v1-bhj-ras/spark33/17.txt
Golden Plan path: /__w/incubator-gluten/incubator-gluten/backends-velox/target/scala-2.12/test-classes/tpch-approved-plan/v1-bhj-ras/spark33/17.txt
UDFPartialProjectSuiteRasOn.test plus_one with many columns: org/apache/gluten/expression/UDFPartialProjectSuiteRasOn#L1
executedPlan.exists(((plan: org.apache.spark.sql.execution.SparkPlan) => tag.runtimeClass.isInstance(plan))) was false
Expect ColumnarPartialProjectExec exists in executedPlan:
HashAggregate(keys=[], functions=[sum((if (isnull(l_orderkey#76479L)) null else plus_one(knownnotnull(l_orderkey#76479L)) + cast(hash(l_partkey#76480L, 42) as bigint)))], output=[sum((plus_one(cast(l_orderkey as bigint)) + hash(l_partkey)))#76532L])
+- VeloxColumnarToRow
   +- ^(4454) FilterExecTransformer (isnotnull(l_orderkey#76479L) AND (l_orderkey#76479L < cast(3 as bigint)))
      +- ^(4454) BatchScanExecTransformer[l_orderkey#76479L, l_partkey#76480L] ParquetScan DataFilters: [isnotnull(l_orderkey#76479L), (l_orderkey#76479L < cast(3 as bigint))], Format: parquet, Location: InMemoryFileIndex(1 paths)[file:/__w/incubator-gluten/incubator-gluten/backends-velox/target/scal..., PartitionFilters: [], PushedAggregation: [], PushedFilters: [IsNotNull(l_orderkey)], PushedGroupBy: [], ReadSchema: struct<l_orderkey:bigint,l_partkey:bigint>, PushedFilters: [IsNotNull(l_orderkey)], PushedAggregation: [], PushedGroupBy: [] RuntimeFilters: []
UDFPartialProjectSuiteRasOn.udf in agg simple: org/apache/gluten/expression/UDFPartialProjectSuiteRasOn#L1
executedPlan.exists(((plan: org.apache.spark.sql.execution.SparkPlan) => tag.runtimeClass.isInstance(plan))) was false
Expect ColumnarPartialProjectExec exists in executedPlan:
HashAggregate(keys=[], functions=[sum((hash(if (isnull(cast(l_extendedprice#76484 as bigint))) null else plus_one(knownnotnull(cast(l_extendedprice#76484 as bigint))), 42) + hash(l_orderkey#76479L, 42)))], output=[revenue#76640L])
+- VeloxColumnarToRow
   +- ^(4465) BatchScanExecTransformer[l_orderkey#76479L, l_extendedprice#76484] ParquetScan DataFilters: [], Format: parquet, Location: InMemoryFileIndex(1 paths)[file:/__w/incubator-gluten/incubator-gluten/backends-velox/target/scal..., PartitionFilters: [], PushedAggregation: [], PushedFilters: [], PushedGroupBy: [], ReadSchema: struct<l_orderkey:bigint,l_extendedprice:decimal(12,2)>, PushedFilters: [], PushedAggregation: [], PushedGroupBy: [] RuntimeFilters: []
UDFPartialProjectSuiteRasOn.udf in agg: org/apache/gluten/expression/UDFPartialProjectSuiteRasOn#L1
executedPlan.exists(((plan: org.apache.spark.sql.execution.SparkPlan) => tag.runtimeClass.isInstance(plan))) was false
Expect ColumnarPartialProjectExec exists in executedPlan:
HashAggregate(keys=[], functions=[sum(CheckOverflow((promote_precision(cast(CheckOverflow((promote_precision(cast(CheckOverflow((promote_precision(cast(hash(if (isnull(cast(l_extendedprice#76484 as bigint))) null else plus_one(knownnotnull(cast(l_extendedprice#76484 as bigint))), 42) as decimal(12,2))) * promote_precision(l_discount#76485)), DecimalType(23,2)) as decimal(24,2))) + promote_precision(cast(hash(l_orderkey#76479L, 42) as decimal(24,2)))), DecimalType(24,2)) as decimal(25,2))) + promote_precision(cast(hash(l_comment#76494, 42) as decimal(25,2)))), DecimalType(25,2)))], output=[revenue#76685])
+- VeloxColumnarToRow
   +- ^(4467) BatchScanExecTransformer[l_orderkey#76479L, l_extendedprice#76484, l_discount#76485, l_comment#76494] ParquetScan DataFilters: [], Format: parquet, Location: InMemoryFileIndex(1 paths)[file:/__w/incubator-gluten/incubator-gluten/backends-velox/target/scal..., PartitionFilters: [], PushedAggregation: [], PushedFilters: [], PushedGroupBy: [], ReadSchema: struct<l_orderkey:bigint,l_extendedprice:decimal(12,2),l_discount:decimal(12,2),l_comment:string>, PushedFilters: [], PushedAggregation: [], PushedGroupBy: [] RuntimeFilters: []
ArrowCsvScanSuiteV1.insert into select from csv: org/apache/gluten/execution/ArrowCsvScanSuiteV1#L1
[INTERNAL_ERROR] The Spark SQL phase planning failed with an internal error. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace.
VeloxHashJoinSuite.Reuse broadcast exchange for different build keys with same table: org/apache/gluten/execution/VeloxHashJoinSuite#L117
[INTERNAL_ERROR] The Spark SQL phase planning failed with an internal error. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace.
VeloxHashJoinSuite.ColumnarBuildSideRelation transform support multiple key columns: org/apache/gluten/execution/VeloxHashJoinSuite#L152
[INTERNAL_ERROR] The Spark SQL phase planning failed with an internal error. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace.
VeloxWindowExpressionSuite.collect_list / collect_set: org/apache/gluten/execution/VeloxWindowExpressionSuite#L91
[INTERNAL_ERROR] The Spark SQL phase planning failed with an internal error. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace.