
Bringup reduction silicon tests that were previously unsupported #734

GitHub Actions / macos-latest MLIR Tests failed Sep 24, 2024 in 0s

108 tests run, 78 passed, 29 skipped, 1 failed.

Annotations

Check failure on line 45 in ttmlir/Silicon/TTNN

ttmlir/Silicon/TTNN.simple_reductions.mlir

Exit Code: 1

Command Output (stderr):
--
RUN: at line 1: /Users/runner/work/tt-mlir/tt-mlir/build/bin/ttmlir-opt --ttir-to-ttnn-backend-pipeline="system-desc-path=" /Users/runner/work/tt-mlir/tt-mlir/test/ttmlir/Silicon/TTNN/simple_reductions.mlir > /Users/runner/work/tt-mlir/tt-mlir/build/test/ttmlir/Silicon/TTNN/Output/simple_reductions.mlir.tmp.mlir
+ /Users/runner/work/tt-mlir/tt-mlir/build/bin/ttmlir-opt --ttir-to-ttnn-backend-pipeline=system-desc-path= /Users/runner/work/tt-mlir/tt-mlir/test/ttmlir/Silicon/TTNN/simple_reductions.mlir
RUN: at line 2: /opt/ttmlir-toolchain/bin/FileCheck /Users/runner/work/tt-mlir/tt-mlir/test/ttmlir/Silicon/TTNN/simple_reductions.mlir --input-file=/Users/runner/work/tt-mlir/tt-mlir/build/test/ttmlir/Silicon/TTNN/Output/simple_reductions.mlir.tmp.mlir
+ /opt/ttmlir-toolchain/bin/FileCheck /Users/runner/work/tt-mlir/tt-mlir/test/ttmlir/Silicon/TTNN/simple_reductions.mlir --input-file=/Users/runner/work/tt-mlir/tt-mlir/build/test/ttmlir/Silicon/TTNN/Output/simple_reductions.mlir.tmp.mlir
/Users/runner/work/tt-mlir/tt-mlir/test/ttmlir/Silicon/TTNN/simple_reductions.mlir:25:12: error: CHECK: expected string not found in input
 // CHECK: %[[C:.*]] = "ttnn.sum"[[C:.*]]
           ^
/Users/runner/work/tt-mlir/tt-mlir/build/test/ttmlir/Silicon/TTNN/Output/simple_reductions.mlir.tmp.mlir:36:246: note: scanning from here
 %3 = "ttnn.empty"(%0) <{dtype = #tt.supportedDataTypes<bf16>, layout = #ttnn.layout<row_major>, memory_config = #ttnn.memory_config<<interleaved>, <dram>>, shape = #ttnn.shape<1x1x512>}> : (!tt.device<#device>) -> tensor<1x1x512xbf16, #layout3>
                                                                                                                                                                                                                                                     ^
/Users/runner/work/tt-mlir/tt-mlir/build/test/ttmlir/Silicon/TTNN/Output/simple_reductions.mlir.tmp.mlir:45:2: note: possible intended match here
 %3 = "ttnn.empty"(%0) <{dtype = #tt.supportedDataTypes<bf16>, layout = #ttnn.layout<row_major>, memory_config = #ttnn.memory_config<<interleaved>, <dram>>, shape = #ttnn.shape<1x32>}> : (!tt.device<#device>) -> tensor<1x32xbf16, #layout7>
 ^

Input file: /Users/runner/work/tt-mlir/tt-mlir/build/test/ttmlir/Silicon/TTNN/Output/simple_reductions.mlir.tmp.mlir
Check file: /Users/runner/work/tt-mlir/tt-mlir/test/ttmlir/Silicon/TTNN/simple_reductions.mlir

-dump-input=help explains the following input dump.

Input was:
<<<<<<
            .
            .
            .
           31:  } 
           32:  func.func @mean(%arg0: tensor<1x1x512x64xbf16, #layout>) -> tensor<1x1x512xbf16, #layout1> { 
           33:  %0 = "ttnn.get_device"() <{mesh_shape = #ttnn<mesh_shape 1x1>}> : () -> !tt.device<#device> 
           34:  %1 = "ttnn.to_layout"(%arg0, %0) <{layout = #ttnn.layout<tile>}> : (tensor<1x1x512x64xbf16, #layout>, !tt.device<#device>) -> tensor<1x1x512x64xbf16, #layout2> 
           35:  %2 = "ttnn.to_device"(%1, %0) <{memory_config = #ttnn.memory_config<<interleaved>, <dram>>}> : (tensor<1x1x512x64xbf16, #layout2>, !tt.device<#device>) -> tensor<1x1x512x64xbf16, #layout2> 
           36:  %3 = "ttnn.empty"(%0) <{dtype = #tt.supportedDataTypes<bf16>, layout = #ttnn.layout<row_major>, memory_config = #ttnn.memory_config<<interleaved>, <dram>>, shape = #ttnn.shape<1x1x512>}> : (!tt.device<#device>) -> tensor<1x1x512xbf16, #layout3> 
check:25'0                                                                                                                                                                                                                                                          X error: no match found
           37:  %4 = "ttnn.mean"(%2, %3) <{dim_arg = [-1 : i32], keep_dim = true}> : (tensor<1x1x512x64xbf16, #layout2>, tensor<1x1x512xbf16, #layout3>) -> tensor<1x1x512xbf16, #layout3> 
check:25'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
           38:  %5 = "ttnn.to_memory_config"(%4, %0) : (tensor<1x1x512xbf16, #layout3>, !tt.device<#device>) -> tensor<1x1x512xbf16, #layout1> 
check:25'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
           39:  return %5 : tensor<1x1x512xbf16, #layout1> 
check:25'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
           40:  } 
check:25'0     ~~~
           41:  func.func @mean_last_2_dims(%arg0: tensor<1x32x512x64xbf16, #layout4>) -> tensor<1x32xbf16, #layout5> { 
check:25'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
           42:  %0 = "ttnn.get_device"() <{mesh_shape = #ttnn<mesh_shape 1x1>}> : () -> !tt.device<#device> 
check:25'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
           43:  %1 = "ttnn.to_layout"(%arg0, %0) <{layout = #ttnn.layout<tile>}> : (tensor<1x32x512x64xbf16, #layout4>, !tt.device<#device>) -> tensor<1x32x512x64xbf16, #layout6> 
check:25'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
           44:  %2 = "ttnn.to_device"(%1, %0) <{memory_config = #ttnn.memory_config<<interleaved>, <dram>>}> : (tensor<1x32x512x64xbf16, #layout6>, !tt.device<#device>) -> tensor<1x32x512x64xbf16, #layout6> 
check:25'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
           45:  %3 = "ttnn.empty"(%0) <{dtype = #tt.supportedDataTypes<bf16>, layout = #ttnn.layout<row_major>, memory_config = #ttnn.memory_config<<interleaved>, <dram>>, shape = #ttnn.shape<1x32>}> : (!tt.device<#device>) -> tensor<1x32xbf16, #layout7> 
check:25'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
check:25'1      ?                                                                                                                                                                                                                                               possible intended match
           46:  %4 = "ttnn.mean"(%2, %3) <{dim_arg = [-1 : i32, -2 : i32], keep_dim = true}> : (tensor<1x32x512x64xbf16, #layout6>, tensor<1x32xbf16, #layout7>) -> tensor<1x32xbf16, #layout7> 
check:25'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
           47:  %5 = "ttnn.to_memory_config"(%4, %0) : (tensor<1x32xbf16, #layout7>, !tt.device<#device>) -> tensor<1x32xbf16, #layout5> 
check:25'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
           48:  return %5 : tensor<1x32xbf16, #layout5> 
check:25'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
           49:  } 
check:25'0     ~~~
           50:  func.func @max(%arg0: tensor<1x1x512x64xbf16, #layout>) -> tensor<1x1x512xbf16, #layout1> { 
check:25'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            .
            .
            .
>>>>>>

--
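For context on the failure above: the check file expects a `ttnn.sum` op at line 25, but the dumped pipeline output in the scanned region contains `ttnn.mean` ops instead, so FileCheck reports "expected string not found". A minimal sketch of the mismatch, using only the directive and output lines quoted in the error message (paths and op attributes abbreviated):

```mlir
// Failing directive from simple_reductions.mlir:25, as reported above:
// CHECK: %[[C:.*]] = "ttnn.sum"[[C:.*]]

// What the scanned output actually contains at this point (from the input dump):
// %4 = "ttnn.mean"(%2, %3) <{dim_arg = [-1 : i32], keep_dim = true}> : ...
```

When diagnosing this kind of mismatch, re-running FileCheck with `-dump-input=always` (the flag the log's `-dump-input=help` hint refers to) prints the full input dump, which makes it easier to see whether the expected op is missing entirely or merely appears in a different order than the CHECK lines assume.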