#0: Fix failing test case for width sharded non-32 multiple output width #16224
base: main
Conversation
Force-pushed from 8db5c45 to 41070e7
ttnn/cpp/ttnn/operations/conv/conv2d/prepare_conv2d_weights.cpp (Outdated)
Should we add a regression UT for this?
It would be nice to remove this test case from the sweep.
Force-pushed from 41070e7 to 1007506
@pavlejosipovic I've added a UT. It exposed some more bugs, which I have also fixed. I removed the test from the sweep.
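For reference, a regression UT in this suite typically runs the op and compares the result against a torch golden. Below is a hedged, standalone sketch of that shape, not the UT actually added in this PR; the parameter values are illustrative and the ttnn call is elided:

import pytest
import torch

@pytest.mark.parametrize("output_channels", [40, 72, 104])  # deliberately not multiples of 32
def test_width_sharded_non_tile_multiple_out_channels(output_channels):
    batch, in_channels, height, width, k = 2, 128, 9, 9, 3
    torch_input = torch.randn(batch, in_channels, height, width)
    torch_weight = torch.randn(output_channels, in_channels, k, k)
    golden = torch.nn.functional.conv2d(torch_input, torch_weight)
    # ...run the width-sharded ttnn conv2d on the same inputs here and compare
    # its output against `golden` with a PCC check (ttnn call elided).
    assert golden.shape == (batch, output_channels, height - k + 1, width - k + 1)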
Force-pushed from 83628b8 to 3f80dad
@@ -530,20 +530,18 @@ def test_conv_features_multi_device(
@pytest.mark.parametrize(
    "batch_size, output_channels, input_channels, input_height, input_width, filter_height, filter_width, pad_h, pad_w, act_block_w_div",
    (
        (2, 128, 128, 9, 9, 3, 3, 0, 0, 1),
Any reason why a few test cases were removed?
They were unnecessary.
Test coverage for width sharding is minuscule as is, but if you think they are not needed, OK.
@@ -191,6 +192,7 @@ Result conv2d(
    if (bypass_halo) {
        if (input_tensor_post_tm.layout() == Layout::TILE) {
            input_tensor_post_tm = ttnn::reshape(input_tensor_post_tm, input_tensor_post_tm.get_padded_shape());
why do we need this reshape here?
to_layout was not using the padded shape, and was instead using the logical shape. This was causing an error.
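For readers unfamiliar with the distinction, here is a minimal standalone sketch (plain Python, not the ttnn API) of how the tile-padded shape diverges from the logical shape, which is why the code above reshapes to get_padded_shape() before bypassing halo:

def tile_padded_shape(logical_shape, tile=32):
    # TILE layout rounds the last two dims up to the 32x32 tile size
    *batch_dims, h, w = logical_shape
    round_up = lambda x: -(-x // tile) * tile
    return (*batch_dims, round_up(h), round_up(w))

# e.g. a logical [1, 1, 100, 40] tensor is stored as [1, 1, 128, 64] in TILE layout,
# so an op that consumes the logical shape misreads the data actually laid out in memory
print(tile_padded_shape((1, 1, 100, 40)))  # -> (1, 1, 128, 64)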
At least we should log an issue with unit tests and reference it here.
It would be better if we could just fix to_layout.
Is halo with untilize doing this properly?
Problem description
tests/sweep_framework/sweeps/conv2d/short/conv2d_short_sweep.py::test_conv2d_localrun_fail_only[device_params0-input_spec7]
test case fails.

What's changed
Fixed width-sharded weights preparation where out_channels % 32 != 0.
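As a rough illustration of the kind of handling this implies (a sketch of the idea, not the repo's actual implementation): when out_channels is not a multiple of the 32-wide tile, the output-channel dimension of the OIHW weight tensor has to be zero-padded up to the next tile boundary before the weights are tilized and width-sharded.

import torch

def pad_out_channels_to_tile(weights: torch.Tensor, tile_width: int = 32) -> torch.Tensor:
    # weights are OIHW; round the output-channel dim (dim 0) up to a tile multiple
    out_channels = weights.shape[0]
    padded = -(-out_channels // tile_width) * tile_width
    if padded == out_channels:
        return weights
    # F.pad pads from the last dim backwards, so the final pair targets dim 0
    return torch.nn.functional.pad(weights, (0, 0, 0, 0, 0, 0, 0, padded - out_channels))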
Checklist