#0: Port Matmul, Conv, and AllGather to compute_output_specs #15978
Conversation
return {create_device_tensor(output_shape, output_dtype, Layout::ROW_MAJOR, input_tensor.device(), out_mem_config)};
return {TensorSpec(
    output_shape.logical_shape(),
    TensorLayout::fromLegacyPaddedShape(
Ouch... this means we will have to make another pass to get rid of this constructor.
Yes, getting rid of fromLegacyPaddedShape is not trivial and will require help from op writers. It's part of the planned refactoring to remove all Shape/LegacyShape from ops.
Approved with minor comment
} // namespace detail
Reason for moving it out of the detail namespace?
I moved default_create_output_tensors out of the detail namespace so I can use it in matmul.
auto output_tensor_shape = ttnn::Shape(tt::tt_metal::LegacyShape({1, 1, padded_shape_w, padded_shape_c}, output_padding));
return {output_tensor_shape.value};
}
auto output_shape = tt::tt_metal::LegacyShape({1, 1, padded_shape_w, padded_shape_c}, output_padding);
Removal of the use of LegacyShape is on the OP owners?
Yes. Removal of Shape/LegacyShape within each individual OP is on the OP owners. It may involve modifying the operation to avoid unnecessary padding.
The conv2d and halo updates look good.
Ticket
Problem description
We're continuing to port ops from compute_output_shapes to the new compute_output_specs
What's changed
Ported matmul, all_gather, all_gather_matmul, conv2d, and halo ops to use compute_output_specs
Checklist