#13643: Extend binary-ng math support to match all primitive binary ops #16276
Conversation
(torch.Size([1, 1, 31, 32]), torch.Size([5, 3, 32, 32])),
(torch.Size([5, 2, 64, 1]), torch.Size([1, 3, 1, 128])),
(torch.Size([5, 1, 1, 64]), torch.Size([2, 3, 128, 1])),
If possible, let's please also check with 2D, 3D, and 5D shapes.
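For example (shape values here are illustrative, not taken from the PR), the parametrization could be extended with pairs like:

```python
# Hypothetical extra cases covering rank < 4 and rank > 4:
(torch.Size([32, 32]), torch.Size([5, 3, 32, 32])),              # 2D vs 4D
(torch.Size([3, 1, 64]), torch.Size([5, 3, 128, 64])),           # 3D vs 4D
(torch.Size([2, 5, 3, 1, 64]), torch.Size([2, 5, 3, 128, 64])),  # 5D vs 5D
```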
To my understanding, 4D tensors are currently expected as inputs for broadcasting purposes. Each dimension is handled independently by the broadcasting logic, as shown by the test cases for test_binary_scalar_ops. We can view 2D or 3D shapes as 4D without allocating extra memory, so that should suffice for 2D and 3D. If broadcasting along more than 4 dimensions is necessary, that can be implemented as an extension, but not as part of this PR.
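A minimal torch sketch of the "view as 4D without extra memory" point (shapes illustrative):

```python
import torch

a = torch.randn(3, 1, 64)       # rank-3 input
a4 = a.view(1, 3, 1, 64)        # rank-4 view: same storage, no data copy
assert a4.data_ptr() == a.data_ptr()

# Broadcasting then proceeds per-dimension exactly as in the 4D test cases:
b = torch.randn(5, 3, 128, 64)
out = a4 + b
assert out.shape == torch.Size([5, 3, 128, 64])
```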
Please check out the traces coming from PyTorch:
https://github.com/tenstorrent/pytorch2.0_ttnn/blob/main/docs/operations/aten.add.Tensor.md
It is possible to receive tensors of different ranks, like rank_a = 3 and rank_b = 1. The expectation is that this works and users do not have to unsqueeze to 4D manually.
If it does not work in this PR and requires more work, I agree that's a matter for a different PR.
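For concreteness, this is the kind of mixed-rank call those traces contain (shapes illustrative):

```python
import torch

a = torch.randn(1, 16, 32)  # rank_a = 3
b = torch.randn(32)         # rank_b = 1
out = torch.add(a, b)       # broadcasts to torch.Size([1, 16, 32])

# The expectation is that the ttnn op accepts these ranks directly,
# with no manual unsqueeze of either input to 4D.
```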
This should work correctly without an unsqueeze. If we find that this is not the case, it should be fixed ASAP in the next PR. The only edge case we don't support is 5D tensors. If generality requirements dictate we need this, it should be prioritized. My current thought is that 5D support is lower priority than sharding in the context of broadcasting, which is what I want us to focus on as soon as possible.
Resolved review threads:
ttnn/cpp/ttnn/operations/eltwise/binary_ng/binary_ng_pybind.hpp (outdated)
ttnn/cpp/ttnn/operations/eltwise/binary_ng/device/binary_ng_program_factory.cpp
Can you please double check that BinaryNgDeviceOperation::compute_program_hash is still up to date after this change?
Not in the scope of this PR, but ideally it would be good to have a test which verifies that different parameters cause a new program versus a reuse.
I have confirmed it is still up to date after this change.
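Not claiming this is how the suite would do it, but a rough sketch of such a test, assuming the standard ttnn pytest `device` fixture with the program cache enabled (e.g. via the existing `use_program_cache` fixture), and assuming `ttnn.add` / `ttnn.subtract` route through this factory:

```python
import torch
import ttnn

def test_program_cache_reuse_vs_miss(device, use_program_cache):
    a = ttnn.from_torch(torch.randn(1, 1, 32, 32, dtype=torch.bfloat16),
                        layout=ttnn.TILE_LAYOUT, device=device)
    b = ttnn.from_torch(torch.randn(1, 1, 32, 32, dtype=torch.bfloat16),
                        layout=ttnn.TILE_LAYOUT, device=device)

    ttnn.add(a, b)
    baseline = device.num_program_cache_entries()

    ttnn.add(a, b)  # identical parameters -> same hash, program reused
    assert device.num_program_cache_entries() == baseline

    ttnn.subtract(a, b)  # different op type -> new hash, new program compiled
    assert device.num_program_cache_entries() == baseline + 1
```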
Force-pushed from 06e57c1 to 5a5126d.
Ticket
#13643
Problem description
Re-opening #16068 with a Grayskull test disabled because of possible accuracy issues.
What's changed
This PR adds support for all other primitive binary ops that are defined in the BinaryOpType enum.
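As a usage sketch (not taken from the PR), one of the newly wired ops could be exercised against a torch golden along these lines, assuming `ttnn.logical_xor` is among the ops backed by the extended enum:

```python
import torch
import ttnn

device = ttnn.open_device(device_id=0)

# Hypothetical end-to-end check of one primitive binary op against torch;
# the exact op list covered by this PR should be read from BinaryOpType.
torch_a = torch.randint(0, 2, (1, 1, 32, 32)).to(torch.bfloat16)
torch_b = torch.randint(0, 2, (1, 1, 32, 32)).to(torch.bfloat16)

a = ttnn.from_torch(torch_a, layout=ttnn.TILE_LAYOUT, device=device)
b = ttnn.from_torch(torch_b, layout=ttnn.TILE_LAYOUT, device=device)

out = ttnn.to_torch(ttnn.logical_xor(a, b))
golden = torch.logical_xor(torch_a, torch_b).to(torch.bfloat16)
assert torch.equal(out, golden)

ttnn.close_device(device)
```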
Checklist