#9709: Add optional transpose_a and transpose_b to ttnn matmul and linear #9836
Merged
TT-BrianLiu requested review from eyonland, arakhmati, cfjchu, xanderchin and ayerofieiev-tt as code owners on June 28, 2024 21:55.
ayerofieiev-tt approved these changes on Jun 28, 2024.
Ticket
Link to Github Issue
Problem description
In GGML, inputs a and b to matmul may be "pre-transposed", so to perform the matmul properly we need a corresponding transpose of a or b.

What's changed

The easiest solution is to add an optional transpose op before running matmul. I added transpose_a and transpose_b flags to specify whether to run the transpose on the inputs or not. This should work for most interleaved cases. As a side note, I also had to fix transpose to respect padding when swapping dims during compute_output_shapes.

There are two concerns that can be addressed down the line:
This limits matmul to what's supported by our transpose op. We can either uplift transpose to support what matmul supports, or, if we did the transposes internally in matmul, this problem would automatically go away.

Checklist