[Optimizer] Fix ttnn.ToLayout mem config optimizer override. #1130
Conversation
Branch force-pushed from 7ea5154 to 473c16c.
Just out of curiosity, what's the current discrepancy between the layout and the memory config in ttnn.ToLayoutOp prior to decomposition?
In the Optimizer we use the op's output tensor layout to specify the op's memory-related attributes. After moving to the TTNN dialect, it seems we need to set the same information in MemoryConfigAttr as well, so that situations like this don't happen:
Notice the diff between the memory config and the output layout. It compiles without errors, and the layout5 attribute is simply ignored at runtime.
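
For concreteness, here is a minimal, hypothetical IR sketch of the kind of mismatch described above; the op name follows the TTNN dialect, but the attribute values and layout encodings are illustrative, not copied from this PR:

```mlir
// Hypothetical sketch: the op-level memory_config places the result in DRAM
// (interleaved), while the result tensor's encoding (#layout5) describes an
// L1-sharded placement. The module still compiles, and at runtime the
// memory_config wins while #layout5 is silently ignored.
%1 = "ttnn.to_layout"(%0, %device)
       <{layout = #ttnn.layout<tile>,
         memory_config = #ttnn.memory_config<#dram, <interleaved>>}>
       : (tensor<64x128xbf16, #layout4>, !ttnn.device)
       -> tensor<64x128xbf16, #layout5>
```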
I see, that makes sense. In the TTNN backend there was a discussion that while we want to maintain all the information in the tensor, we also want to pass the relevant parameters into the op itself to make them more explicit, so some work will be needed to keep the two consistent. This also popped up when I was looking at the forced row_major/tiled issue, where the layout in the op parameter was tiled but this was not reflected in the tensor (the tensor layout was still row-major). Thanks for the explanation!
Temporary fix for the discrepancy between MemoryConfigAttr and the layout attribute of the output tensor of the generic version of ttnn.ToLayoutOp.
Removes op input overrides that are no longer needed from the MNIST sharding POC.
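
As a rough sketch of the consistent state the fix aims for (again with purely illustrative attribute values), the MemoryConfigAttr on the op and the encoding of the result tensor should describe the same placement:

```mlir
// Hypothetical sketch: memory_config and the result tensor's encoding
// (#layout5) now agree, so the runtime behavior matches what the output
// tensor type advertises.
%1 = "ttnn.to_layout"(%0, %device)
       <{layout = #ttnn.layout<tile>,
         memory_config = #ttnn.memory_config<#l1, <height_sharded>>}>
       : (tensor<64x128xbf16, #layout4>, !ttnn.device)
       -> tensor<64x128xbf16, #layout5>  // #layout5: L1, height-sharded
```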