
[Optimizer/TTNN] Migrating Optimizer to TTNN with minimal possible fu… #944

Merged
1 commit merged into main on Oct 22, 2024

Conversation

nobradovictt (Contributor)

Migrating the Optimizer to the TTNN dialect. I tried to minimize functional changes, but due to the way ToLayoutOp is currently handled in TTNN lowering and the difference between the layout attr in the TTIR and TTNN dialects, special handling was needed for memory management ops. Resharding is broken and will need to be rewritten, but based on inspecting the old code it would not have worked in all cases either. These issues should be addressed in a follow-up change, in sync with changes to TTNN lowering and the layout attr.
All tests, including the Optimizer ones, are passing.
The pure Optimizer pass test has been updated to start from TTNN IR.
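For context, a minimal sketch of what that special handling amounts to: recognize the TTNN memory management ops (ToLayoutOp, ToMemoryConfigOp) and treat them separately from compute ops when the Optimizer assigns layouts. The helper below is hypothetical (its name and the surrounding includes are assumptions, not code from this PR); the actual check added here is visible in the review diff further down.

// Hypothetical helper, not verbatim from this PR: classify TTNN memory
// management ops so the Optimizer can special-case them instead of
// assigning them layouts the way it does for compute ops. Assumes the
// TTNN dialect op definitions and llvm::isa (as used in the review
// snippets below) are in scope.
static bool isMemoryManagementOp(mlir::Operation *op) {
  return llvm::isa<ToLayoutOp>(op) || llvm::isa<ToMemoryConfigOp>(op);
}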

Closes #910

jnie-TT (Contributor) commented Oct 21, 2024

FYI @nobradovictt, this PR (#840) might be useful for the layout lowering and the current hacks we have in place.

  return true;
}
- if (llvm::isa<ToLayoutOp>(op)) {
+ if (llvm::isa<ToLayoutOp>(op) || llvm::isa<ToMemoryConfigOp>(op) ||
Contributor:

I think we should consider generating layouts for these ops as well in the future. Not in this PR :).


moduleOp->walk([&](Operation *op) {
  if (op->getNumResults() == 0) {
    return;
  }

  if (!isa<RankedTensorType>(op->getResult(0).getType())) {
Contributor:

Why is this needed?

nobradovictt (Contributor Author):

ttnn.get_device
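That is, ops such as ttnn.get_device produce a device handle rather than a ranked tensor, so the walk has to bail out before inspecting the result's tensor type. A commented sketch of the same guard shown above (the final comment is a placeholder for the rest of the analysis, not code from this PR):

moduleOp->walk([&](mlir::Operation *op) {
  // Ops with no results have nothing to assign a layout to.
  if (op->getNumResults() == 0) {
    return;
  }
  // Skip ops whose first result is not a ranked tensor, e.g. ttnn.get_device,
  // which returns a device handle instead of a tensor value.
  if (!mlir::isa<mlir::RankedTensorType>(op->getResult(0).getType())) {
    return;
  }
  // ... layout handling for tensor-producing ops continues here ...
});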

nobradovictt (Contributor Author)

> FYI @nobradovictt, this PR (#840) might be useful for the layout lowering and the current hacks we have in place.

Thanks, I'll take a look. I need to make a follow-up change after this one regarding layout change handling in the Optimizer.

nobradovictt force-pushed the nobradovic/ttnn_optimizer branch from 94f5a2a to fb541f8 on October 22, 2024 07:59
nobradovictt merged commit d6d6aea into main Oct 22, 2024
13 checks passed