Add conversion pass for Arith ConstantOp #953
Conversation
Can you cut and paste an example in the commit message / as a comment on this PR? So torch-mlir is mixing arith and stablehlo together? I suppose xla doesn't have any cases like this so far.
Yes, it seems torch-mlir does convert the torch dialect to StableHLO, but any ops of native MLIR dialects, e.g. arith.constant, are left unconverted. I referenced the issue, but tt-torch is a private repo so not everyone can see it; posting the example here for reference:
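A minimal sketch of the kind of IR in question, with an assumed function name, shapes, and constant value rather than the actual tt-torch output: the graph itself is StableHLO, but the constant is emitted in the arith dialect.

```mlir
module {
  func.func @forward(%arg0: tensor<1x128xf32>) -> tensor<1x128xf32> {
    // The constant comes out in the arith dialect...
    %0 = arith.constant dense<1.000000e+00> : tensor<1x128xf32>
    // ...while the rest of the graph is StableHLO.
    %1 = stablehlo.add %arg0, %0 : tensor<1x128xf32>
    return %1 : tensor<1x128xf32>
  }
}
```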
Force-pushed from 0943350 to 8d2f32a
I'm wondering if we should support arith -> ttir conversion as part of the stablehlo path and just go straight from arith.constant to ttir.constant. My thinking is that presumably linalg/tosa might emit the same thing and they'd also need arith conversion. Can you try emitting those dialects from torch-mlir to see what the resulting IR is?
The reason I chose to first convert from arith.constant to stablehlo.constant is so the existing StableHLO to TTIR conversion path can handle it from there. There is an open PR with more changes here:
Ok, I will try a few examples.
Ok, I ran a few examples and I found that:
I had a slight preference for converting straight to ttir, but since torch-mlir only emits arith.constant (and no other arith ops), and since our constant op is so complicated as is, I think we'll have fewer issues overall if we simply do a thin conversion from arith to stablehlo/linalg and continue with the default path as @uazizTT did in the PR.
Sounds good, let's move forward with this approach. We can just duplicate it for the linalg path; it's pretty contained if it's just arith.constant.
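A sketch of how contained that rewrite is, again with illustrative shapes and values rather than IR from this PR: the optional pass only needs to rewrite the arith.constant into its StableHLO equivalent, and the existing StableHLO to TTIR path then applies unchanged.

```mlir
// Before the optional pass:
%0 = arith.constant dense<1.000000e+00> : tensor<1x128xf32>

// After the optional pass: same value, now in the StableHLO dialect,
// so the regular StableHLO-to-TTIR conversion picks it up.
%0 = stablehlo.constant dense<1.000000e+00> : tensor<1x128xf32>
```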
Force-pushed from 3f40604 to 030040d
Add an optional pass to convert arith.constant to stablehlo.constant.
This is needed to run examples in tt-torch until it is fixed upstream.
https://github.com/AleksKnezevic/tt-torch/issues/1