[Core ATen Opset] Lower aten_full_like #5866
@wonjoolee95 I lowered this in #5781, but it appears this new unit test for the op is failing, so I can take a look at this issue. I made some progress on this by simply updating the

Now the arguments match those defined in the function signature, but there is a new error, which is:

I didn't know what this error meant, but I found some docs on Fake Tensors and Fake Tensor Modes here, which seem related. Having read those, I think I now understand why we use fake tensors for the exporting process. I then printed some metadata about each node in
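As an aside, here is a minimal sketch of the fake tensor behavior those docs describe (assuming the `torch._subclasses.fake_tensor` API): tensors created under a `FakeTensorMode` carry shape and dtype metadata but no real storage, which is what lets export trace a program without running any actual kernels:

```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

fake_mode = FakeTensorMode()

with fake_mode:
    x = torch.empty(2, 3)        # a FakeTensor: metadata only, no data
    y = torch.full_like(x, 7.0)  # ops propagate shape/dtype without computing

print(type(x))           # <class 'torch._subclasses.fake_tensor.FakeTensor'>
print(y.shape, y.dtype)  # torch.Size([2, 3]) torch.float32
```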
This comment indicates the

I also thought Dynamo was for JIT compilation, and the term "export" seems to imply AOT compilation, so I'm confused about Dynamo's role in this process. Have you seen this error before, or do you have any insight into what could cause it?

Also, just to check my understanding of the e2e export process: Dynamo does the tracing using the CPython frame evaluation API to produce an FX graph of tensor operations, which is sent to the XLA bridge. The XLA bridge is then supposed to return a graph of corresponding XLA operations. This graph of XLA ops can then be converted to StableHLO and fed into the XLA compiler, which produces the actual machine code that can be executed on the target device. Is this correct?
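If that mental model is right, the pipeline can be sketched roughly as follows. This is only a sketch assuming the `torch.export` and `torch_xla.stablehlo` helpers (`exported_program_to_stablehlo`); exact names may differ across versions:

```python
import torch
from torch.export import export
from torch_xla.stablehlo import exported_program_to_stablehlo

class M(torch.nn.Module):
    def forward(self, x):
        return torch.full_like(x, 7.0)

# Step 1: Dynamo traces the module ahead of time into an FX graph
# (an ExportedProgram), asserting there are no graph breaks.
ep = export(M(), (torch.randn(2, 3),))

# Step 2: the XLA bridge converts the FX graph of ATen ops into StableHLO,
# which the XLA compiler then turns into code for the target device.
shlo = exported_program_to_stablehlo(ep)
print(shlo.get_stablehlo_text('forward'))
```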
Hi Daniel, for some reason I don't see the test
So export actually uses Dynamo. It calls Dynamo and asserts there are no graph breaks. So if the JIT happens to JIT everything, then that is equivalent to AOT compilation.
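A quick way to see this JIT/AOT relationship is `torch.compile`'s `fullgraph` flag, which enforces the same no-graph-break condition that export relies on. A sketch:

```python
import torch

def f(x):
    # Data-dependent control flow forces a graph break under Dynamo.
    if x.sum() > 0:
        return x + 1
    return x - 1

# JIT-style: torch.compile tolerates graph breaks by falling back to
# eager execution between graph fragments.
jit_f = torch.compile(f)
jit_f(torch.randn(4))  # runs fine, possibly as several graphs

# Export-style: fullgraph=True asserts no graph breaks, i.e. the whole
# function must be captured ahead of time, or tracing fails loudly.
aot_f = torch.compile(f, fullgraph=True)
try:
    aot_f(torch.randn(4))
except Exception as e:
    print("no-graph-break assertion failed:", type(e).__name__)
```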
Hi, it seems like this issue may be mistakenly referring to the wrong aten op because it was using an older file -- https://raw.githubusercontent.com/pytorch/xla/5e63756c3438e0d25e32ba5dceac68d82d23993a/test/test_core_aten_ops.py. This should really be

Thanks!
This is already passing, closing.
In order for PyTorch/XLA to support the PyTorch core ATen opset, it requires lowering each core ATen op in PyTorch/XLA. This issue is used to track the PyTorch/XLA lowering for aten_full_like.
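For reference, a sketch of the op's semantics in eager PyTorch: `full_like` builds a tensor with the input's shape and, by default, its dtype, filled with a scalar:

```python
import torch

x = torch.randn(2, 3)
y = torch.full_like(x, 7.0)

# full_like(x, v) builds a new tensor from x's metadata, filled with v.
assert y.shape == x.shape and y.dtype == x.dtype
assert torch.equal(y, torch.full(x.shape, 7.0, dtype=x.dtype))
```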
Here are some general guidelines to lowering this op:
- Comment out the `@unittest.skip` or `@unittest.expectedFailure` decorator and run the unit test at test_core_aten_ops.py. E.g.: `pytest test/test_core_aten_ops.py -k test_aten_full_0`
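Beyond the unit test, a minimal smoke test on an XLA device can confirm the lowering end to end. This is a sketch assuming a working torch_xla install and the standard `torch_xla.core.xla_model` helpers:

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
x = torch.randn(3, 4, device=device)
y = torch.full_like(x, 7.0)  # exercises the aten::full_like lowering

xm.mark_step()  # force compilation/execution of the pending XLA graph
print(y.cpu())
```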
For any questions, feel free to leave a comment in this PR.