Lower AtenFull op #5781
Conversation
Thanks for working on this, @danielvegamyhre!
- Do we want to fall back to the pytorch/aten CPU version of this op if the `layout` and/or `pin_memory` values are not the default? I saw this pattern in other places in `aten_xla_type.cpp`, and the `tensor_methods::full` function in XLA doesn't support these parameters, so it seemed like the right thing to do, but I'm not sure.
Yes, this is the correct behavior. We can fall back to CPU for now for `full` like you did in this PR.
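The dispatch logic being discussed can be sketched roughly as follows. This is a hypothetical Python illustration of the fallback pattern, not the actual torch_xla C++ code; the names `xla_full` and the return tags are made up for the example:

```python
# Hypothetical sketch of the CPU-fallback dispatch pattern discussed
# above -- not the actual torch_xla implementation.

DEFAULT_LAYOUT = "strided"

def xla_full(size, fill_value, layout=None, pin_memory=None):
    # If the caller requests a non-default layout or pinned memory,
    # which the XLA-side full implementation does not support, route
    # the call to the eager CPU implementation of the op instead.
    if (layout is not None and layout != DEFAULT_LAYOUT) or pin_memory:
        return ("cpu_fallback", size, fill_value)
    # Otherwise use the lowered XLA implementation.
    return ("xla_lowering", size, fill_value)
```

The point of the pattern is that unsupported argument combinations stay correct (via the CPU path) while the common default case gets the new lowering.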
- I notice `full` was not listed in `xla_native_functions.yaml` (as expected) but there IS an existing test for it in `test_aten_xla_tensor_1.cpp` called `TestFull`, which seems to verify that we use `xla::empty` followed by `xla::fill_` to create a tensor with the fill value. So I just changed this test to check that we are instead using the new `full` method implemented in this PR. I confirmed the test is passing with these changes, and failing without the changes. Is this the correct thing to do here?
Good find -- so most likely what's happening here is that when `full` is not lowered, it's decomposed into `empty` and `fill`. But since this PR lowers `full`, what you did here -- removing the counter checks for `empty` and `fill` -- is correct, thanks!
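The decomposition described here -- an un-lowered `full` being executed as `empty` followed by `fill_` -- can be illustrated with NumPy as a stand-in (an assumption for illustration only; in PyTorch the decomposition happens inside the dispatcher, not in user code):

```python
import numpy as np

# Without a dedicated lowering, full is effectively executed as
# empty + fill: allocate uninitialized storage, then write the value.
decomposed = np.empty((2, 3), dtype=np.int64)
decomposed.fill(7)

# With a dedicated lowering, a single full op produces the same tensor.
direct = np.full((2, 3), 7, dtype=np.int64)

# Both paths yield identical results; only the ops recorded differ,
# which is why the test's op counters had to change in this PR.
assert (decomposed == direct).all()
```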
LGTM.
Awesome! Thank you @danielvegamyhre
* lower full
* update test for full op
* formatting
Fixes #5780
This is my first attempt at op lowering, and I have a couple questions:
- Do we want to fall back to the pytorch/aten CPU version of this op if the `layout` and/or `pin_memory` values are not the default? I saw this pattern in other places in `aten_xla_type.cpp`, and the `tensor_methods::full` function in XLA doesn't support these parameters, so it seemed like the right thing to do, but I'm not sure.
- I notice `full` was not listed in `xla_native_functions.yaml` (as expected) but there IS an existing test for it in `test_aten_xla_tensor_1.cpp` called `TestFull`, which seems to verify that we use `xla::empty` followed by `xla::fill_` to create a tensor with the fill value. So I just changed this test to check that we are instead using the new `full` method implemented in this PR. I confirmed the test is passing with these changes, and failing without the changes. Is this the correct thing to do here?