[Pallas] Support Dynamo #6477
Conversation
@bdhirsh Hi Brian, I'm having difficulty registering custom ops with functionalization enabled. Here is the error log; do you have any insights? Maybe the aten schema should look different?
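For reference, the shape of schema that functionalization has to reason about for a mutating op looks roughly like the sketch below. The `mylib` namespace, op name, and argument list are hypothetical stand-ins, not the actual registration in this PR; the key part is the `Tensor(a!)` alias annotation marking the mutated argument.

```python
import torch

# Hypothetical namespace and op, for illustration only.
lib = torch.library.Library("mylib", "DEF")

# "Tensor(a!)" marks the first argument as mutated in place; an op that
# mutates inputs and returns nothing is the pattern the auto-functionalize
# higher-order-op is designed to rewrite.
lib.define("tpu_custom_call_(Tensor(a!) output, Tensor[] inputs, str payload) -> ()")

def tpu_custom_call_impl(output, inputs, payload):
    # Stand-in body; the real kernel would dispatch the Pallas payload
    # as a TPU custom call writing into `output`.
    output.copy_(inputs[0] + 1)

lib.impl("tpu_custom_call_", tpu_custom_call_impl, "CPU")
```

Calling `torch.ops.mylib.tpu_custom_call_(out, [x], payload)` then mutates `out` in place, which is exactly the behavior functionalization must account for.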
@alanwaketan - you have a custom op that mutates some of its inputs, and recently @zou3519 added an "auto-functionalize" higher-order-op that tries to automatically functionalize mutable custom ops. I'm not sure what's causing that error. Although if you're worried about trace time, you might be a bit better off with a hand-written C++ functionalization kernel (similar to the code-generated ones we have for ATen). You can find some examples to base it off of if you build pytorch locally and inspect some of the generated kernels there.
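As a rough illustration of what such a kernel does (rendered in Python rather than the C++ the real ones are written in): compute the result out-of-place, then replay the mutation onto the original argument. All names here are hypothetical.

```python
import torch

# Hypothetical functional variant: returns a fresh tensor instead of
# writing into an argument (stand-in for the real TPU custom call).
def tpu_custom_call(inputs, payload):
    return inputs[0] + 1

# The clone-and-replay pattern a functionalization kernel follows: run the
# functional variant, then copy the result back into the mutated argument.
# The code-generated ATen kernels do the same in C++ against
# FunctionalTensorWrapper rather than eager tensors.
def functionalized_tpu_custom_call_(output, inputs, payload):
    new_output = tpu_custom_call(inputs, payload)
    output.copy_(new_output)

out = torch.empty(4)
functionalized_tpu_custom_call_(out, [torch.ones(4)], payload="<pallas payload>")
print(out)  # tensor([2., 2., 2., 2.])
```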
Thanks, Brian. Will look into this. On the other hand, I guess I can also change the semantics of my custom op to be out-of-place. Then all the problems should go away?
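If the op is made functional, the schema loses its alias annotation and simply returns a new tensor, leaving functionalization nothing to rewrite. A minimal sketch, again with hypothetical names (a separate namespace so it can coexist with the in-place sketch above):

```python
import torch

lib = torch.library.Library("mylib2", "DEF")

# No alias annotations and a fresh Tensor result: nothing for
# functionalization to rewrite.
lib.define("tpu_custom_call(Tensor[] inputs, str payload) -> Tensor")

def tpu_custom_call_impl(inputs, payload):
    return inputs[0] + 1  # stand-in for the real TPU custom call

lib.impl("tpu_custom_call", tpu_custom_call_impl, "CPU")

y = torch.ops.mylib2.tpu_custom_call([torch.ones(4)], "<pallas payload>")
```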
Force-pushed from bc6b14c to ebccfaa
I will land this as it is and do a follow-up to make tpu_custom_call_ functional. Thanks @qihqi for the approval.
Summary:
This pull request enables Dynamo support for custom TPU calls, e.g. ones written in Pallas.
Test Plan:
PJRT_DEVICE=TPU XLA_DISABLE_FUNCTIONALIZATION=1 python test/test_operations.py -v -k test_tpu_custom_call_pallas_add_one_dynamo
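For context, the path this test exercises looks roughly like the sketch below: a Python wrapper around the in-place TPU custom call, compiled with the `openxla` Dynamo backend. The `_xla_tpu_custom_call_` binding name, its argument order, and the payload string are assumptions based on the test name, not copied from this PR; `XLA_DISABLE_FUNCTIONALIZATION=1` in the test plan sidesteps the functionalization issue discussed above while the op is still in-place.

```python
import torch
import torch_xla
import torch_xla.core.xla_model as xm

# Assumed payload: in the real test this is the serialized artifact of a
# Pallas "add one" kernel lowered via JAX, not a literal string.
payload = "<serialized pallas kernel>"

def add_one(x):
    output = torch.empty_like(x)
    # Assumed binding and signature for the in-place TPU custom call this
    # PR wires into Dynamo: (output tensors, input tensors, payload).
    torch_xla._XLAC._xla_tpu_custom_call_([output], [x], payload)
    return output

compiled = torch.compile(add_one, backend="openxla")
x = torch.arange(8.0, device=xm.xla_device())
print(compiled(x))
```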