Support non-traceable Custom Ops with opaque arguments #7330
Comments
Currently I don't think you can register a custom op with torch using types that are not defined in native_functions.yaml. Also, I am curious why not just use int tensors to hold the bytes, as you have shown in the example above. That should already work.
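The int-tensor workaround suggested above can be sketched without torch_xla; the names and the descriptor layout here are illustrative, a plain list stands in for a `torch.tensor(..., dtype=torch.uint8)`:

```python
import struct

# Hypothetical opaque POD descriptor (rows, cols, scale), in the spirit of
# the JAX-style Descriptor example referenced in this issue.
descriptor = struct.pack("<iif", 4, 8, 0.5)

# Workaround: carry the opaque bytes as a "tensor" of uint8-range ints.
as_uint8 = list(descriptor)

# Inside the custom-op implementation, the bytes are reinterpreted back.
recovered = bytes(as_uint8)
rows, cols, scale = struct.unpack("<iif", recovered)
print(rows, cols, scale)  # -> 4 8 0.5
```

This round-trips fine; the open question raised below is how the export would know the tensor holds opaque bytes rather than real tensor data.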
Hi, @qihqi, is it ok to assign this ticket to you?
We would like to expose already tested and optimized implementations in C that do not necessarily take tensors. We could definitely use a Tensor object to hold opaque/random bytes and reinterpret-cast it in the implementation (and that is part of the idea), but to know whether such a tensor holds an actual tensor or an opaque string, it needs to be annotated somehow when we export from torch to HLO. The annotations can then be used to lower the custom-call arguments accordingly to opaque pointers. We could maybe use #7046 to introduce the annotation, but I was thinking maybe we could find a more generic solution.
🚀 Feature
torch_xla.stablehlo supports exporting custom ops to StableHLO custom calls for tensor arguments.
We would like to be able to export custom ops taking arbitrary opaque strings as arguments to StableHLO.
Motivation
Some custom operations come from external C sources and are used through Python bindings during inference.
These operations sometimes take POD structures that are not necessarily tensors as arguments, a little bit like the opaque Descriptor example in the JAX custom-op tutorial.
Such operations can be used at any point in the model; they are usually ([opaque structs]) -> tensors or (tensors) -> [opaque structs], but we could imagine an op in the middle of a model having a side effect on an opaque external structure.
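To make the opaque-Descriptor pattern concrete, here is a minimal sketch of what the C-side reinterpretation looks like, modeled in Python with ctypes; the `Descriptor` fields and `custom_kernel` name are hypothetical, not part of any existing API:

```python
import ctypes

# Hypothetical POD descriptor, mirroring the opaque Descriptor pattern
# from the JAX custom-op tutorial referenced above.
class Descriptor(ctypes.Structure):
    _fields_ = [("rows", ctypes.c_int),
                ("cols", ctypes.c_int),
                ("scale", ctypes.c_float)]

def custom_kernel(opaque: bytes) -> Descriptor:
    """Stand-in for the C implementation: it receives an opaque
    pointer + size and reinterprets the bytes as the descriptor."""
    assert len(opaque) == ctypes.sizeof(Descriptor)
    return Descriptor.from_buffer_copy(opaque)

d = Descriptor(4, 8, 0.5)
raw = bytes(d)  # the opaque byte string a custom call would carry
back = custom_kernel(raw)
print(back.rows, back.cols, back.scale)  # -> 4 8 0.5
```

The point is that nothing in `raw` is tensor data, which is exactly why the export needs some annotation to distinguish it from a real tensor operand.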
Pitch
Here is example PyTorch code and what the HLO could potentially look like.
The idea is to be able to declare some arguments as "external" so that the export keeps them on the top-level function and annotates them with attributes, which would then be used downstream to lower them to opaque pointers and sizes.
My example is based on #7017
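Since the original snippet is not reproduced here, a rough sketch of the kind of StableHLO this pitch describes might look as follows; the `custom.opaque` attribute and the operand shapes are hypothetical, not an existing torch_xla convention:

```mlir
// Hypothetical: the exported main takes the opaque argument as a plain
// i8 buffer annotated as external, alongside a real tensor input.
func.func @main(%tensor_arg: tensor<4x8xf32>,
                %opaque_arg: tensor<12xi8> {custom.opaque = true}) -> tensor<4x8xf32> {
  // Downstream lowering would turn the annotated operand into an
  // opaque pointer + size instead of tensor data.
  %0 = stablehlo.custom_call @my_custom_op(%tensor_arg, %opaque_arg)
      : (tensor<4x8xf32>, tensor<12xi8>) -> tensor<4x8xf32>
  return %0 : tensor<4x8xf32>
}
```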