About Version-5 reshape #13
I've thought a little about how to implement this: since a PlaidML function operates over symbolically sized tensors anyway (output shapes aren't known until all input shapes have been bound), it might be possible to simply defer the reshape computation. Doing this will be tricky; the program compilation cache is currently keyed off tensor sizes, not tensor contents. I think it's doable, but it'll take some time, and it'll have to be scheduled relative to other features we'd like to add. In the meantime, if anyone feels like working on this, I'd be very happy to review PRs. :-)
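A minimal sketch of what deferring might look like, with all names invented for illustration (this is not PlaidML's actual API): the compilation cache key would have to include the concrete values of shape-determining inputs, not just the input shapes, so that two runs reshaping to different targets compile to different programs.

```python
# Hypothetical sketch: a compile cache keyed on both input shapes and the
# concrete values of shape-determining tensors (e.g. the `shape` input of a
# Version-5 Reshape). `build` stands in for the real compilation step.
_compile_cache = {}

def get_program(op, input_shapes, shape_values, build):
    key = (op, tuple(input_shapes), tuple(shape_values))
    if key not in _compile_cache:
        # Only compile when this exact (shapes, shape-values) combination
        # has not been seen before.
        _compile_cache[key] = build(op, input_shapes, shape_values)
    return _compile_cache[key]
```

Repeated calls with the same shapes and shape values would hit the cache, keeping the cost of deferral down to one compilation per distinct target shape.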
@earhart I have solved the problem:

```python
_ctx = plaidml.Context()

def get_value(x):
    ...
```

You can use `get_value` to get the value of the shape, and it works on my model.
Can you give an example of how to use this?
The only problem with this approach is that we still don't have the input tensor data to use for computing the reshape() until after the inputs are bound, after the tensor's been prepared (i.e. compiled). One other idea might be to defer the compilation: if we figure out that there's an operation whose Tile code depends on a tensor input, we could hold off on doing all the binding until we know the value of that input, and use caching to keep performance reasonable. I think we'd also want to add a check to see which tensors are actually used by the resulting program; if a tensor's only being provided to determine a shape, there's no reason to be transferring its data to the GPU.
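The "shape-only input" check described above could be sketched roughly like this. The graph representation and field names here are invented for illustration; they are not PlaidML's IR. The idea is just to walk the ops and report which graph inputs feed actual computation versus only shape calculations, so shape-only tensors never need a device transfer.

```python
# Hypothetical sketch: classify graph inputs as "data" (must be transferred
# to the device) or "shape-only" (consumed only to compute an output shape).
def classify_inputs(ops, graph_inputs):
    data_used = set()
    for op in ops:
        # Inputs listed in `shape_inputs` are consumed only for shape
        # computation; everything else is real data the program reads.
        data_used.update(set(op["inputs"]) - set(op.get("shape_inputs", ())))
    return {name: ("data" if name in data_used else "shape-only")
            for name in graph_inputs}

ops = [
    {"op": "Reshape", "inputs": ["X", "target_shape"],
     "shape_inputs": ["target_shape"]},
]
classification = classify_inputs(ops, ["X", "target_shape"])
# classification == {'X': 'data', 'target_shape': 'shape-only'}
```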
I ran onnx-plaidml on my model and got an error:
"Version-5 reshape() is not yet implemented by the PlaidML ONNX backend"
I found it is caused by the pixelshuffle function I used. Is there a partial implementation of this operation?
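For context on why pixel shuffle triggers this: it is typically implemented as reshape → transpose → reshape, where the target shapes are computed from the input tensor's dimensions at runtime. When such a model is exported to ONNX, the exporter tends to emit a Version-5 Reshape whose `shape` argument is a tensor input rather than a static attribute, which is exactly the case the backend rejects. A NumPy illustration (not the backend's code):

```python
import numpy as np

def pixel_shuffle(x, r):
    # Shapes are read from the input at runtime, so an ONNX export of this
    # produces a data-dependent (Version-5) Reshape.
    n, c, h, w = x.shape
    assert c % (r * r) == 0
    out_c = c // (r * r)
    x = x.reshape(n, out_c, r, r, h, w)
    x = x.transpose(0, 1, 4, 2, 5, 3)  # interleave the r x r blocks spatially
    return x.reshape(n, out_c, h * r, w * r)

x = np.arange(2 * 8 * 3 * 3, dtype=np.float32).reshape(2, 8, 3, 3)
y = pixel_shuffle(x, 2)
# y.shape == (2, 2, 6, 6)
```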