Export dynamic batch size ONNX using ONNX's DeformConv #167
base: main
Conversation
Thanks a lot, this really helps! I'll take time to look at it tomorrow.
Sure, I've updated the notebook to reduce the modifications.
Thank you so much, @itskyf, for your contribution! Have you had a chance to test whether the execution works with ONNX Runtime?
Hi, @itskyf. Did you successfully export the ONNX model? I tried it but ran into this problem with both `PyTorch==2.0.1+onnxruntime-gpu==1.18.1` and `PyTorch==2.5.1+onnxruntime-gpu==1.20.1`.
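For anyone else checking this, here is a minimal sketch of how one might smoke-test the exported model with ONNX Runtime. The file name and input shape are hypothetical, and it assumes your onnxruntime build actually provides a kernel for `DeformConv`:

```python
import numpy as np
import onnxruntime as ort

# Hypothetical file name; substitute the path of your exported model.
sess = ort.InferenceSession(
    "birefnet_dynamic.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Use a batch size different from the one used at export time
# to confirm the dynamic batch axis actually works.
x = np.random.rand(2, 3, 1024, 1024).astype(np.float32)
outputs = sess.run(None, {sess.get_inputs()[0].name: x})
print([o.shape for o in outputs])
```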
Hi @ZhengPeng7, I believe the issue arises because ONNX has implemented the native `DeformConv` operator only since opset 19.
@ZhengPeng7 ah, I forgot to mention that we also need to update the onnx package for opset 19 support.
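As a quick sanity check, one can verify that the installed `onnx` package knows about opset 19's `DeformConv` before exporting (a sketch; the version threshold is from the ONNX release notes as I understand them):

```python
import onnx
from onnx import defs

# Opset 19 ships with onnx >= 1.14, so an older package will fail to
# validate a model that uses the native DeformConv operator.
print("onnx version:", onnx.__version__)

schema = defs.get_schema("DeformConv")  # raises if the op is unknown
print("DeformConv available since opset", schema.since_version)  # expect 19
```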
@itskyf Could you please provide the code you used to convert the dynamically batched model to TensorRT? Thanks in advance!
Thanks for @itskyf's PR; this is exactly what I tested, and it worked. I have a question about this PR for @itskyf: when I tested it this way, I found that the resulting TRT engine only works as expected when the batch size used at generation time matches the batch size used for inference, and I figured out that the change in #166 should be made. Did you find the same issue?
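For reference, a minimal sketch of how such a conversion might look with the TensorRT Python API, using an optimization profile for the dynamic batch axis. The input tensor name, shapes, and file names are hypothetical, and TensorRT must be able to handle the `DeformConv` node (e.g. via a plugin) for the parse to succeed:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# The EXPLICIT_BATCH flag is required on TensorRT 8.x and a
# deprecated no-op on newer releases.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("birefnet_dynamic.onnx", "rb") as f:  # hypothetical file name
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
# Declare the allowed range for the dynamic batch dimension.
profile = builder.create_optimization_profile()
profile.set_shape(
    "input",  # hypothetical input tensor name
    min=(1, 3, 1024, 1024),
    opt=(4, 3, 1024, 1024),
    max=(8, 3, 1024, 1024),
)
config.add_optimization_profile(profile)

engine_bytes = builder.build_serialized_network(network, config)
with open("birefnet_dynamic.engine", "wb") as f:
    f.write(engine_bytes)
```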
This PR replaces the usage of `deform_conv2d_onnx_exporter` with the native `DeformConv` operator available in ONNX opset 19. The exported ONNX model now supports dynamic batch sizes.
Notes
The `symbolic_deform_conv_19()` function was generated using OpenAI o1. It works in my testing, but let me know if there are any special requirements to consider.
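For readers who want the shape of the change without opening the notebook, here is a minimal sketch of such a symbolic function and a dynamic-batch export. It is illustrative only and may differ from the exact code in this PR: the argument order follows the `torchvision::deform_conv2d` schema as I understand it, the tiny module and all names are hypothetical, and a complete implementation would also handle the case where `use_mask` is false:

```python
import torch
import torchvision
from torch.onnx import register_custom_op_symbolic, symbolic_helper


# Map torchvision::deform_conv2d onto the native ONNX DeformConv op (opset 19).
@symbolic_helper.parse_args(
    "v", "v", "v", "v", "v", "i", "i", "i", "i", "i", "i", "i", "i", "b"
)
def symbolic_deform_conv_19(
    g, input, weight, offset, mask, bias,
    stride_h, stride_w, pad_h, pad_w, dil_h, dil_w,
    groups, offset_groups, use_mask,
):
    # ONNX DeformConv takes inputs (X, W, offset, B, mask); note that the
    # bias/mask order differs from the torchvision schema above.
    return g.op(
        "DeformConv", input, weight, offset, bias, mask,
        dilations_i=[dil_h, dil_w],
        group_i=groups,
        offset_group_i=offset_groups,
        pads_i=[pad_h, pad_w, pad_h, pad_w],
        strides_i=[stride_h, stride_w],
    )


register_custom_op_symbolic(
    "torchvision::deform_conv2d", symbolic_deform_conv_19, 19
)


# A hypothetical toy module so the export below is self-contained.
class TinyDeform(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.offset = torch.nn.Conv2d(3, 2 * 3 * 3, 3, padding=1)
        self.mask = torch.nn.Conv2d(3, 3 * 3, 3, padding=1)
        self.deform = torchvision.ops.DeformConv2d(3, 8, 3, padding=1)

    def forward(self, x):
        return self.deform(x, self.offset(x), torch.sigmoid(self.mask(x)))


model = TinyDeform().eval()
dummy = torch.randn(1, 3, 64, 64)

# Export with a dynamic batch axis on both input and output.
torch.onnx.export(
    model, dummy, "deform_dynamic.onnx",
    opset_version=19,
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```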