OOM often occurs when converting models with very large structures, because a large number of NumPy arrays are held in RAM during the automatic correction of accuracy errors.
Issue Type
Others
OS
Linux
onnx2tf version number
1.25.7
onnx version number
1.16.1
onnxruntime version number
1.18.1
onnxsim (onnx_simplifier) version number
0.4.33
tensorflow version number
2.17.0
Download URL for ONNX
https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/onnx/unet
Parameter Replacement JSON
NA
Description
OOM often occurs when converting models with very large structures, because a large number of NumPy arrays are held in RAM during the automatic correction of accuracy errors. Diffusion models such as the linked Stable Diffusion UNet are typical examples.
Ref
https://zenn.dev/kazuhito/articles/800e1176270c43
[TODO] Switch to a mode that performs inference in slow delay shape estimation mode during very large size transformations such as diffusion models #430
https://github.com/Kazuhito00/BiRefNet-ONNX-Sample/releases/download/v0.0.1/birefnet_1024x1024.onnx
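For rough intuition on why this runs out of memory, here is a minimal sketch that estimates the RAM held by float32 NumPy arrays. The shapes are hypothetical examples in the style of a diffusion UNet's intermediates, not values measured from the linked models:

```python
import numpy as np

# Illustrative: estimate RAM consumed by intermediate float32 arrays
# kept alive during accuracy-error correction. Shapes below are
# hypothetical examples, not taken from the linked UNet ONNX file.
shapes = [
    (1, 320, 64, 64),
    (1, 640, 32, 32),
    (1, 1280, 16, 16),
]
itemsize = np.dtype(np.float32).itemsize  # 4 bytes per element
total_bytes = sum(int(np.prod(s)) * itemsize for s in shapes)
print(f"{total_bytes / 1024**2:.2f} MiB for just {len(shapes)} tensors")
# A graph with thousands of nodes, each retaining outputs like these,
# multiplies this figure into tens of gigabytes.
```

Even a few megabytes per node becomes prohibitive once every node's output is retained simultaneously, which is why a lazy or on-demand inference mode helps for very large graphs.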