
Unable to create engine from onnx model #26

Closed
xa1on opened this issue Apr 3, 2024 · 40 comments

Comments

@xa1on
Contributor

xa1on commented Apr 3, 2024

I've prepared the baseline ONNX model; however, whenever I try to run depth-anything-tensorrt.exe, it outputs:

Loading model from models/depth_anything_vitb14.onnx...

and then the program exits without printing any errors at all.

This is the command I'm using:

build/Release/depth-anything-tensorrt.exe models/depth_anything_vitb14.onnx video/davis_dolphins.mp4
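Since the program exits with no message, checking the raw process exit code can reveal a loader failure before any of the program's own output. A minimal sketch (the helper name `run_and_report` is mine; on Windows, a missing dependency DLL typically surfaces as NTSTATUS 0xC0000135):

```python
# Sketch: surface silent native crashes by inspecting the process exit code.
# The paths below are the ones from this issue; adjust to your layout.
import subprocess

def run_and_report(cmd):
    """Run a command and print its exit code, hex-decoded for Windows NTSTATUS values."""
    proc = subprocess.run(cmd)
    code = proc.returncode & 0xFFFFFFFF  # normalize negative codes to unsigned
    print(f"exit code: {proc.returncode} (0x{code:08X})")
    # 0xC0000135 = STATUS_DLL_NOT_FOUND: a dependency DLL could not be located
    return proc.returncode

if __name__ == "__main__":
    try:
        run_and_report([
            "build/Release/depth-anything-tensorrt.exe",
            "models/depth_anything_vitb14.onnx",
            "video/davis_dolphins.mp4",
        ])
    except FileNotFoundError:
        print("executable not found; run this from the repo root")
```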

I'm assuming that I'm exporting the ONNX model properly; just in case, though, I'll upload a log. Here's the command I used:

python export.py --encoder vitb --load_from depth_anything_vitb14.pth --image_shape 3 518 518

onnxmodelexportlog.txt

Anyone have any idea why this isn't working for me?
Thanks.

CUDA: 11.6
TensorRT: 8.6.1.6
Windows: 11

@spacewalk01
Owner

Did you put the opencv_worldxxx.dll file in the build/Release folder?

@spacewalk01
Owner

[screenshot]

@spacewalk01
Owner

Also add the video I/O DLL files.
[screenshot]
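A quick way to check the suggestion above is to verify that the runtime DLLs actually sit next to the executable. A minimal sketch (the DLL names are examples only; the version suffix depends on your OpenCV build):

```python
# Sketch: check that the OpenCV runtime DLLs are present next to the executable.
from pathlib import Path

def missing_dlls(exe_dir, names):
    """Return the subset of `names` not present in `exe_dir`."""
    folder = Path(exe_dir)
    return [n for n in names if not (folder / n).is_file()]

if __name__ == "__main__":
    # Hypothetical names; match them to your actual OpenCV version.
    needed = ["opencv_world481.dll", "opencv_videoio_ffmpeg481_64.dll"]
    absent = missing_dlls("build/Release", needed)
    if absent:
        print("missing:", ", ".join(absent))
    else:
        print("all runtime DLLs found")
```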

@spacewalk01
Owner

Let me know if it works. Please give a star if you like the project and my help.

@xa1on
Contributor Author

xa1on commented Apr 9, 2024

I've already starred the project; however, even with the DLL files in place, the same issue persists. I'm still not getting any errors, and no engine file is being built.

@xa1on
Contributor Author

xa1on commented Apr 9, 2024

Let me know if it works. Please give a star if you like the project and my help.

[screenshot]

@xa1on
Contributor Author

xa1on commented Apr 9, 2024

I've also been trying to build the engine file with trtexec using an older version of this repo, but I'm still running into issues.
I'm following this guide, but I keep running into a "network creation failed" error in trtexec.
My issue is outlined here with all the relevant files.

@xa1on
Contributor Author

xa1on commented Apr 9, 2024

Let me know if it works. Please give a star if you like the project and my help.

Thank you for taking the time to read my issue. Your work looks amazing; I hope you're still able to help me with this :D

@spacewalk01
Owner

Did you modify CMakeLists.txt to set your OpenCV and TensorRT paths?
[screenshot]
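For reference, the overrides in question usually look something like this near the top of CMakeLists.txt (a sketch with hypothetical paths and variable names; check the repo's actual CMakeLists.txt for the names it expects):

```cmake
# Hypothetical paths; point these at your own installs.
set(OpenCV_DIR "C:/opencv/build")        # folder containing OpenCVConfig.cmake
set(TENSORRT_DIR "C:/TensorRT-8.6.1.6")  # TensorRT root directory
```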

@spacewalk01
Owner

spacewalk01 commented Apr 9, 2024

Please just use the most recent version and let's try to fix your issue.

@spacewalk01
Owner

I checked your onnxmodelexportlog.txt file. It seems your engine is correctly built.

@xa1on
Contributor Author

xa1on commented Apr 9, 2024

Did you modify CMakeLists.txt to set your OpenCV and TensorRT paths? [screenshot]

I have modified that file with the correct paths:
[screenshot]

@spacewalk01
Owner

That is very strange. It should show an error if there is a bug or problem in the code. If it doesn't, it may be a system error.

@spacewalk01
Owner

Or permissions. Your TensorRT and OpenCV are in the C: folder; will you try running it as admin?

@xa1on
Contributor Author

xa1on commented Apr 9, 2024

I'm sorry I can't provide any further information, but yeah, it still doesn't work.
[screenshot]

@spacewalk01
Owner

What is your gpu?

@xa1on
Contributor Author

xa1on commented Apr 9, 2024

I have a mobile NVIDIA GTX 1060 with 3 GB of VRAM.
[screenshot]

@spacewalk01
Owner

I used the ONNX model you posted here and created the engine successfully. I think there is a CUDA version problem in your system; maybe your CUDA 11.6 version is not suitable for your GPU. I will look it up for you.
[screenshot]

@xa1on
Contributor Author

xa1on commented Apr 9, 2024

I have been able to successfully run the ONNX model with CUDA acceleration from this repo.
I don't think compatibility with the CUDA version is the issue. It's possible that my GPU just does not support TensorRT.

@spacewalk01
Owner

I found out that the compute capability (version) of your GPU (NVIDIA GTX 1060) is 6.1.

@spacewalk01
Owner

spacewalk01 commented Apr 9, 2024

Also, I checked that compute capability 6.1 (Pascal architecture) is compatible with CUDA 8.0.
[screenshot]

@spacewalk01
Owner

Please install CUDA 8.0. Also, don't forget to install the cuDNN version for CUDA 8.0.
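A note on the versions being discussed: a card's compute capability sets the minimum CUDA toolkit that first supported it, but later toolkits keep supporting older architectures. A small illustrative table (not exhaustive; the mapping names are mine):

```python
# Sketch: CUDA toolkit that first introduced support for a given compute capability.
# Illustrative subset only.
MIN_CUDA_FOR_CC = {
    "6.1": "8.0",   # Pascal (e.g. GTX 10xx) -- later toolkits such as 11.x still target it
    "7.5": "10.0",  # Turing
    "8.6": "11.1",  # Ampere
    "8.9": "11.8",  # Ada
}

def min_cuda(cc):
    """Return the first CUDA toolkit supporting compute capability `cc`."""
    return MIN_CUDA_FOR_CC.get(cc, "unknown")

print(min_cuda("6.1"))  # -> 8.0
```

So a GTX 1060 (compute capability 6.1) needs at least CUDA 8.0, but CUDA 11.x toolkits still support Pascal, which matches the eventual fix in this thread.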

@xa1on
Contributor Author

xa1on commented Apr 9, 2024

[screenshot]

nvidia-smi states that my current driver supports CUDA 12.4, which should allow me to use CUDA version 11.6.
I have been able to use version 11.6 with this repo too.

Let me try it out though.

@spacewalk01
Owner

Then check your ONNX Runtime CUDA version: which CUDA version is it installed with?

@xa1on
Contributor Author

xa1on commented Apr 9, 2024

Could you explain what you mean by the ONNX CUDA version? I was able to use version 11.6 with this repo.

@spacewalk01
Owner

Mine says I am using CUDA 12.0, but in fact I installed 11.8. It doesn't show the CUDA toolkit version.
[screenshot]

@xa1on
Contributor Author

xa1on commented Apr 9, 2024

Yes, but it should indicate that my current driver supports up to CUDA version 12.4. I have installed the latest NVIDIA driver for my GPU.

@spacewalk01
Owner

You used the pip install onnxruntime-gpu command to install the ONNX Runtime GPU package, right? Use conda list to check the onnxruntime-gpu version. Then you can check which onnxruntime version is installed with which CUDA version here: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html. It automatically installs the CUDA toolkit.

@xa1on
Contributor Author

xa1on commented Apr 9, 2024

[screenshots]
I should have version 11.6 installed properly; I will check with onnxruntime-gpu in a second.

@spacewalk01
Owner

spacewalk01 commented Apr 9, 2024

In my case, it automatically installed an onnxruntime version suitable for my GPU, which works with CUDA 11.8.
[screenshot]

@spacewalk01
Owner

onnxruntime is different; it will use the Anaconda environment to install CUDA. I suggest you install CUDA 8.0 and do the steps again.

@xa1on
Contributor Author

xa1on commented Apr 9, 2024

I will try that, but again, I have been able to use my version of ONNX Runtime with CUDA acceleration just fine in the past, with the exact same model, through this repo.

@spacewalk01
Owner

Hmm, sorry, that is all I can suggest from my observation. I can't guess without looking at it.

@xa1on
Contributor Author

xa1on commented Apr 9, 2024

I am looking at the ONNX website you sent me, and it seems to suggest that only CUDA versions 10 and up are supported.

@xa1on
Contributor Author

xa1on commented Apr 9, 2024

I'm kind of confused, because I've been able to use ONNX Runtime with CUDA enabled before.

@xa1on
Contributor Author

xa1on commented Apr 9, 2024

Hmm sorry that is what I can suggest from my observation. I can't guess without looking at it

I've completely redownloaded my CUDA and TensorRT. Now I'm getting an error stating:

```
onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
onnx2trt_utils.cpp:400: One or more weights outside the range of INT32 was clamped
4: [network.cpp::nvinfer1::Network::validate::3163] Error Code 4: Internal Error (Network has dynamic or shape inputs, but no optimization profile has been defined.)
```
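For reference, that last error means the exported ONNX graph has dynamic input dimensions, so TensorRT refuses to validate the network until an optimization profile declares min/opt/max shapes. A minimal sketch with the TensorRT Python API (the 1x3x518x518 shape comes from the export command above; this is an illustration, not the repo's actual code, and it only runs if the tensorrt package is installed):

```python
# Sketch: build a TensorRT engine from an ONNX model with a dynamic input,
# attaching the optimization profile the error message asks for.
try:
    import tensorrt as trt
    HAVE_TRT = True
except ImportError:
    HAVE_TRT = False
    print("tensorrt not installed; sketch only")

def build_engine(onnx_path, engine_path, shape=(1, 3, 518, 518)):
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    config = builder.create_builder_config()
    # The fix for the error above: declare min/opt/max shapes for the dynamic input.
    profile = builder.create_optimization_profile()
    profile.set_shape(network.get_input(0).name, shape, shape, shape)
    config.add_optimization_profile(profile)
    engine = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(engine)

if HAVE_TRT:
    build_engine("models/depth_anything_vitb14.onnx", "depth_anything_vitb14.engine")
```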

@xa1on
Contributor Author

xa1on commented Apr 9, 2024

I've figured it out. It was either my CUDA, cuDNN, or TensorRT installation that was the problem. I reinstalled everything, and it works now.

Thanks for all your help!

@xa1on xa1on closed this as completed Apr 9, 2024
@spacewalk01
Owner

So it works with CUDA 11.6 on the NVIDIA GTX 1060?

@xa1on
Contributor Author

xa1on commented Apr 9, 2024

So it works with CUDA 11.6 on the NVIDIA GTX 1060?

I reinstalled everything and used CUDA 11.8 for my GPU. I believe the problem was the cuDNN installation.

@spacewalk01
Owner

I see.
