DINO inference on a CPU-only machine fails #157
Same problem when running on a MacBook that has no GPU.
Please check whether you had the CUDA runtime set up during installation. The CPU inference problem will be looked into later~
Thanks for your reply, and yes, I fully understand the CUDA runtime issue. I have trained a DINO model on a custom dataset and am now trying to run it on a CPU-only machine, so I am stuck on this error: with no GPU available I cannot install the CUDA Toolkit. Is there a way around this, or is it simply not possible at the moment?
OK, I now fully understand your problem; we will look into it later.
Thank you for releasing such a nice project. I have the same problem listed here, and I understand that the only way around it at this point is to have the CUDA runtime installed while building the project. This, however, makes the project really hard to deploy where only CPU instances are available. To get around this, I tried building a Docker container with the CUDA runtime available on a GPU-equipped machine and then deploying it. However, this leads to a massively sized Docker container due to the CUDA binaries and is hard to deploy in our setting. I see that you have listed a Python package and Docker image as high priorities, but I would like to suggest raising the priority of this bug so that the Python package/Docker image can also be used seamlessly in CPU-only environments rather than being needlessly dependent on the CUDA binaries. Thanks!
I am also running into the issue pointed out by @var316 and agree with @nolancardozo13 that it would really help to have this bug solved before the Python package and Docker image.
Just delete the following lines, and set
Hi @powermano, thanks! I just verified that this indeed works. Do you believe this has any implications for the model output during inference?
I trained DINO with my own dataset and the results are correct, but I did not compare the CPU and GPU results. I will test it.
Hey @powermano, do you mind sharing where you commented it out? Which file and lines?
Comment out the `_C` import block in the file: https://github.com/IDEA-Research/detrex/blob/main/detrex/layers/multi_scale_deform_attn.py Then, taking DN-DETR as an example, set `train.device = "cpu"`.
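The workaround above amounts to guarding the compiled-extension import so that CPU-only installs fall back to the pure-PyTorch path instead of crashing at import time. A minimal sketch of that pattern (the `try_import_ext` helper name is illustrative, not part of detrex):

```python
import importlib

def try_import_ext(name):
    """Return the named extension module, or None when it is unavailable
    (e.g. detrex built without a CUDA runtime)."""
    try:
        return importlib.import_module(name)
    except ImportError:
        return None

# On a CPU-only machine this yields None instead of raising, so callers
# can select the pure-PyTorch MultiScaleDeformableAttention fallback.
_C = try_import_ext("detrex._C")
MSDA_CUDA_AVAILABLE = _C is not None
```

Code that needs the CUDA kernels can then branch on `MSDA_CUDA_AVAILABLE` rather than assuming the extension exists.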
I've also tested all the eva-02-vitdet-dino models with this fix and they don't seem to be affected. @rentainhe, should this be merged to main? We can keep the import guarded:

```python
try:
    from detrex import _C
except ImportError:
    pass
```
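For completeness, the device overrides mentioned in this thread would sit in a LazyConfig-style inference script roughly like this. This is a sketch, not detrex's own code: the config and checkpoint paths are illustrative, and it assumes detectron2's `LazyConfig`/`instantiate` API as used by detrex.

```python
import torch
from detectron2.config import LazyConfig, instantiate

# Illustrative config path; substitute your own project config.
cfg = LazyConfig.load("projects/dino/configs/my_dino_config.py")
cfg.train.device = "cpu"   # the thread's workaround: run everything on CPU
cfg.model.device = "cpu"

model = instantiate(cfg.model)
model.to(cfg.train.device)
model.eval()

# Load weights onto the CPU explicitly; path is illustrative.
state = torch.load("model_final.pth", map_location="cpu")
```

With the `_C` import guarded as above, this avoids touching CUDA anywhere in the inference path.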
Hi,
I always end up with this common error:
`Cannot import 'detrex._C', therefore 'MultiScaleDeformableAttention' is not available.`
An inference script with `train.device` and `model.device = "cpu"` works, but it still requires a CUDA-enabled machine. Is there a way to bypass this? Is there a way to deploy/run DINO on a CPU-only machine/Docker image?
Many thanks!