
Installing locally and following instructions exactly with virtual environment #28

Open
anumerico opened this issue Oct 8, 2023 · 4 comments

@anumerico

Still getting errors; this is the current one:

RuntimeError: TemporalEncoderDecoder: TemporalViTEncoder: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
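
For reference, the error message itself points at the standard workaround: pass map_location when loading the checkpoint so CUDA storages are remapped to the CPU. A minimal sketch of that pattern, with the checkpoint path as a placeholder rather than the project's actual file name:

import torch

# Remap all CUDA storages in the checkpoint to the CPU so the weights can be
# deserialized on a machine where torch.cuda.is_available() is False.
checkpoint = torch.load("pretrained_weights.pth", map_location=torch.device("cpu"))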

@CarlosGomes98
Collaborator

Indeed. For running on CPU, I merged a new PR that should fix this; pull again from main and it should be resolved.

@anumerico
Author

anumerico commented Oct 8, 2023

I'm installing the NVIDIA drivers, CUDA, and so on to see whether that fixes the issue.
This is as far as I have gotten:

RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
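
For what it's worth, one way to confirm this is a compute-capability mismatch rather than a driver problem is to compare the card's capability against the architectures the installed PyTorch build ships kernels for; a quick diagnostic sketch, assuming a single GPU at index 0:

import torch

# Compute capability of the first GPU, e.g. (3, 5) for an older Kepler card.
major, minor = torch.cuda.get_device_capability(0)
print(f"GPU compute capability: {major}.{minor}")

# Architectures this PyTorch build includes kernels for, e.g. ['sm_37', 'sm_50', ...].
# If sm_<major><minor> is not in this list, CUDA ops fail with "no kernel image".
print("Supported architectures:", torch.cuda.get_arch_list())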

@anumerico
Author

anumerico commented Oct 8, 2023

Cool, I saw what you did @CarlosGomes98. I did a git pull and hopefully it works with non-CUDA installations. The issue now is that I managed to get CUDA installed, but my GPU is too old to be supported:
Found GPU0 NVIDIA GeForce **** which is of cuda capability 3.5.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability supported by this library is 3.7.
How can I disable the GPU to test your new CPU-only code?

@anumerico
Author

anumerico commented Oct 8, 2023

That sorted the issue:
https://stackoverflow.com/questions/53266350/how-to-tell-pytorch-to-not-use-the-gpu

I just tested your new code for the CPU-only case, and it works.
If the GPU is deprecated or unsupported, just run this command in the shell before running the training command:

export CUDA_VISIBLE_DEVICES=""
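
The same thing can also be done from inside Python, as long as the variable is set before torch initializes CUDA; a minimal sketch:

import os

# Hide all GPUs from PyTorch; this must run before CUDA is initialized,
# so set it before importing torch (or at the very top of the training script).
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import torch
print(torch.cuda.is_available())  # False, so the CPU-only code path is used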
