Expected a 'cuda' device type for generator (related to speed issues?) #22
Comments
Hi @polo5, thanks for your interest in our paper! Yes, MDEQ-Large does take a few hours to finish all epochs, so your calculation is correct. However, I should note that most (about 80%?) of the time is actually spent boosting accuracy from 90% to ~93.5%. If you are just looking for 90% accuracy, an MDEQ-Large should achieve that within 1.5-2 hours. If you use an MDEQ-Tiny model and set […]
Also, re: the error: I haven't encountered this error in this repo before, but I'll check PyTorch 1.10 to make sure!
I got this error as well when using PyTorch 1.10. After changing to PyTorch 1.8.1, everything is fine. You can take a look at this issue; it seems related to a bug in PyTorch 1.10.
Thanks @liu-jc. This is an important issue then, since PyTorch 1.10 is the recommended version for this repo. The other issue is that earlier PyTorch versions (<1.10) do work on Ubuntu but somehow don't work on Windows for this repo (I get a strange message-less interruption that looks like a segmentation fault). Oh well, if setting devices manually hasn't slowed down the code, I'm happy with that solution.
The issue with PyTorch <1.10 is that the hook implementation currently used to achieve O(1) memory (see e.g. https://github.com/locuslab/deq/blob/master/DEQ-Sequence/models/deq_transformer.py#L380) did not work before; it hit a bug that PyTorch only recently fixed in 1.10 (see this issue). I will check this again soon and update this thread. I've never tried a Windows environment, but I suspect it has something to do with WSL?
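For context, the hook-based O(1)-memory backward that the link above points to can be sketched as follows. This is a simplified illustration, not the repo's actual code: `deq_fixed_point` and the toy contraction `f` are hypothetical names, and a real DEQ would use a proper root solver rather than plain iteration.

```python
import torch

def deq_fixed_point(f, x, fwd_iters=50, bwd_iters=50):
    """Fixed-point forward pass with an implicit, O(1)-memory backward.

    `f` must be a contraction in its first argument so both the forward
    and backward fixed-point iterations converge.
    """
    # Forward: iterate z = f(z, x) to the fixed point WITHOUT building a
    # graph, so memory cost does not grow with the number of iterations.
    with torch.no_grad():
        z = torch.zeros_like(x)
        for _ in range(fwd_iters):
            z = f(z, x)

    # One differentiable evaluation at the fixed point re-attaches the
    # result to the autograd graph.
    z0 = z.detach().requires_grad_()
    f0 = f(z0, x)

    def backward_hook(grad):
        # Implicit-function-theorem backward: solve
        #   g = grad + (df/dz)^T g
        # by fixed-point iteration. The autograd.grad call *inside* a
        # backward hook is the pattern that PyTorch < 1.10 mishandled.
        g = grad
        for _ in range(bwd_iters):
            g = torch.autograd.grad(f0, z0, g, retain_graph=True)[0] + grad
        return g

    f0.register_hook(backward_hook)
    return f0
```

With `f(z, x) = 0.5*z + x`, the fixed point is `z* = 2x`, and the implicit backward correctly recovers `d(sum z*)/dx = 2` without ever storing the forward iterates.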
Heya, thanks for the great paper(s) :)
Initially I had to fix a few things to make your code run, but now I find it very slow and I'm wondering if I broke anything.
The cls_mdeq_LARGE_reg.yaml experiment runs at 130 samples/s post-pretraining on a GTX 2080, which means it takes hours to reach ~90% test accuracy (whereas a WideResNet takes ~10 min for that performance).

The main error I had to fix was the "Expected a 'cuda' device type for generator" error in the title, which according to this issue seems to be caused by this line in your code:

torch.set_default_tensor_type('torch.cuda.FloatTensor')

which I removed. After manually putting everything needed on .cuda(), I get the performance mentioned above. Is this normal or did I break something? Thanks!

Specs
PyTorch 1.10
Windows (RTX 3070) and Ubuntu 20 (GTX 2080) both tried
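For anyone hitting the same error, the manual-device workaround described above might look something like this minimal sketch (the `model`/`batch` names are placeholders, not the MDEQ code):

```python
import torch

# Instead of making CUDA the global default via
#   torch.set_default_tensor_type('torch.cuda.FloatTensor')
# pick the device once and move things to it explicitly.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = torch.nn.Linear(4, 2).to(device)   # stand-in for the real model
batch = torch.randn(8, 4, device=device)   # allocate tensors on the device directly
out = model(batch)
```

This keeps generators and other internals on their expected device, at the cost of having to pass `device` around explicitly.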