Replies: 2 comments
-
@sizhky it's not something I'll be adding to the current master. It's incredibly slow, to the point of being unusable for most of the models in timm. However, it is supported (just pushed a small fix for it) on https://github.com/rwightman/pytorch-image-models/tree/bits_and_tpu/timm/bits#readme — on that branch you can use --force-cpu to make validate or train use the CPU (in either PyTorch or PyTorch XLA). It'll fall back to CPU if there is no XLA or CUDA accelerator. I'd recommend looking into PyTorch XLA if you want to use the CPU; it's quite a bit faster. It has a steeper learning curve than standard PyTorch, though, in terms of setting it up and using it (best to use the docker container).
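The fallback order described above (XLA if available, then CUDA, then CPU, with a flag to force CPU) can be sketched roughly like this. This is an illustration, not the actual code on the bits_and_tpu branch; the function name `resolve_device` is made up for this example.

```python
import importlib.util


def resolve_device(force_cpu: bool = False) -> str:
    """Pick a device string: forced CPU, else XLA if torch_xla is
    installed, else CUDA if available, else fall back to CPU."""
    if force_cpu:
        return "cpu"
    # PyTorch XLA takes priority when the package is present.
    if importlib.util.find_spec("torch_xla") is not None:
        return "xla"
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"
```

With `--force-cpu` the branch effectively pins the result to `"cpu"` regardless of which accelerators are present.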
-
I tried changing a few lines in train.py to make it work on the CPU. However, it is slow, as suggested. The changes are simple: just make sure you add the following if statement in train_one_epoch; it is currently missing.
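The exact line isn't shown in the comment, but a likely candidate is guarding the CUDA-only calls in `train_one_epoch` (e.g. `torch.cuda.synchronize()`) so they are skipped on CPU. A minimal sketch of that kind of guard, with a helper name (`should_sync`) invented for illustration:

```python
def should_sync(device_type: str) -> bool:
    """Return True only for CUDA devices, so CPU-only runs skip
    the torch.cuda.synchronize() call entirely."""
    return device_type == "cuda"


# Inside train_one_epoch, an unconditional synchronize would become:
#
#     if should_sync(device.type):
#         torch.cuda.synchronize()
```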
-
Is there a feature to train only using the CPU?
I try and test a lot of experiments on my local machine, and only once something works on a small batch do I port the code to a GPU machine. For this, I need CPU-only training code.