I get the following error while running pretrain.py. I'm running this code on 16 CPUs, no GPU. How do I solve this error?
```
Traceback (most recent call last):
  File "pretrain.py", line 70, in <module>
    loss.backward()
  File "/home/naman_churiwala_quantiphi_com/anaconda3/envs/ChemGAN_1/lib/python2.7/site-packages/torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/naman_churiwala_quantiphi_com/anaconda3/envs/ChemGAN_1/lib/python2.7/site-packages/torch/autograd/__init__.py", line 84, in backward
    grad_tensors = _make_grads(tensors, grad_tensors)
  File "/home/naman_churiwala_quantiphi_com/anaconda3/envs/ChemGAN_1/lib/python2.7/site-packages/torch/autograd/__init__.py", line 28, in _make_grads
    raise RuntimeError("grad can be implicitly created only for scalar outputs")
RuntimeError: grad can be implicitly created only for scalar outputs
```
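For context: `loss.backward()` can only create the initial gradient implicitly when `loss` has exactly one element; any other shape raises this exact error. A minimal standalone reproduction (not from this repo) and the usual fix of reducing to a scalar first:

```python
import torch

x = torch.randn(4, requires_grad=True)
loss = x * 2  # non-scalar output, shape (4,)

try:
    loss.backward()  # raises: grad can be implicitly created only for scalar outputs
except RuntimeError as e:
    print(e)

# Fix: reduce to a scalar (or pass an explicit `gradient` argument).
loss.sum().backward()
print(x.grad)  # tensor([2., 2., 2., 2.])
```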
I don't think even that works, since the problem is the stereo function in `jtnn_vae` returning an empty tensor. This can be solved by a simple change:

In `jtnn_vae.py`, replace:

```python
if len(labels) == 0: return create_var(torch.Tensor(0)), 1.0
```

with:

```python
if len(labels) == 0: return create_var(torch.Tensor([0])), 1.0
```
I think the problem may already be fixed, but perhaps this will be helpful in the future.
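The difference between the two lines is subtle: `torch.Tensor(0)` treats `0` as a size and builds an empty tensor, while `torch.Tensor([0])` treats `[0]` as data and builds a one-element tensor, which autograd accepts as a scalar output. A quick standalone illustration:

```python
import torch

empty = torch.Tensor(0)        # 0 is a size -> shape (0,), numel 0
one_elem = torch.Tensor([0])   # [0] is data -> shape (1,), numel 1
print(empty.shape, one_elem.shape)

a = torch.zeros(0, requires_grad=True)
try:
    (a * 2).backward()  # empty output: implicit gradient cannot be created
except RuntimeError as e:
    print(e)

b = torch.zeros(1, requires_grad=True)
(b * 2).backward()      # numel == 1: treated as a scalar output, works
print(b.grad)           # tensor([2.])
```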
I have faced a similar problem; it occurs when none of the classes you want to predict appear in the GT labels for the current iteration. In my case, I was working on semantic segmentation with 8 classes [0,1,2,3,4,5,6,7], and class zero [0] was encoded as the ignore class. Thus, when the GT labels contain only class [0], the loss comes out empty, i.e. `tensor([], device='cuda:0', grad_fn=)`, which triggers "grad can be implicitly created only for scalar outputs".
A simple fix would be to skip the iteration, without doing a forward and backward pass, when the GT labels contain no class that the model is trying to predict.
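A sketch of that skip, assuming a hypothetical training step where class 0 is the ignore index (the function and variable names here are illustrative, not from this repo):

```python
import torch
import torch.nn.functional as F

IGNORE_INDEX = 0  # assumption: class 0 is encoded as the ignore class

def train_step(model, optimizer, images, gt_labels):
    # If every pixel is the ignore class, cross_entropy averages over zero
    # elements and yields an unusable loss, so skip the iteration entirely.
    if not (gt_labels != IGNORE_INDEX).any():
        return None

    optimizer.zero_grad()
    logits = model(images)  # (N, num_classes, H, W)
    loss = F.cross_entropy(logits, gt_labels, ignore_index=IGNORE_INDEX)
    loss.backward()
    optimizer.step()
    return loss.item()
```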