This repository has been archived by the owner on Dec 8, 2024. It is now read-only.

Gradient computation error #161

Open

siaavashZ opened this issue May 19, 2022 · 2 comments

Comments

siaavashZ commented May 19, 2022

Hello,
When I run the train.py module, I get this error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [3, 3]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
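
For context, this error is raised when a tensor that autograd saved for the backward pass is modified in place before backward() runs. A minimal sketch that reproduces the same message (not from this repo; the shape and epsilon value here are made up):

import torch

x = torch.randn(3, 3, requires_grad=True)
w = torch.relu(x)                 # autograd saves this output for ReluBackward0
w /= torch.sum(w, dim=0) + 1e-4   # in-place division bumps the saved tensor to version 1
loss = w.sum()
loss.backward()                   # RuntimeError: ... is at version 1; expected version 0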

@mengfei25

I met this issue too. Are there any solutions?

siaavashZ (Author) commented Jun 18, 2022

Hello,
I solved this. In the models directory, you should change lines 178 and 180 of the bifpn.py module.

Just replace this:

w1 /= torch.sum(w1, dim=0) + self.eps  # normalize
w2 /= torch.sum(w2, dim=0) + self.eps  # normalize

with this:

w1 = w1 / (torch.sum(w1, dim=0) + self.eps)  # normalize
w2 = w2 / (torch.sum(w2, dim=0) + self.eps)  # normalize
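
The out-of-place form builds a new tensor instead of overwriting the ReLU output that autograd saved, so the version check during backward() passes. A rough sketch of the corrected fast-normalized-fusion pattern in isolation (class and attribute names here are assumptions, not copied from bifpn.py):

import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Hypothetical two-input fast normalized fusion; not the repo's module."""
    def __init__(self, eps=1e-4):
        super().__init__()
        self.eps = eps
        self.w1 = nn.Parameter(torch.ones(2))

    def forward(self, a, b):
        w1 = torch.relu(self.w1)
        # out-of-place division: the ReLU output saved for ReluBackward0 stays at version 0
        w1 = w1 / (torch.sum(w1, dim=0) + self.eps)  # normalize
        return w1[0] * a + w1[1] * b

fuse = WeightedFusion()
out = fuse(torch.randn(1, 8, 4, 4), torch.randn(1, 8, 4, 4))
out.mean().backward()  # no in-place error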

siaavashZ changed the title from "gradient computation error" to "Gradient computation error" on Jun 18, 2022