How to use it with Multi GPU #1
@Hesene Hello, in my lab I only have a single 2080 Ti, so I cannot replicate this issue. I'm sorry about that!
OK, thank you for your code; it helped me a lot.
I'm facing the same problem.
Did you use torch.nn.DataParallel()?
No, I didn't, but I think it may work.
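For reference, wrapping a model in torch.nn.DataParallel is usually a one-line change. A minimal sketch; `TinyNet` is a hypothetical stand-in for the actual model, not code from this repo:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the model discussed in this thread.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyNet()
if torch.cuda.device_count() > 1:
    # Replicates the module on every visible GPU and splits the batch
    # along dim 0 across the replicas.
    model = nn.DataParallel(model)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

out = model(torch.randn(4, 3, 32, 32, device=device))
print(out.shape)  # torch.Size([4, 8, 32, 32])
```

On a single-GPU or CPU machine this falls through to the plain module, so the same script runs everywhere.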
I'm not sure, but I think you can try to integrate it.
I use EfficientNet as the backbone to train an object detection model, and nn.DataParallel() works fine; the only issue is that multi-GPU training is quite slow.
I'm seeing a similar issue when running with nn.DataParallel:
Any ideas? Thanks!
Hi, bro.
I suspect this problem is due to a module shared within Efficientunet, which ends up on only one GPU, perhaps the encoder…
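One common way a module ends up pinned to a single GPU is storing a tensor as a plain Python attribute instead of a registered buffer: nn.DataParallel replicates parameters and registered buffers onto each device, but not arbitrary attributes, so the plain tensor stays on whichever device created it. A CPU-runnable sketch with hypothetical names:

```python
import torch
import torch.nn as nn

class Sketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Registered buffer: moved by .to()/.cuda() and replicated
        # by nn.DataParallel along with the parameters.
        self.register_buffer("good", torch.zeros(1))
        # Plain attribute: invisible to the module machinery, so it
        # stays on its original device across replicas.
        self.bad = torch.zeros(1)

m = Sketch()
print("good" in dict(m.named_buffers()))  # True
print("bad" in dict(m.named_buffers()))   # False
```

On a multi-GPU machine, a tensor like `bad` created on cuda:0 would still be on cuda:0 inside the cuda:1 replica, which is exactly the kind of hidden sharing described above.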
I agree; I'm now facing the same problem.
@NPU-Franklin Franklin created a PR (#11) to support multiple GPUs. I don't have multiple cards, so I cannot test it, but maybe you can give it a try.
Thank you for sharing! When I run with a single GPU it works well, but when I run with multiple GPUs it raises this error:
RuntimeError: Function CatBackward returned an invalid gradient at index 1 - expected device cuda:1 but got cuda:0
Could you give some advice on this error?
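Not a confirmed diagnosis for this repo, but a CatBackward device-mismatch error generally means torch.cat received operands living on different devices (for a U-Net, often a skip connection left on another GPU). The usual fix is to align devices before concatenating:

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(2, 3)

# torch.cat requires all operands on the same device; mixing cuda:0
# and cuda:1 tensors here is what produces the CatBackward error.
b = b.to(a.device)  # align devices before concatenating
out = torch.cat([a, b], dim=1)
print(out.shape)  # torch.Size([2, 6])
```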