Hi, thanks for your effort and commitment to scalable GNNs; the memory issue really bugs me sometimes.
The AutoScale method seems like a cool approach to handling this issue, and if it supported multi-GPU training, training would be faster than ever!
Multi-GPU support seems hard to add, but I'm asking just in case :)
Thanks for your interest; I'm glad you like it. I will try to add multi-GPU support when I have some free time, and it shouldn't be too hard to add. The main thing to take care of is that replicated models must hold a shared history rather than individual ones, and that pushing to and pulling from the histories is synchronized across the model replicas as well.
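To illustrate the idea, here is a minimal CPU stand-in sketch (not the actual AutoScale/CUDA implementation; the buffer layout, worker function, and values are hypothetical): all worker processes write into one shared, lock-protected history buffer instead of each replica holding a private copy, with the lock standing in for the push/pull synchronization mentioned above.

```python
import multiprocessing as mp

def push_embeddings(rank, history, lock, dim):
    # Each replica pushes its updated node embeddings into the SAME
    # shared history buffer (here: node `rank` gets value rank + 1).
    with lock:  # synchronize pushes across replicas
        for j in range(dim):
            history[rank * dim + j] = float(rank + 1)

def run(num_workers=2, num_nodes=4, dim=3):
    # One shared history for all replicas: a flat (num_nodes x dim)
    # buffer in shared memory, instead of one history per model copy.
    history = mp.Array('d', num_nodes * dim)
    lock = mp.Lock()
    procs = [mp.Process(target=push_embeddings, args=(r, history, lock, dim))
             for r in range(num_workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return list(history)

if __name__ == '__main__':
    print(run())
```

In the real multi-GPU setting the same principle would apply, with the shared-memory array replaced by the pinned-memory histories and the lock replaced by proper inter-device synchronization.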