Multi-gpu parallel training #343
Replies: 3 comments 3 replies
-
I don't think multi-GPU usage exists currently, but one could presumably spin up two instances on two different GPUs.
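A minimal sketch of that workaround: pin each instance to its own GPU by setting `CUDA_VISIBLE_DEVICES` in the child process's environment, so each process only sees one device. The placeholder `echo` command stands in for the real instant-ngp invocation, which is an assumption here and depends on your setup.

```python
import os
import subprocess

def gpu_env(gpu_id):
    """Copy the current environment, restricting CUDA to one device."""
    env = dict(os.environ)
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    return env

def launch_on_gpu(gpu_id, cmd):
    """Spawn one process that can only see the given GPU."""
    return subprocess.Popen(cmd, env=gpu_env(gpu_id))

if __name__ == "__main__":
    # Hypothetical: one training instance per GPU, each on its own scene.
    # Swap the placeholder echo for the real instant-ngp command line.
    procs = [launch_on_gpu(i, ["echo", f"instance pinned to GPU {i}"])
             for i in (0, 1)]
    for p in procs:
        p.wait()
```

Note this gives two independent runs (e.g. two different scenes), not data-parallel training of a single model; gradients are not shared between the processes.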
-
NVIDIA/cutlass#43 is the closest upstream CUTLASS issue I can find about multi-GPU support in their routines. FullyFusedMLP is also per-GPU, to the best of my knowledge. Regarding the effect on training: I believe FullyFusedMLP (the faster path) requires tensor cores, whereas the CUTLASS-based path should not need them.
-
Does Instant-ngp support multi-GPU for the training phase [COLMAP + neural network training]? I'm a bit confused, since the FAQ says multi-GPU is only supported for VR rendering.
-
I have two 1080 Ti cards but no 3090, and the following error is reported at runtime:
Got cutlass error: Error Internal at: 363
Could not free memory: F:\Tutorial\ngp\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/gpu_memory.h:458 cudaDeviceSynchronize() failed with error operation not permitted when stream is capturing
How can I configure two GPUs for parallel training? I can only run when the aabb parameter is set to 4. Can two 1080 Tis achieve the same training results as a single 3090?