[Feature Proposal] Distributed Training for FL #330
Comments
We don't currently provide multi-GPU support in the standalone module.
To further illustrate the idea, let K be the number of clients selected for training in a round and N the number of available GPUs:
When K == N, each selected client is allocated to its own GPU for training. When K > N, multiple clients are allocated to the same GPU and execute their training sequentially on it. When K < N, you can simply use fewer GPUs for training. We also need a way to set the number of GPUs to use. The implementation is still in progress. Would anybody like to help?
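A minimal sketch of how such an allocation could look, assuming PyTorch and `torch.multiprocessing`; the names here (`train_clients_on_device`, `run_round`, the dummy per-client model) are hypothetical placeholders and not the framework's actual API:

```python
# Sketch (not the framework's actual implementation) of mapping K selected
# clients onto N GPUs for one standalone training round.
import torch
import torch.multiprocessing as mp


def train_clients_on_device(gpu_id, client_ids):
    """Run the clients assigned to one GPU sequentially (covers the K > N case)."""
    device = torch.device(f"cuda:{gpu_id}" if torch.cuda.is_available() else "cpu")
    for cid in client_ids:
        # Placeholder for a client's real local-update routine.
        model = torch.nn.Linear(10, 2).to(device)
        print(f"client {cid} -> local training on {device}")


def run_round(selected_clients, num_gpus=None):
    k = len(selected_clients)
    available = num_gpus if num_gpus is not None else torch.cuda.device_count()
    # Use at most K workers (the K < N case) and at least one worker on CPU-only machines.
    n = max(1, min(available, k))

    # Round-robin assignment: K == N gives one client per GPU,
    # K > N makes several clients share a GPU and run sequentially.
    buckets = {g: [] for g in range(n)}
    for i, cid in enumerate(selected_clients):
        buckets[i % n].append(cid)

    procs = [mp.Process(target=train_clients_on_device, args=(g, cids))
             for g, cids in buckets.items()]
    for p in procs:
        p.start()
    for p in procs:
        p.join()


if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)  # needed when child processes use CUDA
    run_round(selected_clients=list(range(8)))  # e.g. K = 8 selected clients
```

The idea is one worker process per GPU, so each client's local update stays on a single device, while the round-robin buckets cover all three K/N cases described above.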
I'm very interested in the feature you mentioned. Is there any code available that implements it?
As the title describes, does standalone mode support multiple GPUs to speed up training?