
Training Time #2

Open
sarapieri opened this issue Jun 17, 2023 · 0 comments


sarapieri commented Jun 17, 2023

Training Time

Hi,
I was wondering which setting the 2-hour training time reported in the paper refers to:

[screenshot: training time reported in the paper]

Is this the time for fine-tuning lora-shepherd-7b on a single client only, or does it refer to a different setting?

On my side, the provided implementation takes around 13.5 hours with 5 clients per round (0.05 participation over 100 clients in total, on Databricks-dolly-15k) and 20 communication rounds on a single NVIDIA A100 GPU.
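For reference, here is a minimal back-of-envelope sketch (in Python) of what that 13.5-hour run implies per local client update, assuming the sampled clients train sequentially on the single A100; all numbers come from the setup above, and the per-update figure is derived, not measured:

```python
# Back-of-envelope check of the observed wall-clock time.
# Setup numbers are taken from this issue; the per-client
# estimate assumes sequential local updates on one A100.

clients_total = 100      # total clients in the federation
participation = 0.05     # fraction of clients sampled per round
rounds = 20              # communication rounds
observed_hours = 13.5    # measured wall-clock time

clients_per_round = int(clients_total * participation)  # 5
client_updates = clients_per_round * rounds             # 100 local updates in total

minutes_per_update = observed_hours * 60 / client_updates
print(f"{client_updates} local updates -> ~{minutes_per_update:.1f} min per update")
# 100 local updates -> ~8.1 min per update
```

If the paper's 2 hours instead corresponds to a single client (or a smaller number of rounds), that would roughly explain the gap, which is why I am asking about the exact setting.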
