Refactoring Trainer, adds `save_only_model` arg and simplifying FSDP integration #27652
Conversation
1. Refactor FSDP.
2. Add `--save_only_model` option: when checkpointing, whether to save only the model or also the optimizer, scheduler and RNG state (see the sketch below).
3. Bump the minimum `accelerate` version to `0.21.0`.
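For illustration (not part of the PR's own text), a minimal sketch of how the new argument would be set; the values are placeholders:

```python
from transformers import TrainingArguments

# Illustrative values; `save_only_model` is the new argument added by this PR.
args = TrainingArguments(
    output_dir="out",
    save_strategy="steps",
    save_steps=500,
    # When checkpointing, keep only the model weights and skip the
    # optimizer, scheduler and RNG-state files.
    save_only_model=True,
)
```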
The documentation is not available anymore as the PR was closed or merged.
This reverts commit 149330a.
Thanks a million! Looks great to me, just one tiny doc nit for consistency
Co-authored-by: Zach Mueller <[email protected]>
Nice refactor and simplification of the code. I have a few comments, please take a look.
```diff
@@ -2462,12 +2396,49 @@ def _save_checkpoint(self, model, trial, metrics=None):
        else:
            torch.save(rng_states, os.path.join(output_dir, f"rng_state_{self.args.process_index}.pth"))

        if self.args.push_to_hub:
            self._push_from_checkpoint(output_dir)

    def _save_optimizer_and_scheduler(self, output_dir):
```
Hmm, it would be great if we could reuse accelerate code here, but it seems that there is no directly corresponding code. I wonder if we could achieve that with a bit of refactoring on the accelerate side @muellerzr (potentially in a future PR).
I looked into this during the earlier migration to the Accelerate backend, and again recently; I couldn't find a clean way of doing it on the Accelerate side. Maybe an offline discussion on this would help.
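For context, the logic being discussed is roughly of this shape; this is a simplified sketch, not the actual `Trainer` implementation, which also branches for DeepSpeed, FSDP, TPU and SageMaker:

```python
import os

import torch

# File names assumed for illustration.
OPTIMIZER_NAME = "optimizer.pt"
SCHEDULER_NAME = "scheduler.pt"


def save_optimizer_and_scheduler(optimizer, lr_scheduler, output_dir, is_main_process=True):
    """Dump optimizer and scheduler state dicts next to the model checkpoint (plain single-process/DDP case)."""
    if is_main_process:
        torch.save(optimizer.state_dict(), os.path.join(output_dir, OPTIMIZER_NAME))
        torch.save(lr_scheduler.state_dict(), os.path.join(output_dir, SCHEDULER_NAME))
```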
Co-authored-by: Benjamin Bossan <[email protected]>
Thanks for the refactor - looks a lot cleaner!
Regarding the `accelerate` version bump, we should make sure everything in the library is compatible with this change. @ydshieh I believe the CI images should have the version reflected in `setup.py`, and modifying that file triggers a run on everything - is that right? Is there anything else we should check before merging?
```diff
@@ -452,14 +459,18 @@ class TrainingArguments:
             FSDP's limit_all_gathers (useful only when `fsdp` field is passed).
             If `"True"`, FSDP explicitly synchronizes the CPU thread to prevent too many in-flight
             all-gathers.
-        - use_orig_params (`bool`, *optional*, defaults to `False`)
+        - use_orig_params (`bool`, *optional*, defaults to `True`)
```
Why was this set to `False` by default before - is there an advantage to not having this enabled? Asking in case this introduces a degraded experience for some users.
Hello, this is required now for simplifying the FSDP integration. Please find the explanation in the corresponding Accelerate PR: huggingface/accelerate#2177 (comment)
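For readers following along, the flag under discussion is the `use_orig_params` entry of the Trainer's `fsdp_config`. A minimal sketch of setting it explicitly, assuming the post-refactor key names; the other values are illustrative:

```python
from transformers import TrainingArguments

# Illustrative only: `use_orig_params` now defaults to True, which the simplified
# FSDP path relies on; it can still be set explicitly through `fsdp_config`.
args = TrainingArguments(
    output_dir="out",
    fsdp="full_shard auto_wrap",
    fsdp_config={"use_orig_params": True},
)
```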
Sorry - meant to hit approve. Happy to merge after @BenjaminBossan's approval and once @ydshieh confirms the library version checks.
Thanks for the updates. LGTM
Hi, the (Circle)CI did run all the tests.
The CI uses the latest `accelerate`.
Nothing extra to check.
`save_only_model` is a nice feature indeed, but it does not work together with `load_best_model_at_end` (at least with DeepSpeed enabled), since the final model cannot be loaded from the checkpoint.
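To make the reported combination concrete, here is a hypothetical configuration of the kind described above (illustrative only; newer `transformers` versions may also reject this combination outright at argument-validation time):

```python
from transformers import TrainingArguments

# Combination reported as problematic under DeepSpeed: checkpoints keep only the
# model weights, yet `load_best_model_at_end=True` needs to load the best model
# back from one of those checkpoints at the end of training.
args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    eval_steps=500,
    save_steps=500,
    load_best_model_at_end=True,
    save_only_model=True,
)
```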
What does this PR do?

1. Adds the `save_only_model` arg - this enables the feature request "Add an option to decide whether to store the checkpoint and rng_state" (#26706).
2. Refactors and simplifies the FSDP integration:
   a. Currently, the FSDP-XLA logic is custom in `Trainer` while normal FSDP uses Accelerate's integration. There were many zombie code snippets related to normal FSDP; cleaned those up.
   b. Made it easier to train with FSDP. When using the `FULL_STATE_DICT` setting, it should now save the model in transformers format using the default sharded safetensors format. This reduces the burden on users to later load, shard and save in safetensors format (see the loading sketch after this list).
   c. Should fix "Fine tuning with examples/pytorch/language-modeling/run_clm.py on torch/XLA + FSDP produce abnormal models" (#27432), but I don't have access to TPUs to test this.
   d. Fixes "NotImplementedError: Cannot copy out of meta tensor; no data!" (#27166).
   e. This is built upon the PR in Accelerate to simplify the FSDP integration (fsdp refactoring, huggingface/accelerate#2177), which should be merged first.
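As a small illustration of point 2b, a sketch of how a checkpoint saved this way could be reloaded; the path is a placeholder and this snippet is not part of the PR:

```python
from transformers import AutoModelForCausalLM

# Hypothetical checkpoint directory produced by Trainer with FSDP FULL_STATE_DICT:
# it now contains the usual config plus sharded `model-*.safetensors` files,
# so it loads directly without an extra conversion step.
model = AutoModelForCausalLM.from_pretrained("out/checkpoint-500")
```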