Commit

Defaulting keep_torch_compile to true.
ggoggam committed Dec 15, 2024
1 parent 28fecc2 commit 1c754d0
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions src/accelerate/accelerator.py
@@ -2601,7 +2601,7 @@ def pad_across_processes(self, tensor, dim=0, pad_index=0, pad_first=False):
         """
         return pad_across_processes(tensor, dim=dim, pad_index=pad_index, pad_first=pad_first)

-    def unwrap_model(self, model, keep_fp32_wrapper: bool = True, keep_torch_compile: bool = False):
+    def unwrap_model(self, model, keep_fp32_wrapper: bool = True, keep_torch_compile: bool = True):
         """
         Unwraps the `model` from the additional layer possible added by [`~Accelerator.prepare`]. Useful before saving
         the model.
@@ -2611,7 +2611,7 @@ def unwrap_model(self, model, keep_fp32_wrapper: bool = True, keep_torch_compile
                 The model to unwrap.
             keep_fp32_wrapper (`bool`, *optional*, defaults to `True`):
                 Whether to not remove the mixed precision hook if it was added.
-            keep_torch_compile (`bool`, *optional*, defaults to `False`):
+            keep_torch_compile (`bool`, *optional*, defaults to `True`):
                 Whether to not unwrap compiled model if compiled.
         Returns:
             `torch.nn.Module`: The unwrapped model.
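The behavioral effect of this default change can be sketched as follows. This is a minimal mock, not accelerate's implementation: `CompiledWrapper` is a hypothetical stand-in for the wrapper `torch.compile` places around a module (the real wrapper exposes the original module as `_orig_mod`), and the `unwrap_model` function below only mimics the `keep_torch_compile` semantics the diff changes.

```python
class CompiledWrapper:
    """Hypothetical stand-in for a torch.compile wrapper around a module."""

    def __init__(self, orig_mod):
        # torch.compile's real wrapper keeps the original module here too.
        self._orig_mod = orig_mod


def unwrap_model(model, keep_torch_compile: bool = True):
    """Mimic the new default: keep the compile wrapper unless the caller
    explicitly opts out with keep_torch_compile=False."""
    if not keep_torch_compile and isinstance(model, CompiledWrapper):
        return model._orig_mod
    return model


plain = object()                  # stands in for the original nn.Module
compiled = CompiledWrapper(plain)

# New default (keep_torch_compile=True): the wrapper survives unwrapping.
assert unwrap_model(compiled) is compiled
# The pre-commit behavior is still available by passing False explicitly.
assert unwrap_model(compiled, keep_torch_compile=False) is plain
```

In other words, after this commit a bare `accelerator.unwrap_model(model)` no longer strips the `torch.compile` wrapper; callers who relied on getting the original module back must now pass `keep_torch_compile=False`.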
