System Info
Information
Tasks
One of the scripts in the examples/ folder of Accelerate or an officially supported no_trainer script in the examples folder of the transformers repo (such as run_no_trainer_glue.py)
My own task or dataset (give details below)

Reproduction
When the model is wrapped in both a distributed wrapper (e.g. DistributedDataParallel or DeepSpeedEngine) and a compiled module (e.g. OptimizedModule), calling Accelerator.unwrap_model should return the fully unwrapped model (a plain torch.nn.Module). Instead, it currently returns the distributed wrapper.
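
No reproduction script was attached; a minimal sketch of the setup described above might look like the following. It assumes a distributed launch (e.g. via accelerate launch) so that prepare() adds the DistributedDataParallel wrapper, and uses a toy Linear model purely for illustration:

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(8, 8)

# Distributed wrapper first, then the compile wrapper on top, so the final
# structure is OptimizedModule(DistributedDataParallel(Linear)).
model = accelerator.prepare(model)
model = torch.compile(model)

unwrapped = accelerator.unwrap_model(model)
print(type(unwrapped))
# Expected: <class 'torch.nn.modules.linear.Linear'>
# Actual:   the distributed wrapper (e.g. DistributedDataParallel)
```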

Expected behavior
Accelerator.unwrap_model should return the fully unwrapped model, i.e. a plain torch.nn.Module.
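
In other words, for the sketch above one would expect both wrappers to be stripped (hypothetical assertions on the toy Linear model):

```python
unwrapped = accelerator.unwrap_model(model)
# Both the compile wrapper and the distributed wrapper should be removed,
# leaving the original module.
assert isinstance(unwrapped, torch.nn.Linear)
```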