Describe the bug
During Step 2 (Reward Model) of DeepSpeed-Chat, an AssertionError occurs in the backward pass under ZeRO stage 3 when gradient_checkpointing is enabled; with gradient_checkpointing disabled, training works.
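For context, the reward-model script combines HF gradient checkpointing with a ZeRO stage 3 DeepSpeed config roughly along these lines. This is a minimal sketch of the failing setup; the model name, batch size, and optimizer settings below are placeholders, not the exact values from run_bloom.py:

# Minimal sketch of the failing setup (placeholder model and config values).
import deepspeed
from transformers import AutoModelForSequenceClassification

ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "fp16": {"enabled": True},
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-5}},
    "zero_optimization": {"stage": 3},  # parameter/gradient partitioning
}

model = AutoModelForSequenceClassification.from_pretrained("bigscience/bloom-560m", num_labels=1)
model.gradient_checkpointing_enable()  # disabling this line makes backward() succeed

engine, _, _, _ = deepspeed.initialize(model=model, model_parameters=model.parameters(), config=ds_config)
# in the training loop: loss = engine(**batch).loss; engine.backward(loss) -> AssertionError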
Log output
Traceback (most recent call last):
File "run_bloom.py", line 49, in<module>main()
File "run_bloom.py", line 45, in main
trainer.train()
File "trainer.py", line 177, in train
model.backward(loss)
File "/miniconda3/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/miniconda3/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 1929, in backward
self.optimizer.backward(loss, retain_graph=retain_graph)
File "/miniconda3/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/miniconda3/lib/python3.9/site-packages/deepspeed/runtime/zero/stage3.py", line 2094, in backward
self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
File "/miniconda3/lib/python3.9/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward
scaled_loss.backward(retain_graph=retain_graph)
File "/miniconda3/lib/python3.9/site-packages/torch/_tensor.py", line 396, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/miniconda3/lib/python3.9/site-packages/torch/autograd/__init__.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/miniconda3/lib/python3.9/site-packages/torch/autograd/function.py", line 253, in apply
return user_fn(self, *args)
File "/miniconda3/lib/python3.9/site-packages/torch/utils/checkpoint.py", line 146, in backward
torch.autograd.backward(outputs_with_grad, args_with_grad)
File "/miniconda3/lib/python3.9/site-packages/torch/autograd/__init__.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/miniconda3/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/miniconda3/lib/python3.9/site-packages/deepspeed/runtime/zero/stage3.py", line 1074, in reduce_partition_and_remove_grads
self.reduce_ready_partitions_and_remove_grads(param, i)
File "/miniconda3/lib/python3.9/site-packages/deepspeed/runtime/zero/stage3.py", line 1369, in reduce_ready_partitions_and_remove_grads
self.reduce_independent_p_g_buckets_and_remove_grads(param, i)
File "/miniconda3/lib/python3.9/site-packages/deepspeed/runtime/zero/stage3.py", line 1109, in reduce_independent_p_g_buckets_and_remove_grads
self.__reduce_and_partition_ipg_grads()
File "/miniconda3/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/miniconda3/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/miniconda3/lib/python3.9/site-packages/deepspeed/runtime/zero/stage3.py", line 1146, in __reduce_and_partition_ipg_grads
assert len(set(p.ds_id for p in self.params_in_ipg_bucket)) == len(self.params_in_ipg_bucket)
AssertionError
I've added some prints before the assertion in stage3.py (sketched below).
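A sketch of the kind of prints meant here, inserted directly above the failing assertion in __reduce_and_partition_ipg_grads() (the assertion checks that no parameter, identified by ds_id, appears twice in the bucket); the exact print statements are hypothetical:

# Hypothetical debug prints in deepspeed/runtime/zero/stage3.py,
# placed just above the assertion in __reduce_and_partition_ipg_grads().
ds_ids = [p.ds_id for p in self.params_in_ipg_bucket]
print(f"params_in_ipg_bucket ds_ids: {ds_ids}")
print(f"unique ds_ids: {len(set(ds_ids))}, bucket size: {len(ds_ids)}")
assert len(set(p.ds_id for p in self.params_in_ipg_bucket)) == len(self.params_in_ipg_bucket)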
ds_report output
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-devel package with yum
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.0
[WARNING] using untested triton version (2.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch version .................... 2.0.1
deepspeed info ................... 0.11.1, unknown, unknown
torch cuda version ............... 11.8
torch hip version ................ None
nvcc version ..................... 11.1
deepspeed wheel compiled w. ...... torch 2.0, cuda 11.8
shared memory (/dev/shm) size .... 125.87 GB
bcol23 changed the title from "[BUG] params_in_ipg_bucket AssertionError in backward for stage 3" to "[BUG] params_in_ipg_bucket AssertionError in backward if gradient_checkpointing is enabled" on Oct 17, 2023.