runtime error in training with DDP, Expected to mark a variable ready only once #122

Open
derby-ding opened this issue Dec 24, 2024 · 2 comments

Comments

@derby-ding

Hi everyone, while trying to train Boltz with the official processed data, we got a runtime error. The command we used was "TORCH_DISTRIBUTED_DEBUG=DETAIL CUDA_VISIBLE_DEVICES=2,3 python scripts/train/train.py scripts/train/configs/structure.yaml find_unused_parameters=True".

RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the forward function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop. 2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple checkpoint functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
[rank0]: Parameter at index 3633 with name structure_module.score_model.atom_attention_decoder.atom_decoder.diffusion_transformer.layers.2.transition.b_to_a.weight has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration.
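For context, this error pattern usually corresponds to reason 2 above: the same parameters take part in more than one reentrant backward pass, e.g. when the same submodule is wrapped in torch.utils.checkpoint more than once per step. A minimal hypothetical sketch of the failure pattern (illustrative only, not Boltz code):

```python
# repro_sketch.py -- hypothetical reproducer; run with:
#   torchrun --nproc_per_node=2 repro_sketch.py
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.checkpoint import checkpoint

class Reuse(nn.Module):
    def __init__(self):
        super().__init__()
        self.block = nn.Linear(8, 8)

    def forward(self, x):
        # The same parameters are used by two reentrant checkpoint
        # segments, so their DDP autograd hooks fire twice in one
        # backward pass -- "marked as ready twice".
        x = checkpoint(self.block, x, use_reentrant=True)
        return checkpoint(self.block, x, use_reentrant=True)

def main():
    dist.init_process_group("gloo")
    model = DDP(Reuse())
    # Raises: "Expected to mark a variable ready only once"
    model(torch.randn(4, 8)).sum().backward()

if __name__ == "__main__":
    main()
```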

@derby-ding derby-ding changed the title runtime error Expected to mark a variable ready only once runtime error in training with DDP, Expected to mark a variable ready only once Dec 24, 2024
@derby-ding
Author

Solved by modifying train.py as suggested in the error message: strategy = DDPStrategy(find_unused_parameters=cfg.find_unused_parameters, static_graph=True). Not sure whether this is the right way.
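For reference, a sketch of what that train.py change could look like (the surrounding Trainer setup is assumed, not copied from the repo; the import path may be pytorch_lightning or lightning.pytorch depending on the Lightning version):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.strategies import DDPStrategy

def build_trainer(cfg):
    """Build a Trainer whose DDP reducer treats the graph as static."""
    strategy = DDPStrategy(
        # Passed on the command line in this issue
        # (find_unused_parameters=True).
        find_unused_parameters=cfg.find_unused_parameters,
        # static_graph=True tells DDP that the same set of parameters
        # participates in every iteration, so hooks firing more than once
        # (e.g. through activation checkpointing) no longer raise.
        static_graph=True,
    )
    # Remaining Trainer arguments (devices, precision, ...) omitted.
    return Trainer(strategy=strategy)
```

One caveat: static_graph=True is only valid if the autograd graph really is identical across iterations; if the set of used parameters changes between steps, the flag should not be used.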

@gcorso
Collaborator

gcorso commented Dec 27, 2024

There were a couple of issues with the config files; could you verify this still occurs with the latest configs?
