Is there a way to benchmark training with gradient checkpointing enabled?
I tried to pass `gradient_checkpointing` and `gradient_checkpointing_kwargs` (see the snippet below) as the `training_arguments` of `TrainingConfig`, but it doesn't seem to have any effect.
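A minimal sketch of the settings in question; only the dict is shown, since the surrounding `TrainingConfig` setup isn't included here:

```python
# The training_arguments dict as passed in the issue; use_reentrant=True selects
# the reentrant gradient-checkpointing implementation in PyTorch.
training_arguments = {
    "gradient_checkpointing": True,
    "gradient_checkpointing_kwargs": {"use_reentrant": True},
}
```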
Everything in training_arguments is passed directly to the TrainingArguments instance, which is passed to the trainer.
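As a quick sanity check, here is a minimal sketch of what that forwarding amounts to, assuming the dict is splatted verbatim into the constructor (`output_dir` below is just a placeholder):

```python
from transformers import TrainingArguments

# If training_arguments is forwarded verbatim, the trainer should end up with
# a TrainingArguments equivalent to this one.
args = TrainingArguments(
    output_dir="out",  # placeholder
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": True},
)

print(args.gradient_checkpointing)         # True
print(args.gradient_checkpointing_kwargs)  # {'use_reentrant': True}
```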
Thanks for your reply. This is what I was doing. I'll try again. If it still doesn't work, I'll share my full code.