fix: ensure last checkpoint is always saved, refactor training stop conditions to be computed in single location #729
Issues
1. Inconsistent checkpoint filenames saved by trainer
In our pipeline we often have a sequence of steps such as (train, reshard/unflatten, evaluate). The output files of the training step become inputs to the resharding scripts. For the execution to work reliably, the output files need to have consistent filenames, such as
checkpoint_last-model_part-0-shard0.pt
When running metaseq.cli.train with tasks such as streaming_finetune_language_modeling there are two different stopping conditions, set by --max-epochs and --max-updates. Whichever limit is hit first causes the model to stop training.
The issue is that the checkpoint_last-* file is ONLY written when neither the epoch stop condition nor the update stop condition has fired. This couples the checkpoint filename to the stopping conditions.
Notice that checkpoints[0] only uses the FIRST true filename/condition: metaseq/metaseq/checkpoint_utils.py, lines 89 to 99 in c16d210.
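For context, here is a minimal sketch of that filename/condition pattern. It is illustrative only: the filenames, the condition arguments, and the training_finished flag are assumptions, not the exact metaseq code.

```python
import os
from collections import OrderedDict


def candidate_checkpoints(save_dir, epoch, updates, end_of_epoch, training_finished, suffix=""):
    # Map each candidate filename to the condition under which it should be saved.
    conds = OrderedDict()
    conds[f"checkpoint{epoch}{suffix}.pt"] = end_of_epoch
    conds[f"checkpoint_{epoch}_{updates}{suffix}.pt"] = not end_of_epoch
    # checkpoint_last is gated by a condition too, which is why it can be
    # skipped once a stop condition has fired.
    conds[f"checkpoint_last{suffix}.pt"] = not training_finished
    # Only filenames whose condition is true survive; the caller then works
    # with checkpoints[0], i.e. the FIRST filename whose condition was true.
    return [os.path.join(save_dir, fn) for fn, cond in conds.items() if cond]


# When training stops mid-epoch because an update limit was hit,
# checkpoint_last never makes it into the list:
print(candidate_checkpoints("save_dir", epoch=3, updates=5000,
                            end_of_epoch=False, training_finished=True))
```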
Goal
We want to be able to run the jobs/pipeline and change the stopping conditions without implicitly changing the output file that is handed to the subsequent commands/scripts.
2. Training stop was handled in multiple locations
Loop condition: metaseq/metaseq/cli/train.py, line 209 in c16d210
Loop break: metaseq/metaseq/cli/train.py, lines 212 to 213 in c16d210
This makes it harder to reason about which condition will cause training to stop.
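To make the split concrete, here is a toy loop with the same shape (an assumed paraphrase, not the actual train.py code): the stop decision lives partly in the while condition and partly in a break inside the body, so two places must agree on when training ends.

```python
def train_loop(max_epoch, max_update):
    """Toy reproduction of the split stop logic; the shape is assumed, not the real train.py."""
    epoch, num_updates = 1, 0
    while epoch <= max_epoch:                    # stop check #1: the while condition
        num_updates += 100                       # pretend each epoch runs 100 updates
        should_stop = num_updates >= max_update  # stop check #2: computed inside the body
        print(f"epoch={epoch} updates={num_updates} should_stop={should_stop}")
        if should_stop:
            break                                # second exit path out of the loop
        epoch += 1


# Exits via the break when the update limit wins, and via the while condition
# when the epoch limit wins: two different code paths for "training is done".
train_loop(max_epoch=3, max_update=250)
```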
Solution
- Compute the training stop conditions in a single location: validate_and_save and should_stop
- Use > instead of >= in the stop conditions
- Always save the checkpoint_last* file (see the sketch below)
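A minimal sketch of the intended shape: should_stop follows the name above, while candidate_checkpoints, the signatures, and the exact comparisons are illustrative assumptions, not the final diff.

```python
import os
from collections import OrderedDict


def should_stop(epoch, num_updates, max_epoch, max_update):
    # Single location for the stop decision; comparisons use > rather than >=
    # per the change listed above (the exact form here is an assumption).
    return epoch > max_epoch or num_updates > max_update


def candidate_checkpoints(save_dir, epoch, updates, end_of_epoch, suffix=""):
    conds = OrderedDict()
    conds[f"checkpoint{epoch}{suffix}.pt"] = end_of_epoch
    conds[f"checkpoint_{epoch}_{updates}{suffix}.pt"] = not end_of_epoch
    # checkpoint_last* is decoupled from the stop conditions: its condition is
    # simply True, so it is written on every save call, including the last one.
    conds[f"checkpoint_last{suffix}.pt"] = True
    return [os.path.join(save_dir, fn) for fn, cond in conds.items() if cond]
```

With this shape, changing --max-epochs or --max-updates changes when should_stop fires, but never which filenames the downstream reshard/evaluate steps look for.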
Testing
I wasn't able to test since this is merging into metaseq main instead of our fork's main.
I wanted to at least share the ideas. Changing training stop conditions can be serious, though, so maybe someone else can submit small test jobs: one with max-epochs, the other with max-updates, verifying that both save the checkpoint_last files.
Related to #726