Doing the loss reduction in foundry instead of in the loss functions. #1079
Merged: ShashankMosaicML merged 11 commits into mosaicml:main from ShashankMosaicML:reduce_loss_in_foundry on Apr 2, 2024
Conversation
ShashankMosaicML requested review from vchiley and dakinggg, and removed the request for vchiley, on April 1, 2024 at 23:40
vchiley reviewed on Apr 1, 2024
vchiley approved these changes on Apr 2, 2024
vchiley reviewed on Apr 2, 2024
dakinggg added a commit to dakinggg/llm-foundry that referenced this pull request on Apr 4, 2024:
…mosaicml#1079)

* setting loss_fn reduction to None
* fixing a unit test
* add error message
* adding test to check reduction
* adding test to check reduction
* Update llmfoundry/models/mpt/modeling_mpt.py (Co-authored-by: Vitaliy Chiley <[email protected]>)
* preserving batch dimension of targets
* minor change

Co-authored-by: Vitaliy Chiley <[email protected]>
Co-authored-by: Daniel King <[email protected]>
KuuCi pushed a commit that referenced this pull request on Apr 18, 2024
This PR gives us more flexibility to reduce losses in foundry in custom ways. The changes do not affect MFU or convergence for the 125M and 7B models, and memory consumption is also similar.
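The pattern described by the commit list ("setting loss_fn reduction to None", "preserving batch dimension of targets") can be sketched roughly as follows. This is a minimal illustration of moving the reduction out of the loss function and into the model code, not the actual llm-foundry implementation; the function names and the masked-mean reduction shown here are assumptions for the example.

```python
import torch
import torch.nn.functional as F

IGNORE_INDEX = -100  # conventional padding label for cross-entropy


def per_token_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Loss function with reduction='none': returns one loss per token.

    The output is reshaped back to targets.shape, preserving the batch
    dimension so the caller can apply any custom reduction.
    """
    losses = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        targets.view(-1),
        ignore_index=IGNORE_INDEX,
        reduction="none",
    )
    return losses.view(targets.shape)


def reduce_loss(token_losses: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Reduction done by the caller: mean over non-padding tokens."""
    mask = targets != IGNORE_INDEX
    # Ignored positions contribute a loss of 0, so summing and dividing by
    # the number of valid tokens reproduces the usual masked mean.
    return token_losses.sum() / mask.sum()


# Tiny example: batch of 2 sequences, 4 tokens each, vocab of 10.
logits = torch.randn(2, 4, 10)
targets = torch.randint(0, 10, (2, 4))
targets[0, -1] = IGNORE_INDEX  # mark one position as padding
loss = reduce_loss(per_token_loss(logits, targets), targets)
```

With this split, swapping in a different reduction (per-sequence means, loss weighting, etc.) only touches the caller, not the loss function itself.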