Add Sequence-Level KD #2220

Merged · 8 commits merged into huggingface:main on Oct 11, 2024
Conversation

mst272 (Contributor) commented Oct 11, 2024

What does this PR do?

In the original paper, they compared Sequence-Level KD, Supervised KD, and GKD (on-policy). In TRL's GKDTrainer, Supervised KD and GKD are already implemented, so I added Sequence-Level KD to GKDTrainer, controlled by a seq_kd parameter. A minimal usage sketch is shown below.
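
A minimal usage sketch, assuming the new seq_kd field is exposed on GKDConfig as described here (the values and output directory are placeholders, not defaults):

```python
from trl import GKDConfig

# Illustrative settings only: seq_kd=True makes the teacher generate the
# completions that the student is trained on (Sequence-Level KD).
training_args = GKDConfig(
    output_dir="gkd-seq-kd",
    seq_kd=True,  # new flag added in this PR
    lmbda=0.0,    # no on-policy (student-generated) data
    beta=0.0,     # forward-KL end of the generalized JSD interpolation
)
# The config is then passed to GKDTrainer as usual, e.g.
# GKDTrainer(model=student, teacher_model=teacher, args=training_args, ...).
```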

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

kashif (Collaborator) commented Oct 11, 2024

Thanks @mst272, can you also kindly add these options to the docstrings and the documentation of GKDTrainer?

mst272 (Contributor, Author) commented Oct 11, 2024

Hi @kashif, I've added these to the docstrings and the documentation.

Resolved review threads on: trl/trainer/gkd_config.py, docs/source/gkd_trainer.md
qgallouedec (Member) left a comment
LGTM, I leave the last word to @kashif

Co-authored-by: Quentin Gallouédec <[email protected]>
kashif merged commit 7f0d246 into huggingface:main on Oct 11, 2024
8 of 9 checks passed
moussaKam commented Nov 5, 2024

Hi there,

I don't really understand how this PR adds seq_kd. To my understanding, seq_kd should compute the standard cross-entropy between the student logits and the output generated by the teacher.

In this PR we are simply generating the teacher output and then computing the same generalized_jsd_loss. Am I missing something?

In the documentation it says:

seq_kd: controls whether to perform Sequence-Level KD (can be viewed as supervised FT on teacher-generated output). When seq_kd=True and lmbda=0.0, the loss reduces to supervised JSD, where the teacher generates output sequences and the student receives token-specific feedback on these sequences from the teacher.

But this is the definition of supervised KD, if I understand correctly.

kashif (Collaborator) commented Nov 5, 2024

@moussaKam So recall there are two other parameters apart from the seq_kd flag: lmbda, which you would set to zero, and beta, which interpolates between the forward and reverse KL. SeqKD would then be lmbda=0 and beta=0, if I am not mistaken. It's an interesting question, and what happens with other values for these two hyperparameters might need exploring. Or do you mean that, to be exact, we would need to replace the KL divergence with the cross-entropy loss?
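
For reference, a sketch of how the three regimes map onto the hyperparameters, as I read this discussion (not an official table; settings are illustrative):

```python
from trl import GKDConfig

# Supervised KD: train on the dataset's ground-truth outputs, forward KL.
supervised_kd = GKDConfig(output_dir="out", lmbda=0.0, beta=0.0, seq_kd=False)

# Sequence-Level KD: the teacher generates the completions, forward KL on them.
seq_kd = GKDConfig(output_dir="out", lmbda=0.0, beta=0.0, seq_kd=True)

# On-policy GKD: the student generates completions with probability lmbda;
# beta interpolates between forward (0) and reverse (1) KL via the generalized JSD.
on_policy_gkd = GKDConfig(output_dir="out", lmbda=1.0, beta=0.5, seq_kd=False)
```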

kashif (Collaborator) commented Nov 5, 2024

@moussaKam also note that in this case the KL divergence is the same as the CE up to a constant term, i.e. the entropy of the target, which we assume does not change.
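
To spell out the identity being referenced (added here for clarity): for a fixed target distribution $p$ (the teacher) and student distribution $q_\theta$,

$$
\mathrm{KL}(p \parallel q_\theta)
= \sum_x p(x)\,\log\frac{p(x)}{q_\theta(x)}
= \underbrace{-\sum_x p(x)\,\log q_\theta(x)}_{\text{cross-entropy}}
\;-\; \underbrace{H(p)}_{\text{constant in }\theta},
$$

so minimizing the forward KL and minimizing the cross-entropy give the same gradients for the student.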

moussaKam commented:
@kashif thanks for your reply. Yes, according to the definition from the paper:

Sequence-Level KD (Kim & Rush, 2016). SeqKD maximizes the likelihood of high probability sequences generated by the teacher, and can be viewed as supervised FT on teacher-generated output.

I understand that we should be computing the cross-entropy in the case of seq_kd. This is what we do in standard SFT, no?

moussaKam commented:
@kashif another point: if seq_kd is set to True, we are running the teacher inference twice (here and here). Do we really need that?

kashif (Collaborator) commented Nov 5, 2024

@moussaKam So in the first call we generate completions, and in the second we calculate the logits of those completions. I suppose we could do it once and keep track of it with a bunch of if-else, but I opted for cleaner logic here that works for any of the different hyperparameters. Any ideas on how to make it a bit more DRY?
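
For concreteness, a simplified sketch of the two teacher passes as I read them (this is not the actual trainer code; names are placeholders, and padding/masking is omitted):

```python
import torch

def seq_kd_teacher_passes(student, teacher, prompt_ids, max_new_tokens=128):
    # Pass 1: the teacher generates completions, which replace the dataset labels.
    with torch.no_grad():
        generated_ids = teacher.generate(
            input_ids=prompt_ids, max_new_tokens=max_new_tokens, do_sample=False
        )

    # Pass 2: a separate forward pass recomputes the teacher logits over those
    # completions, so the generalized JSD loss can compare them with the
    # student logits on the same tokens.
    with torch.no_grad():
        teacher_logits = teacher(input_ids=generated_ids).logits
    student_logits = student(input_ids=generated_ids).logits

    return student_logits, teacher_logits
```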

kashif (Collaborator) commented Nov 5, 2024

@moussaKam Mind you, there is an orthogonal abstraction I have been working on where, instead of the logits (which are assumed to come from the same vocabulary size for both the student and teacher), we allow the student and teacher to have different vocabularies: see #2263. I would welcome any thoughts on whether this should be a separate class.

moussaKam commented:
@kashif, we don't need to compute the teacher logits: we generate the output with the teacher, which becomes the new labels, then we run the student forward pass and compute the cross-entropy using just the teacher output tokens.

I can implement it this afternoon if that sounds good to you.
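
A rough sketch of that proposal, assuming the teacher-generated tokens simply become the labels for a standard SFT-style cross-entropy (function and variable names are hypothetical, and prompt-token masking is omitted):

```python
import torch
import torch.nn.functional as F

def seq_kd_cross_entropy(student, teacher, prompt_ids, max_new_tokens=128):
    # The teacher generates the target sequence once; its logits are never needed.
    with torch.no_grad():
        label_ids = teacher.generate(input_ids=prompt_ids, max_new_tokens=max_new_tokens)

    # Standard next-token cross-entropy of the student on the teacher output,
    # exactly as in supervised fine-tuning (optionally masking the prompt tokens).
    logits = student(input_ids=label_ids).logits[:, :-1, :]
    targets = label_ids[:, 1:]
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
```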
