Add support for Gemma chat template #1530
Conversation
Could you give me an example output from running cli.preprocess with --debug? Does the bos repeat?
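For anyone reproducing this, the debug preprocessing run being asked about would look roughly like the following (the config path is a placeholder):

# Prints tokenized samples so the rendered Gemma turns (and any repeated <bos>) are visible.
python -m axolotl.cli.preprocess path/to/gemma_config.yml --debug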
@@ -28,7 +28,7 @@ scipy
 scikit-learn==1.2.2
 pynvml
 art
-fschat==0.2.36
+fschat @ git+https://github.com/lm-sys/FastChat.git@5095615810cf613dba7f27dd155f571fcff976d8
Would it be possible to use a version instead of a commit?
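For reference, once a FastChat release that includes the Gemma template is published, the pin could presumably return to a plain version specifier; the version below is hypothetical:

fschat==0.2.37  # hypothetical future release containing the Gemma template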
if self.system_message:
    raise ValueError("Gemma chat template does not support system messages")
for i, (role, message) in enumerate(self.messages):
    prefix = "<bos>" if i == 0 else ""
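For readers unfamiliar with the format, here is a minimal, self-contained sketch of the output this prefix logic is building toward; it illustrates Gemma's published instruct format and is not the exact code added to fastchat_conversation_turns.py:

def render_gemma_turns(messages):
    """Render (role, message) pairs in Gemma-instruct style (illustrative only)."""
    parts = []
    for i, (role, message) in enumerate(messages):
        prefix = "<bos>" if i == 0 else ""  # <bos> only on the very first turn
        parts.append(f"{prefix}<start_of_turn>{role}\n{message}<end_of_turn>\n")
    return "".join(parts)

# render_gemma_turns([("user", "Hello"), ("model", "Hi!")]) produces:
# <bos><start_of_turn>user\nHello<end_of_turn>\n<start_of_turn>model\nHi!<end_of_turn>\n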
I am a bit confused about this. I don't think you need to add bos manually like this. Axolotl should prepend it.
Since this is Gemma specific, and this function doesn't have access to the tokenizer, I think this is fine for now. In other formats in this function, we have literal "" strings.
* Add support for Gemma chat template
* Update fschat version to include its newest support for Gemma chat style
* Pin fastchat to current HEAD

Co-authored-by: Wing Lian <[email protected]>
Description
Supports the Gemma chat template for SFT. Currently, when users specify type: gemma in their YAML file, errors occur because the current version of axolotl doesn't support the Gemma chat template: there is no if self.sep_style == SeparatorStyle.GEMMA branch in fastchat_conversation_turns.py. I added the corresponding lines to fastchat_conversation_turns.py, which generate text matching Gemma-instruct's chat template. Besides, the latest FastChat release (0.2.36) doesn't support the Gemma chat template, so I updated requirements.txt to install the up-to-date FastChat from GitHub.
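For illustration, the kind of dataset entry the description refers to might look roughly like this; the dataset path is a placeholder, and the exact field names (type vs. conversation) should be checked against the axolotl docs:

datasets:
  - path: my-org/my-chat-dataset   # placeholder dataset
    type: sharegpt                 # sharegpt-style data rendered through FastChat
    conversation: gemma            # selects the Gemma chat template added here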
Motivation and Context
How has this been tested?
I tested it with
and it worked well.
Screenshots (if appropriate)
Types of changes
Social Handles (Optional)