
Add support for Gemma chat template #1530

Merged (3 commits) on Apr 21, 2024

Conversation

@Haoxiang-Wang (Contributor) commented Apr 17, 2024

Description

Adds support for the Gemma chat template for SFT. Currently, when users specify `conversation: gemma` in their YAML file, e.g.,

base_model: google/gemma-2b-it
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

datasets:
  - path: HuggingFaceH4/ultrachat_200k
    conversation: gemma
    type: sharegpt.load_ultrachat
    split: "train_sft"
    train_on_split: "train_sft"

errors will occur, because the current version of axolotl doesn't support the Gemma chat template: there is no `if self.sep_style == SeparatorStyle.GEMMA` branch in fastchat_conversation_turns.py.

I added the following lines to fastchat_conversation_turns.py:

    if self.sep_style == SeparatorStyle.GEMMA:
        if self.system_message:
            raise ValueError("Gemma chat template does not support system messages")
        for i, (role, message) in enumerate(self.messages):
            prefix = "<bos>" if i == 0 else ""
            message_str = message if message else ""
            yield prefix + "<start_of_turn>" + role + "\n", message_str + "<end_of_turn>\n"
        return

which generates text matching Gemma-instruct's chat template.
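To make the rendered format concrete, here is a minimal standalone sketch of the same yield logic applied to a two-turn conversation (the function name `gemma_turns` and the sample messages are illustrative, not axolotl's actual API):

```python
# Standalone sketch of the SeparatorStyle.GEMMA branch above.
def gemma_turns(messages, system_message=None):
    if system_message:
        raise ValueError("Gemma chat template does not support system messages")
    for i, (role, message) in enumerate(messages):
        prefix = "<bos>" if i == 0 else ""  # bos only before the first turn
        message_str = message if message else ""
        yield prefix + "<start_of_turn>" + role + "\n", message_str + "<end_of_turn>\n"

messages = [("user", "Hello"), ("model", "Hi there!")]
text = "".join(role_part + msg_part for role_part, msg_part in gemma_turns(messages))
print(text)
# <bos><start_of_turn>user
# Hello<end_of_turn>
# <start_of_turn>model
# Hi there!<end_of_turn>
```

The `<start_of_turn>`/`<end_of_turn>` markers and roles `user`/`model` are the Gemma-instruct turn format this branch is meant to reproduce.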

Also, the latest FastChat release (0.2.36) doesn't support the Gemma chat template, so I updated requirements.txt to install the up-to-date FastChat from GitHub.

Motivation and Context

How has this been tested?

I tested it with the same YAML configuration as in the description above, and it worked well.

Screenshots (if appropriate)

Types of changes

Social Handles (Optional)

requirements.txt (review thread, outdated, resolved)
@NanoCode012 (Collaborator) left a comment:
Could you give me an example output from running cli.preprocess with --debug? Does the bos repeat?

@@ -28,7 +28,7 @@ scipy
 scikit-learn==1.2.2
 pynvml
 art
-fschat==0.2.36
+fschat @ git+https://github.com/lm-sys/FastChat.git@5095615810cf613dba7f27dd155f571fcff976d8
Collaborator:

Would it be possible to use a version instead of a commit?

Collaborator:

[screenshot: FastChat release history, Apr 21, 2024] Unfortunately, FastChat hasn't cut a release in a while either.

    if self.system_message:
        raise ValueError("Gemma chat template does not support system messages")
    for i, (role, message) in enumerate(self.messages):
        prefix = "<bos>" if i == 0 else ""
Collaborator:

I am a bit confused about this. I don't think you need to add bos manually like this. Axolotl should prepend it.
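The reviewer's concern can be shown with a small string-level sketch (names like `template_output` and the `tokenizer_adds_bos` flag are hypothetical, for illustration only): if the template already emits `<bos>` and the tokenizer or collator prepends BOS as well, the sequence starts with a doubled BOS.

```python
# Hypothetical illustration of the double-BOS concern: the template output
# already begins with "<bos>", and a tokenizer that prepends BOS itself
# would duplicate it.
template_output = "<bos><start_of_turn>user\nHello<end_of_turn>\n"
tokenizer_adds_bos = True  # assumption for the sake of the example

final_text = ("<bos>" if tokenizer_adds_bos else "") + template_output
print(final_text.count("<bos>"))  # 2 -- the repetition being asked about
```

This is exactly what the earlier request to check `cli.preprocess --debug` output is probing for.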

Collaborator:

Since this is Gemma-specific, and this function doesn't have access to the tokenizer, I think this is fine for now. In other formats in this function, we have literal "" strings.

@winglian force-pushed the gemma-chat-template branch from 8a9f1e3 to 9471994 on April 21, 2024 at 20:57
@winglian winglian merged commit 60f5ce0 into axolotl-ai-cloud:main Apr 21, 2024
7 checks passed
djsaunde pushed a commit that referenced this pull request Dec 17, 2024
* Add support for Gemma chat template

* Update fschat version to include its newest support for Gemma chat style

* pin fastchat to current HEAD

---------

Co-authored-by: Wing Lian <[email protected]>