
Is Llama 2 compatible for finetuning using lora? #482

Closed
palash04 opened this issue Jul 22, 2023 · 4 comments
Labels
bug Something isn't working

Comments

@palash04

I am trying to finetune Llama 2 using LoRA.

Following are the changes I made in the YAML:

```yaml
model:
  name: hf_causal_lm
  pretrained_model_name_or_path: meta-llama/Llama-2-7b-hf
  init_device: mixed
  pretrained: true
  tie_word_embeddings: true
  use_auth_token: true

# LORA
lora:
  args:
    r: 16
    lora_alpha: 32
    lora_dropout: 0.05
    target_modules: ['Wqkv']

# Tokenizer
tokenizer:
  name: meta-llama/Llama-2-7b-hf
  kwargs:
    model_max_length: ${max_seq_len}
```

Getting the following error:

> You are using a model of type llama to instantiate a model of type mpt. This is not supported for all configurations of models and can yield errors.
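Separately from the instantiation error above, note that `target_modules: ['Wqkv']` refers to MPT's fused attention projection; in the Hugging Face Llama implementation the attention projections are separate modules named `q_proj`, `k_proj`, `v_proj`, and `o_proj`. A sketch of what the LoRA block might look like for a Llama checkpoint, assuming target modules are matched against those Hugging Face module names:

```yaml
# Sketch only: module names assumed from the Hugging Face Llama
# implementation, where attention is split into separate projections
# rather than MPT's fused Wqkv.
lora:
  args:
    r: 16
    lora_alpha: 32
    lora_dropout: 0.05
    target_modules: ['q_proj', 'k_proj', 'v_proj']
```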

[Screenshot 2023-07-22 at 6.35.26 PM]
@palash04 palash04 added the bug Something isn't working label Jul 22, 2023
@creatorrr

Not working for me :(

@creatorrr

@danbider @dakinggg any pointers on how I can take a crack at implementing this?

@palash04
Author

palash04 commented Aug 2, 2023

@creatorrr, this is being fixed here: #435

@dakinggg
Collaborator

dakinggg commented Feb 2, 2024

Closed by #886, this should work now :)

@dakinggg dakinggg closed this as completed Feb 2, 2024