The fine-tuning interface released so far only supports LoRA. If training resources are sufficient, is full-parameter fine-tuning possible? Would it be enough to comment out `model = get_peft_model(model, peft_config)` in `peft_lora.py`?
Full-parameter fine-tuning won't run in the current environment; this setup can't handle it, as there isn't enough GPU memory.
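(For rough scale, assuming a 6B-parameter model trained with mixed-precision Adam: the optimizer keeps about 16 bytes of state per parameter, i.e. fp16 weights and gradients plus fp32 master weights and two moment buffers, so full fine-tuning needs on the order of 96 GB of GPU memory before activations. LoRA trains only a few million adapter parameters, which is why it fits where full fine-tuning does not.)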
My resources should be sufficient, so would commenting out `model = get_peft_model(model, peft_config)` in `peft_lora.py` be enough?
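For illustration, here is a minimal sketch of that change, assuming `peft_lora.py` follows the standard `peft` pattern; the checkpoint name and LoRA hyperparameters below are placeholders, not this repo's actual values:

```python
from transformers import AutoModel
from peft import LoraConfig, TaskType, get_peft_model

# Placeholder checkpoint; substitute the model this repo actually fine-tunes.
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

# LoRA path (what peft_lora.py does now): wrap the base model so that only
# the small adapter matrices are trainable.
peft_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
)
# model = get_peft_model(model, peft_config)  # commented out for full fine-tuning

# Full-parameter path: leave the model unwrapped and make sure every weight
# requires gradients (some loading paths freeze parameters by default).
for param in model.parameters():
    param.requires_grad = True
```

Note that the optimizer then updates all parameters, so a learning rate tuned for LoRA is usually too high for full fine-tuning, and you will most likely need DeepSpeed ZeRO or a similar sharding strategy to fit the optimizer states even on large GPUs.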