Experimenting with Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA and QLoRA to fine-tune Large Language Models (LLMs). The aim is to understand how LLMs function, learn how LLM pipelines are built, and see how models are fine-tuned to adapt to specific tasks and data.
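To make the LoRA idea concrete, here is a minimal NumPy sketch (not code from this repository) of the core mechanism: the pretrained weight matrix W stays frozen, and training only updates two small low-rank matrices A and B whose product is added to the base output, scaled by alpha / r. The shapes below are arbitrary illustration values.

```python
import numpy as np

# Illustrative shapes: one frozen weight matrix of a hypothetical layer.
d_out, d_in, r, alpha = 512, 512, 8, 16   # r = LoRA rank, alpha = scaling

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight

# LoRA adapters: B starts at zero so training begins from the base model.
A = rng.normal(size=(r, d_in)) * 0.01     # trainable, small random init
B = np.zeros((d_out, r))                  # trainable, zero init

x = rng.normal(size=(d_in,))

# Forward pass: base output plus the scaled low-rank update.
base = W @ x
y = base + (alpha / r) * (B @ (A @ x))

# With B = 0, the adapted model exactly reproduces the base model.
assert np.allclose(y, base)

# Trainable parameters compared with full fine-tuning of W:
full = W.size
peft = A.size + B.size
print(f"trainable: {peft} vs full: {full} ({peft / full:.1%})")
```

At rank 8 the adapters hold 8,192 parameters against 262,144 in W (about 3%), which is the source of LoRA's memory savings: gradients and optimizer state are only needed for A and B.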
SohamD34/Tuna-tuner
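QLoRA extends LoRA by storing the frozen base weights in 4 bits and dequantizing them on the fly, while the LoRA adapters remain in full precision. The sketch below (an illustration only, using simple per-block absmax integer quantization rather than the NF4 data type from the QLoRA paper) shows the quantize/dequantize round trip and its bounded error:

```python
import numpy as np

def quantize_4bit(w, block=64):
    """Per-block absmax quantization to the signed 4-bit range [-7, 7]."""
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale):
    """Reconstruct approximate weights from codes and per-block scales."""
    return (q * scale).ravel()

rng = np.random.default_rng(0)
W = rng.normal(size=(4096,)).astype(np.float32)  # stand-in frozen weights

q, scale = quantize_4bit(W)
W_hat = dequantize_4bit(q, scale)

# Quantization is lossy, but the rounding error is at most half a step
# per block; the trainable LoRA adapters absorb the task-specific
# correction during fine-tuning.
err = np.abs(W - W_hat).max()
print(f"max abs error: {err:.4f}")
```

The 4-bit codes plus one scale per block take roughly an eighth of the memory of float32 weights, which is what lets QLoRA fine-tune large models on a single GPU.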