[docs] Update README Quickstart to use CLI (#2137)
* update quickstart for new CLI
* add blurb about bleeding edge builds
* missed a yaml reference
* prefer lora over qlora for examples
* fix commands for parity with previous instructions
* consistency on pip/pip3 install
* one more parity pip=>pip3
* remove extraneous options in example yaml

Co-authored-by: NanoCode012 <[email protected]>

* update copy
* update badges and for discord and socials in readme
* Fix a few broken links
* bump version to 0.6.0 for release

---------

Co-authored-by: NanoCode012 <[email protected]>
1 parent ab4b321 · commit 34d3c8d · Showing 4 changed files with 136 additions and 37 deletions.
@@ -0,0 +1,74 @@
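# Example: LoRA fine-tune of Llama-3.2-1B on the Alpaca-format GPT4-LLM-Cleaned dataset.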
base_model: NousResearch/Llama-3.2-1B

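# Quantization off: the base model loads in half precision.
# (For QLoRA you would set load_in_4bit: true and adapter: qlora.)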
load_in_8bit: false
load_in_4bit: false
strict: false

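# Alpaca-format instruction dataset; 10% is held out for evaluation.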
datasets:
  - path: teknium/GPT4-LLM-Cleaned
    type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.1
output_dir: ./outputs/lora-out

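# Train a LoRA adapter instead of updating all base-model weights.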
adapter: lora
lora_model_dir:

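# Pack several short examples into each 2048-token sequence for better GPU utilization.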
sequence_len: 2048
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true

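# Rank-16 adapters with alpha 32 (an effective scale of alpha/r = 2.0),
# applied to every attention and MLP projection.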
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out:
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj

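# Weights & Biases logging stays disabled while these are empty.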
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

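# Effective batch size per GPU = micro_batch_size * gradient_accumulation_steps = 4.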
gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0002

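# bf16: auto picks bfloat16 automatically on GPUs that support it.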
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

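# Gradient checkpointing trades recompute for memory; FlashAttention speeds up attention.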
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

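# Watchdog: abort the run if training loss stays above 5.0 for 3 consecutive steps.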
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3

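# Warm up for 10 steps before the cosine schedule; evaluate 4x and save once per epoch.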
warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  pad_token: "<|end_of_text|>"
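The quickstart this commit documents runs a config like the one above through the new CLI entry point (on the order of `axolotl train config.yml`; see the updated README for the exact commands). To get a feel for how small the adapter is, here is a back-of-the-envelope parameter count in Python. The Llama-3.2-1B dimensions below are assumptions, not taken from this diff, so verify them against the model's config.json:

# A rough count of the trainable parameters this LoRA config adds.
# ASSUMED dims for Llama-3.2-1B: hidden=2048, intermediate=8192,
# 16 layers, GQA with 8 KV heads of head_dim 64 -- check config.json.
HIDDEN, INTERMEDIATE, N_LAYERS, KV_DIM = 2048, 8192, 16, 8 * 64
R = 16  # lora_r from the config above

# (in_features, out_features) of each projection in lora_target_modules
targets = {
    "q_proj": (HIDDEN, HIDDEN),
    "k_proj": (HIDDEN, KV_DIM),
    "v_proj": (HIDDEN, KV_DIM),
    "o_proj": (HIDDEN, HIDDEN),
    "gate_proj": (HIDDEN, INTERMEDIATE),
    "up_proj": (HIDDEN, INTERMEDIATE),
    "down_proj": (INTERMEDIATE, HIDDEN),
}

# Each adapted projection adds a pair of low-rank matrices:
# A (in_features x r) plus B (r x out_features).
per_layer = sum(f_in * R + R * f_out for f_in, f_out in targets.values())
print(f"~{per_layer * N_LAYERS / 1e6:.1f}M trainable params")  # ~11.3M

With these dims the adapter comes to roughly 11M trainable parameters against ~1.2B frozen base weights, about 1% of the model, which is why this config trains comfortably on a single modest GPU.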
@@ -1,3 +1,3 @@
"""Axolotl - Train and fine-tune large language models""" | ||
|
||
__version__ = "0.5.3.dev0" | ||
__version__ = "0.6.0" |