Commit
update readme with updated cli commands
winglian committed Sep 13, 2023
1 parent b049704 commit 4d3ee48
Showing 1 changed file with 11 additions and 11 deletions.
README.md
@@ -76,11 +76,11 @@ pip3 install -e .[flash-attn]
pip3 install -U git+https://github.com/huggingface/peft.git

# finetune lora
-accelerate launch scripts/finetune.py examples/openllama-3b/lora.yml
+accelerate launch -m axolotl.cli.train examples/openllama-3b/lora.yml

# inference
-accelerate launch scripts/finetune.py examples/openllama-3b/lora.yml \
-    --inference --lora_model_dir="./lora-out"
+accelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \
+    --lora_model_dir="./lora-out"
```
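For context, the `lora.yml` passed to these commands is an ordinary axolotl config file. A minimal sketch of what such a LoRA config typically contains is below; the exact keys and values here are assumptions for illustration, so check `examples/openllama-3b/lora.yml` in the repository for the authoritative version:

```yaml
# Sketch of a LoRA finetune config (keys are illustrative assumptions,
# not copied verbatim from examples/openllama-3b/lora.yml)
base_model: openlm-research/open_llama_3b
load_in_8bit: true
adapter: lora            # train a LoRA adapter rather than full weights
lora_r: 8                # rank of the low-rank update matrices
lora_alpha: 16
lora_dropout: 0.05
datasets:
  - path: teknium/GPT4-LLM-Cleaned
    type: alpaca
output_dir: ./lora-out   # matches the --lora_model_dir used at inference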

## Installation
@@ -670,14 +670,14 @@ strict:

Run
```bash
-accelerate launch scripts/finetune.py your_config.yml
+accelerate launch -m axolotl.cli.train your_config.yml
```

#### Multi-GPU

You can optionally pre-tokenize the dataset with the following before finetuning:
```bash
-CUDA_VISIBLE_DEVICES="" accelerate ... --prepare_ds_only
+CUDA_VISIBLE_DEVICES="" accelerate launch -m axolotl.cli.train your_config.yml --prepare_ds_only
```
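The empty `CUDA_VISIBLE_DEVICES` is what forces the pre-tokenization pass onto the CPU: with no visible devices, anything that probes CUDA in the child process sees zero GPUs. The mechanism can be seen with plain Python, no axolotl needed:

```shell
# An empty CUDA_VISIBLE_DEVICES hides every GPU from the child process;
# under it, torch.cuda.is_available() would report False.
CUDA_VISIBLE_DEVICES="" python3 -c 'import os; print(repr(os.environ["CUDA_VISIBLE_DEVICES"]))'
# prints ''
```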

##### Config
@@ -716,30 +716,30 @@ Pass the appropriate flag to the train command:

- Pretrained LORA:
```bash
---inference --lora_model_dir="./lora-output-dir"
+python -m axolotl.cli.inference examples/your_config.yml --lora_model_dir="./lora-output-dir"
```
- Full weights finetune:
```bash
---inference --base_model="./completed-model"
+python -m axolotl.cli.inference examples/your_config.yml --base_model="./completed-model"
```
- Full weights finetune w/ a prompt from a text file:
```bash
-cat /tmp/prompt.txt | python scripts/finetune.py configs/your_config.yml \
-    --base_model="./completed-model" --inference --prompter=None --load_in_8bit=True
+cat /tmp/prompt.txt | python -m axolotl.cli.inference examples/your_config.yml \
+    --base_model="./completed-model" --prompter=None --load_in_8bit=True
```
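The `cat ... |` form works because the inference command reads the prompt from standard input. The plumbing is ordinary shell piping, sketched here with a stand-in Python one-liner in place of the axolotl module:

```shell
# Write a prompt to a file, then pipe it into a process that reads stdin --
# the same shape as piping a prompt into the inference command above.
printf 'hello world' > /tmp/prompt.txt
cat /tmp/prompt.txt | python3 -c 'import sys; print(sys.stdin.read().upper())'
# prints HELLO WORLD
```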

### Merge LORA to base

Add the flags below to the train command above

```bash
---merge_lora --lora_model_dir="./completed-model" --load_in_8bit=False --load_in_4bit=False
+python3 -m axolotl.cli.merge_lora examples/your_config.yml --lora_model_dir="./completed-model" --load_in_8bit=False --load_in_4bit=False
```

If you run out of CUDA memory, you can try to merge in system RAM with

```bash
-CUDA_VISIBLE_DEVICES="" python3 scripts/finetune.py ...
+CUDA_VISIBLE_DEVICES="" python3 -m axolotl.cli.merge_lora ...
```

## Common Errors 🧰
