Feat: update doc (#1475) [skip ci]
* feat: update doc contents

* chore: move batch vs ga docs

* feat: update lambdalabs instructions

* fix: refactor dev instructions
NanoCode012 authored Apr 4, 2024
1 parent 5760099 commit c2b64e4
Showing 6 changed files with 116 additions and 113 deletions.
84 changes: 6 additions & 78 deletions README.md
@@ -221,23 +221,17 @@ For cloud GPU providers that support docker images, use [`winglian/axolotl-cloud
python get-pip.py
```

3. Install torch
```bash
pip3 install -U torch --index-url https://download.pytorch.org/whl/cu118
```
3. Install PyTorch: https://pytorch.org/get-started/locally/

4. Axolotl
```bash
git clone https://github.com/OpenAccess-AI-Collective/axolotl
cd axolotl
4. Follow the quickstart instructions.

pip3 install packaging
pip3 install -e '.[flash-attn,deepspeed]'
5. Run
```bash
pip3 install protobuf==3.20.3
pip3 install -U --ignore-installed requests Pillow psutil scipy
```

5. Set path
6. Set path
```bash
export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
```
@@ -389,66 +383,6 @@ See [examples](examples) for quick start. It is recommended to duplicate and mod
See [these docs](docs/config.qmd) for all config options.
<details>
<summary> Understanding batch size and gradient accumulation steps </summary>
<br/>
Gradient accumulation means accumulating gradients over several mini-batches and updating the model weights afterward. When the samples in each batch are diverse, this technique doesn't significantly impact learning.
This method allows for effective training with larger effective batch sizes without needing proportionally larger memory. Here's why:
1. **Memory Consumption with Batch Size**: The primary reason increasing the batch size impacts memory is due to the storage requirements for intermediate activations. When you forward propagate a batch through a network, you have to store the activations at each layer for each sample in the batch, because these activations are used during backpropagation to compute gradients. Therefore, larger batches mean more activations, leading to greater GPU memory consumption.
2. **Gradient Accumulation**: With gradient accumulation, you're effectively simulating a larger batch size by accumulating gradients over several smaller batches (or micro-batches). However, at any given time, you're only forward and backward propagating a micro-batch. This means you only store activations for the micro-batch, not the full accumulated batch. As a result, you can simulate the effect of a larger batch size without the memory cost of storing activations for a large batch.
**Example 1:**
Micro batch size: 3
Gradient accumulation steps: 2
Number of GPUs: 3
Total batch size = 3 * 2 * 3 = 18
```
| GPU 1 | GPU 2 | GPU 3 |
|----------------|----------------|----------------|
| S1, S2, S3 | S4, S5, S6 | S7, S8, S9 |
| e1, e2, e3 | e4, e5, e6 | e7, e8, e9 |
|----------------|----------------|----------------|
| → (accumulate) | → (accumulate) | → (accumulate) |
|----------------|----------------|----------------|
| S10, S11, S12 | S13, S14, S15 | S16, S17, S18 |
| e10, e11, e12 | e13, e14, e15 | e16, e17, e18 |
|----------------|----------------|----------------|
| → (apply) | → (apply) | → (apply) |

Accumulated gradient for the weight w1 after the second iteration (considering all GPUs):
Total gradient for w1 = e1 + e2 + e3 + e4 + e5 + e6 + e7 + e8 + e9 + e10 + e11 + e12 + e13 + e14 + e15 + e16 + e17 + e18

Weight update for w1:
w1_new = w1_old - learning rate × (Total gradient for w1 / 18)
```

**Example 2:**
Micro batch size: 2
Gradient accumulation steps: 1
Number of GPUs: 3
Total batch size = 2 * 1 * 3 = 6

```
| GPU 1 | GPU 2 | GPU 3 |
|-----------|-----------|-----------|
| S1, S2 | S3, S4 | S5, S6 |
| e1, e2 | e3, e4 | e5, e6 |
|-----------|-----------|-----------|
| → (apply) | → (apply) | → (apply) |
Accumulated gradient for the weight w1 (considering all GPUs):
Total gradient for w1 = e1 + e2 + e3 + e4 + e5 + e6
Weight update for w1:
w1_new = w1_old - learning rate × (Total gradient for w1 / 6)
```

</details>

### Train
Run
@@ -678,14 +612,8 @@ Bugs? Please check the [open issues](https://github.com/OpenAccess-AI-Collective

PRs are **greatly welcome**!

Please run below to setup env
Please run the quickstart instructions, followed by the below, to set up the environment:
```bash
git clone https://github.com/OpenAccess-AI-Collective/axolotl
cd axolotl
pip3 install packaging ninja
pip3 install -e '.[flash-attn,deepspeed]'
pip3 install -r requirements-dev.txt -r requirements-tests.txt
pre-commit install
59 changes: 59 additions & 0 deletions docs/batch_vs_grad.qmd
@@ -0,0 +1,59 @@
---
title: Batch size vs Gradient accumulation
description: Understanding batch size and gradient accumulation steps
---

Gradient accumulation means accumulating gradients over several mini-batches and updating the model weights afterward. When the samples in each batch are diverse, this technique doesn't significantly impact learning.

This method allows for effective training with larger effective batch sizes without needing proportionally larger memory. Here's why:

1. **Memory Consumption with Batch Size**: The primary reason increasing the batch size impacts memory is due to the storage requirements for intermediate activations. When you forward propagate a batch through a network, you have to store the activations at each layer for each sample in the batch, because these activations are used during backpropagation to compute gradients. Therefore, larger batches mean more activations, leading to greater GPU memory consumption.

2. **Gradient Accumulation**: With gradient accumulation, you're effectively simulating a larger batch size by accumulating gradients over several smaller batches (or micro-batches). However, at any given time, you're only forward and backward propagating a micro-batch. This means you only store activations for the micro-batch, not the full accumulated batch. As a result, you can simulate the effect of a larger batch size without the memory cost of storing activations for a large batch.
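The mechanics above can be sketched in a few lines of plain Python. This is a toy model with hand-written per-sample gradients, not real tensors; the function names are illustrative only. Note that only one micro-batch's worth of per-sample work is "live" at a time, yet the final update is identical to a full-batch step:

```python
def per_sample_grad(sample, w):
    # hypothetical per-sample gradient for a toy linear model y = w * x
    x, y = sample
    return 2 * (w * x - y) * x

def train_step(micro_batches, w, lr):
    """Accumulate gradients micro-batch by micro-batch, then apply one update."""
    n = sum(len(mb) for mb in micro_batches)
    acc = 0.0
    for mb in micro_batches:                      # only this micro-batch is "live"
        acc += sum(per_sample_grad(s, w) for s in mb)
    return w - lr * acc / n                       # single optimizer step

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
w_accum = train_step([data[:2], data[2:]], w=0.0, lr=0.1)  # 2 micro-batches of 2
w_full = train_step([data], w=0.0, lr=0.1)                 # one batch of 4
print(w_accum == w_full)  # True: same update, smaller peak memory footprint
```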

**Example 1:**
Micro batch size: 3
Gradient accumulation steps: 2
Number of GPUs: 3
Total batch size = 3 * 2 * 3 = 18

```
| GPU 1 | GPU 2 | GPU 3 |
|----------------|----------------|----------------|
| S1, S2, S3 | S4, S5, S6 | S7, S8, S9 |
| e1, e2, e3 | e4, e5, e6 | e7, e8, e9 |
|----------------|----------------|----------------|
| → (accumulate) | → (accumulate) | → (accumulate) |
|----------------|----------------|----------------|
| S10, S11, S12 | S13, S14, S15 | S16, S17, S18 |
| e10, e11, e12 | e13, e14, e15 | e16, e17, e18 |
|----------------|----------------|----------------|
| → (apply) | → (apply) | → (apply) |
Accumulated gradient for the weight w1 after the second iteration (considering all GPUs):
Total gradient for w1 = e1 + e2 + e3 + e4 + e5 + e6 + e7 + e8 + e9 + e10 + e11 + e12 + e13 + e14 + e15 + e16 + e17 + e18
Weight update for w1:
w1_new = w1_old - learning rate × (Total gradient for w1 / 18)
```

**Example 2:**
Micro batch size: 2
Gradient accumulation steps: 1
Number of GPUs: 3
Total batch size = 2 * 1 * 3 = 6

```
| GPU 1 | GPU 2 | GPU 3 |
|-----------|-----------|-----------|
| S1, S2 | S3, S4 | S5, S6 |
| e1, e2 | e3, e4 | e5, e6 |
|-----------|-----------|-----------|
| → (apply) | → (apply) | → (apply) |
Accumulated gradient for the weight w1 (considering all GPUs):
Total gradient for w1 = e1 + e2 + e3 + e4 + e5 + e6
Weight update for w1:
w1_new = w1_old - learning rate × (Total gradient for w1 / 6)
```
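Both worked examples reduce to the same two formulas. A quick sketch checking the arithmetic (the per-sample errors would be real gradients in training; here they are placeholders):

```python
def total_batch_size(micro_batch_size, gradient_accumulation_steps, num_gpus):
    # samples contributing to one optimizer step
    return micro_batch_size * gradient_accumulation_steps * num_gpus

def weight_update(w_old, lr, errors):
    # gradients are summed across micro-batches and GPUs, then averaged
    return w_old - lr * sum(errors) / len(errors)

print(total_batch_size(3, 2, 3))  # Example 1 → 18
print(total_batch_size(2, 1, 3))  # Example 2 → 6
print(weight_update(1.0, 0.01, [0.5] * 18))  # one step with uniform errors
```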
58 changes: 25 additions & 33 deletions docs/dataset-formats/conversation.qmd
@@ -1,71 +1,63 @@
---
title: Conversation
description: Conversation format for supervised fine-tuning.
order: 1
order: 3
---

## Formats

### sharegpt
## sharegpt

conversations where `from` is `human`/`gpt`. (optional: first row with role `system` to override default system prompt)

```{.json filename="data.jsonl"}
{"conversations": [{"from": "...", "value": "..."}]}
```

Note: `type: sharegpt` opens a special config `conversation:` that enables conversions to many Conversation types. See [the docs](../docs/config.qmd) for all config options.
Note: `type: sharegpt` opens special configs:
- `conversation`: enables conversions to many Conversation types. Refer to the 'name' [here](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py) for options.
- `roles`: allows you to specify the roles for input and output. This is useful for datasets with custom roles, such as `tool`, to support masking.
- `field_human`: specify the key to use instead of `human` in the conversation.
- `field_model`: specify the key to use instead of `gpt` in the conversation.

```yaml
datasets:
  path: ...
  type: sharegpt

  conversation: # Options (see Conversation 'name'): https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py
  field_human: # Optional[str]. Human key to use for conversation.
  field_model: # Optional[str]. Assistant key to use for conversation.
  # Add additional keys from your dataset as input or output roles
  roles:
    input: # Optional[List[str]]. These will be masked based on train_on_input
    output: # Optional[List[str]].
```
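A rough sketch of what the `field_human`/`field_model` remapping amounts to, for a hypothetical dataset that uses `user`/`bot` instead of the defaults (illustration only, not Axolotl's internal code):

```python
import json

# hypothetical values that would go under `field_human:` / `field_model:`
field_human, field_model = "user", "bot"

line = '{"conversations": [{"from": "user", "value": "Hi"}, {"from": "bot", "value": "Hello!"}]}'
remap = {field_human: "human", field_model: "gpt"}  # normalize to sharegpt defaults

turns = json.loads(line)["conversations"]
normalized = [{"from": remap.get(t["from"], t["from"]), "value": t["value"]} for t in turns]
print(normalized[0]["from"], normalized[1]["from"])  # human gpt
```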
### pygmalion
## pygmalion
```{.json filename="data.jsonl"}
{"conversations": [{"role": "...", "value": "..."}]}
```

### sharegpt.load_role
## sharegpt.load_role

conversations where `role` is used instead of `from`

```{.json filename="data.jsonl"}
{"conversations": [{"role": "...", "value": "..."}]}
```

### sharegpt.load_guanaco
## sharegpt.load_guanaco

conversations where `from` is `prompter`/`assistant` instead of the default sharegpt roles

```{.json filename="data.jsonl"}
{"conversations": [{"from": "...", "value": "..."}]}
```

### sharegpt_jokes
## sharegpt_jokes

creates a chat where the bot is asked to tell a joke, then explain why the joke is funny

```{.json filename="data.jsonl"}
{"conversations": [{"title": "...", "text": "...", "explanation": "..."}]}
```

## How to add custom prompts for instruction-tuning

For a dataset that is preprocessed for instruction purposes:

```{.json filename="data.jsonl"}
{"input": "...", "output": "..."}
```

You can use this example in your YAML config:

```{.yaml filename="config.yaml"}
datasets:
  - path: repo
    type:
      system_prompt: ""
      field_system: system
      field_instruction: input
      field_output: output
      format: "[INST] {instruction} [/INST]"
      no_input_format: "[INST] {instruction} [/INST]"
```

See all config options [here](../docs/config.qmd).
24 changes: 24 additions & 0 deletions docs/dataset-formats/inst_tune.qmd
@@ -163,3 +163,27 @@ instruction, adds additional eos tokens
```{.json filename="data.jsonl"}
{"prompt": "...", "generation": "..."}
```

## How to add custom prompt format

For a dataset that is preprocessed for instruction purposes:

```{.json filename="data.jsonl"}
{"input": "...", "output": "..."}
```

You can use this example in your YAML config:

```{.yaml filename="config.yaml"}
datasets:
  - path: repo
    type:
      system_prompt: ""
      field_system: system
      field_instruction: input
      field_output: output
      format: "[INST] {instruction} [/INST]"
      no_input_format: "[INST] {instruction} [/INST]"
```

See all config options [here](../config.qmd).
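A rough sketch of what the config above asks the loader to do with one record. The field names mirror the YAML; the joining whitespace is an assumption for illustration, not Axolotl's exact behavior:

```python
record = {"input": "Summarize this paragraph.", "output": "A short summary."}

fmt = "[INST] {instruction} [/INST]"               # the `format` entry
prompt = fmt.format(instruction=record["input"])   # `field_instruction: input`
text = prompt + " " + record["output"]             # `field_output: output`
print(text)  # [INST] Summarize this paragraph. [/INST] A short summary.
```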
2 changes: 1 addition & 1 deletion docs/dataset-formats/pretraining.qmd
@@ -1,7 +1,7 @@
---
title: Pre-training
description: Data format for a pre-training completion task.
order: 3
order: 1
---

For pretraining, there is no prompt template or roles. The only required field is `text`:
2 changes: 1 addition & 1 deletion docs/input_output.qmd
@@ -43,7 +43,7 @@ labels so that your model can focus on predicting the outputs only.
### You may not want prompt templates

However, there are many situations where you don't want to use one of
these formats or templates (I usually don't!). This is because they can:
these formats or templates. This is because they can:

- Add unnecessary boilerplate to your prompts.
- Create artifacts like special delimiters `<|im_start|>` that can
