* feat: update doc contents
* chore: move batch vs ga docs
* feat: update lambdalabs instructions
* fix: refactor dev instructions
Commit c2b64e4, 1 parent 5760099. Showing 6 changed files with 116 additions and 113 deletions.
---
title: Batch size vs Gradient accumulation
description: Understanding batch size and gradient accumulation steps
---

Gradient accumulation means accumulating gradients over several mini-batches and updating the model weights only afterward. When the samples in each batch are diverse, this technique doesn't significantly impact learning.

This method allows for effective training with larger effective batch sizes without needing proportionally more memory. Here's why:

1. **Memory Consumption with Batch Size**: The primary reason increasing the batch size impacts memory is the storage required for intermediate activations. When you forward propagate a batch through a network, the activations at each layer must be stored for every sample in the batch, because they are needed during backpropagation to compute gradients. Larger batches therefore mean more activations, leading to greater GPU memory consumption.

2. **Gradient Accumulation**: With gradient accumulation, you effectively simulate a larger batch size by accumulating gradients over several smaller batches (micro-batches). At any given time, however, you only forward and backward propagate a single micro-batch, so you only store activations for that micro-batch, not the full accumulated batch. As a result, you can simulate the effect of a larger batch size without the memory cost of storing activations for a large batch.
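
The equivalence can be sketched in plain Python. This is a toy illustration with a single scalar weight and made-up per-sample gradients, not the library's actual training loop:

```python
# Toy illustration: accumulating gradients over micro-batches and applying one
# averaged update matches a single update over the full batch.
# All gradient values below are made up for demonstration.

learning_rate = 0.1
w = 1.0  # a single scalar weight

# Per-sample gradients for a "full batch" of 6 samples,
# split into 3 micro-batches of 2 samples each.
micro_batches = [[0.25, 0.5], [0.125, 0.25], [0.5, 0.125]]

# Gradient accumulation: sum gradients micro-batch by micro-batch, update once.
accumulated = 0.0
for micro_batch in micro_batches:
    accumulated += sum(micro_batch)  # "backward pass" of one micro-batch
total_samples = sum(len(mb) for mb in micro_batches)
w_accum = w - learning_rate * (accumulated / total_samples)

# One big batch: same samples, one "backward pass".
all_grads = [g for mb in micro_batches for g in mb]
w_big = w - learning_rate * (sum(all_grads) / len(all_grads))

print(w_accum == w_big)  # True: both paths apply the same averaged update
```

The memory difference doesn't show up in this toy: the point is that only one micro-batch's activations would ever be live at a time, while the resulting weight update is the same.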

**Example 1:**
Micro batch size: 3
Gradient accumulation steps: 2
Number of GPUs: 3
Total batch size = 3 * 2 * 3 = 18

```
| GPU 1          | GPU 2          | GPU 3          |
|----------------|----------------|----------------|
| S1, S2, S3     | S4, S5, S6     | S7, S8, S9     |
| e1, e2, e3     | e4, e5, e6     | e7, e8, e9     |
|----------------|----------------|----------------|
| → (accumulate) | → (accumulate) | → (accumulate) |
|----------------|----------------|----------------|
| S10, S11, S12  | S13, S14, S15  | S16, S17, S18  |
| e10, e11, e12  | e13, e14, e15  | e16, e17, e18  |
|----------------|----------------|----------------|
| → (apply)      | → (apply)      | → (apply)      |

Accumulated gradient for the weight w1 after the second iteration (considering all GPUs):
Total gradient for w1 = e1 + e2 + e3 + e4 + e5 + e6 + e7 + e8 + e9 + e10 + e11 + e12 + e13 + e14 + e15 + e16 + e17 + e18

Weight update for w1:
w1_new = w1_old - learning rate × (Total gradient for w1 / 18)
```

**Example 2:**
Micro batch size: 2
Gradient accumulation steps: 1
Number of GPUs: 3
Total batch size = 2 * 1 * 3 = 6

```
| GPU 1     | GPU 2     | GPU 3     |
|-----------|-----------|-----------|
| S1, S2    | S3, S4    | S5, S6    |
| e1, e2    | e3, e4    | e5, e6    |
|-----------|-----------|-----------|
| → (apply) | → (apply) | → (apply) |

Accumulated gradient for the weight w1 (considering all GPUs):
Total gradient for w1 = e1 + e2 + e3 + e4 + e5 + e6

Weight update for w1:
w1_new = w1_old - learning rate × (Total gradient for w1 / 6)
```
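
Both examples follow the same formula for the effective batch size; a quick sanity check in Python:

```python
def total_batch_size(micro_batch_size: int, grad_accum_steps: int, num_gpus: int) -> int:
    """Effective batch size = micro batch size * accumulation steps * GPU count."""
    return micro_batch_size * grad_accum_steps * num_gpus

print(total_batch_size(3, 2, 3))  # Example 1 -> 18
print(total_batch_size(2, 1, 3))  # Example 2 -> 6
```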
---
title: Conversation
description: Conversation format for supervised fine-tuning.
order: 3
---

## Formats

## sharegpt

conversations where `from` is `human`/`gpt`. (optional: first row with role `system` to override default system prompt)

```{.json filename="data.jsonl"}
{"conversations": [{"from": "...", "value": "..."}]}
```

Note: `type: sharegpt` opens special configs:

- `conversation`: enables conversions to many Conversation types. Refer to the 'name' [here](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py) for options.
- `roles`: allows you to specify the roles for input and output. This is useful for datasets with custom roles such as `tool` etc. to support masking.
- `field_human`: specify the key to use instead of `human` in the conversation.
- `field_model`: specify the key to use instead of `gpt` in the conversation.

```yaml
datasets:
  - path: ...
    type: sharegpt

    conversation: # Options (see Conversation 'name'): https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py
    field_human: # Optional[str]. Human key to use for conversation.
    field_model: # Optional[str]. Assistant key to use for conversation.
    # Add additional keys from your dataset as input or output roles
    roles:
      input: # Optional[List[str]]. These will be masked based on train_on_input
      output: # Optional[List[str]].
```
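
To see what the `field_human`/`field_model` options amount to, here is a hypothetical sketch (not axolotl's actual implementation) of remapping a dataset row with custom role keys onto the default `human`/`gpt` roles:

```python
# Hypothetical illustration of field_human / field_model remapping;
# this is not axolotl's actual code. The role names "person" and "bot"
# are made-up examples of custom dataset roles.

def remap_roles(row, field_human="person", field_model="bot"):
    """Rewrite custom role names onto the default sharegpt human/gpt roles."""
    mapping = {field_human: "human", field_model: "gpt"}
    return {
        "conversations": [
            {"from": mapping.get(turn["from"], turn["from"]), "value": turn["value"]}
            for turn in row["conversations"]
        ]
    }

row = {"conversations": [
    {"from": "person", "value": "Hi!"},
    {"from": "bot", "value": "Hello, how can I help?"},
]}
print(remap_roles(row))
# {'conversations': [{'from': 'human', 'value': 'Hi!'},
#                    {'from': 'gpt', 'value': 'Hello, how can I help?'}]}
```

Roles not named by either field (e.g. a custom `tool` role declared under `roles:`) pass through unchanged in this sketch, which is where the masking configuration would apply.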
## pygmalion

```{.json filename="data.jsonl"}
{"conversations": [{"role": "...", "value": "..."}]}
```

## sharegpt.load_role

conversations where `role` is used instead of `from`

```{.json filename="data.jsonl"}
{"conversations": [{"role": "...", "value": "..."}]}
```

## sharegpt.load_guanaco

conversations where `from` is `prompter`/`assistant` instead of the default sharegpt roles

```{.json filename="data.jsonl"}
{"conversations": [{"from": "...", "value": "..."}]}
```

## sharegpt_jokes

creates a chat where the bot is asked to tell a joke, then explain why the joke is funny

```{.json filename="data.jsonl"}
{"conversations": [{"title": "...", "text": "...", "explanation": "..."}]}
```

## How to add custom prompts for instruction-tuning

For a dataset that is preprocessed for instruction purposes:

```{.json filename="data.jsonl"}
{"input": "...", "output": "..."}
```

You can use this example in your YAML config:

```{.yaml filename="config.yaml"}
datasets:
  - path: repo
    type:
      system_prompt: ""
      field_system: system
      field_instruction: input
      field_output: output
      format: "[INST] {instruction} [/INST]"
      no_input_format: "[INST] {instruction} [/INST]"
```
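
To see what the `format` template above produces, here is a rough sketch of the substitution. The field names follow the config above; the exact rendering logic inside axolotl may differ:

```python
# Rough sketch of how an instruction row is rendered with the format template;
# the real implementation may differ. The row content is a made-up example.

row = {"input": "Summarize the following text.", "output": "..."}

fmt = "[INST] {instruction} [/INST]"
prompt = fmt.format(instruction=row["input"])

print(prompt)  # [INST] Summarize the following text. [/INST]
```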

See full config options [here](../docs/config.qmd).