diff --git a/README.md b/README.md
index 027c476c3b..ad5fa379a0 100644
--- a/README.md
+++ b/README.md
@@ -35,13 +35,12 @@ Features:
- [Google Colab](#google-colab)
- [Launching on public clouds via SkyPilot](#launching-on-public-clouds-via-skypilot)
- [Dataset](#dataset)
- - [How to Add Custom Prompts](#how-to-add-custom-prompts)
- - [How to Use Custom Pretokenized Dataset](#how-to-use-your-custom-pretokenized-dataset)
- [Config](#config)
- [Train](#train)
- [Inference](#inference-playground)
- [Merge LORA to Base](#merge-lora-to-base)
- [Special Tokens](#special-tokens)
+ - [All Config Options](#all-config-options)
- Advanced Topics
- [Multipack](./docs/multipack.qmd)
- [RLHF & DPO](./docs/rlhf.qmd)
@@ -299,186 +298,9 @@ HF_TOKEN=xx BUCKET= sky spot launch axolotl-spot.yaml --env HF_TOKE
### Dataset
-Axolotl supports a variety of dataset formats. Below are some of the formats you can use.
-Have dataset(s) in one of the following format (JSONL recommended):
+Axolotl supports a variety of dataset formats. It is recommended to use a JSONL file. The schema of the JSONL depends upon the task and the prompt template you wish to use. Instead of a JSONL, you can also use a HuggingFace dataset with columns for each JSONL field.
-#### Pretraining
-
-- `completion`: raw corpus
- ```json
- {"text": "..."}
- ```
-
-Note: Axolotl usually loads the entire dataset into memory. This will be challenging for large datasets. Use the following config to enable streaming:
-
-```yaml
-pretraining_dataset: # hf path only
-```
-
-#### Supervised finetuning
-
-##### Instruction
-
-- `alpaca`: instruction; input(optional)
- ```json
- {"instruction": "...", "input": "...", "output": "..."}
- ```
-
-
-
-See other formats
-
-- `jeopardy`: question and answer
- ```json
- {"question": "...", "category": "...", "answer": "..."}
- ```
-- `oasst`: instruction
- ```json
- {"INSTRUCTION": "...", "RESPONSE": "..."}
- ```
-- `gpteacher`: instruction; input(optional)
- ```json
- {"instruction": "...", "input": "...", "response": "..."}
- ```
-- `reflection`: instruction with reflect; input(optional)
- ```json
- {"instruction": "...", "input": "...", "output": "...", "reflection": "...", "corrected": "..."}
- ```
-- `explainchoice`: question, choices, (solution OR explanation)
- ```json
- {"question": "...", "choices": ["..."], "solution": "...", "explanation": "..."}
- ```
-- `concisechoice`: question, choices, (solution OR explanation)
- ```json
- {"question": "...", "choices": ["..."], "solution": "...", "explanation": "..."}
- ```
-- `summarizetldr`: article and summary
- ```json
- {"article": "...", "summary": "..."}
- ```
-- `alpaca_chat`: basic instruct for alpaca chat
- ```json
- {"instruction": "...", "input": "...", "response": "..."}
- ```
-- `alpaca_chat.load_qa`: question and answer for alpaca chat
- ```json
- {"question": "...", "answer": "..."}
- ```
-- `alpaca_chat.load_concise`: question and answer for alpaca chat, for concise answers
- ```json
- {"instruction": "...", "input": "...", "response": "..."}
- ```
-- `alpaca_chat.load_camel_ai`: question and answer for alpaca chat, for load_camel_ai
- ```json
- {"message_1": "...", "message_2": "..."}
- ```
-- `alpaca_w_system.load_open_orca`: support for open orca datasets with included system prompts, instruct
- ```json
- {"system_prompt": "...", "question": "...", "response": "..."}
- ```
-- `context_qa`: in context question answering from an article
- ```json
- {"article": "...", "question": "...", "answer": "..."}
- ```
-- `context_qa.load_v2`: in context question answering (alternate)
- ```json
- {"context": "...", "question": "...", "answer": "..."}
- ```
-- `context_qa.load_404`: in context question answering from an article, with default response for no answer from context
- ```json
- {"article": "...", "unanswerable_question": "..."}
- ```
-- `creative_acr.load_answer`: instruction and revision
- ```json
- {"instruction": "...", "revision": "..."}
- ```
-- `creative_acr.load_critique`: critique
- ```json
- {"scores": "...", "critiques": "...", "instruction": "...", "answer": "..."}
- ```
-- `creative_acr.load_revise`: critique and revise
- ```json
- {"scores": "...", "critiques": "...", "instruction": "...", "answer": "...", "revision": "..."}
- ```
-- `metharme`: instruction, adds additional eos tokens
- ```json
- {"prompt": "...", "generation": "..."}
- ```
-
-
-
-##### Template-Free
-
-- `input_output`: template-free prompt construction
- ```json
- {"segments": [{"label": true|false, "text": "..."}]}
- ```
-
-This is a special format that allows you to construct prompts without using templates. This is for advanced users who want more freedom with prompt construction. See [these docs](docs/input_output.qmd) for more details.
-
-##### Conversation
-
-- `sharegpt`: conversations where `from` is `human`/`gpt`. (optional: first row with role `system` to override default system prompt)
- ```json
- {"conversations": [{"from": "...", "value": "..."}]}
- ```
-
-
-
-See other formats
-
-- `pygmalion`: pygmalion
- ```json
- {"conversations": [{"role": "...", "value": "..."}]}
- ```
-- `sharegpt.load_role`: conversations where `role` is used instead of `from`
- ```json
- {"conversations": [{"role": "...", "value": "..."}]}
- ```
-- `sharegpt.load_guanaco`: conversations where `from` is `prompter`/`assistant` instead of default sharegpt
- ```json
- {"conversations": [{"from": "...", "value": "..."}]}
- ```
-- `sharegpt_jokes`: creates a chat where bot is asked to tell a joke, then explain why the joke is funny
- ```json
- {"conversations": [{"title": "...", "text": "...", "explanation": "..."}]}
- ```
-
-
-
-Note: `type: sharegpt` opens a special config `conversation:` that enables conversions to many Conversation types. See dataset section under [all yaml options](#all-yaml-options).
-
-#### How to add custom prompts
-
-For a dataset that is preprocessed for instruction purposes:
-
-```json
-{"input": "...", "output": "..."}
-```
-
-You can use this example in your YAML config:
-
-```yaml
-datasets:
- - path: repo
- type:
- system_prompt: ""
- field_system: system
- field_instruction: input
- field_output: output
- format: "[INST] {instruction} [/INST]"
- no_input_format: "[INST] {instruction} [/INST]"
-```
-See full config options under [all yaml options](#all-yaml-options).
-
-#### How to use your custom pretokenized dataset
-
-- Do not pass a `type:`
-- Columns in Dataset must be exactly `input_ids`, `attention_mask`, `labels`
-
-```yaml
-- path: ...
-```
+See [these docs](https://openaccess-ai-collective.github.io/axolotl/docs/dataset-formats/) for more information on how to use different dataset formats.
### Config
@@ -563,452 +385,9 @@ See [examples](examples) for quick start. It is recommended to duplicate and mod
- v_proj
```
-
+#### All Config Options
-All yaml options (click to expand)
-
-```yaml
-# This is the huggingface model that contains *.pt, *.safetensors, or *.bin files
-# This can also be a relative path to a model on disk
-base_model: ./llama-7b-hf
-# You can specify an ignore pattern if the model repo contains more than 1 model type (*.pt, etc)
-base_model_ignore_patterns:
-# If the base_model repo on hf hub doesn't include configuration .json files,
-# You can set that here, or leave this empty to default to base_model
-base_model_config: ./llama-7b-hf
-# You can specify to choose a specific model revision from huggingface hub
-revision_of_model:
-# Optional tokenizer configuration path in case you want to use a different tokenizer
-# than the one defined in the base model
-tokenizer_config:
-# If you want to specify the type of model to load, AutoModelForCausalLM is a good choice too
-model_type: AutoModelForCausalLM
-# Corresponding tokenizer for the model AutoTokenizer is a good choice
-tokenizer_type: AutoTokenizer
-# Trust remote code for untrusted source
-trust_remote_code:
-# use_fast option for tokenizer loading from_pretrained, default to True
-tokenizer_use_fast:
-# Whether to use the legacy tokenizer setting, defaults to True
-tokenizer_legacy:
-# Resize the model embeddings when new tokens are added to multiples of 32
-# This is reported to improve training speed on some models
-resize_token_embeddings_to_32x:
-
-# (Internal use only)
-# Used to identify which the model is based on
-is_falcon_derived_model:
-is_llama_derived_model:
-is_qwen_derived_model:
-# Please note that if you set this to true, `padding_side` will be set to "left" by default
-is_mistral_derived_model:
-
-# optional overrides to the base model configuration
-overrides_of_model_config:
- # RoPE Scaling https://github.com/huggingface/transformers/pull/24653
- rope_scaling:
- type: # linear | dynamic
- factor: # float
-
-# optional overrides to the bnb 4bit quantization configuration
-# https://huggingface.co/docs/transformers/main/main_classes/quantization#transformers.BitsAndBytesConfig
-bnb_config_kwargs:
- # These are default values
- llm_int8_has_fp16_weight: false
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: true
-
-
-# Whether you are training a 4-bit GPTQ quantized model
-gptq: true
-
-# This will attempt to quantize the model down to 8 bits and use adam 8 bit optimizer
-load_in_8bit: true
-# Use bitsandbytes 4 bit
-load_in_4bit:
-
-# Use CUDA bf16
-bf16: true # bool or 'full' for `bf16_full_eval`. require >=ampere
-# Use CUDA fp16
-fp16: true
-# Use CUDA tf32
-tf32: true # require >=ampere
-
-# No AMP (automatic mixed precision)
-bfloat16: true # require >=ampere
-float16: true
-
-# Limit the memory for all available GPUs to this amount (if an integer, expressed in gigabytes); default: unset
-gpu_memory_limit: 20GiB
-# Do the LoRA/PEFT loading on CPU -- this is required if the base model is so large it takes up most or all of the available GPU VRAM, e.g. during a model and LoRA merge
-lora_on_cpu: true
-
-# A list of one or more datasets to finetune the model with
-datasets:
- # HuggingFace dataset repo | s3://,gs:// path | "json" for local dataset, make sure to fill data_files
- - path: vicgalle/alpaca-gpt4
- # The type of prompt to use for training. [alpaca, sharegpt, gpteacher, oasst, reflection]
- type: alpaca # format | format: (chat/instruct) | .load_
- ds_type: # Optional[str] (json|arrow|parquet|text|csv) defines the datatype when path is a file
- data_files: # Optional[str] path to source data files
- shards: # Optional[int] number of shards to split data into
- name: # Optional[str] name of dataset configuration to load
- train_on_split: train # Optional[str] name of dataset split to load from
-
- # Optional[str] fastchat conversation type, only used with type: sharegpt
- conversation: # Options (see Conversation 'name'): https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py
- field_human: # Optional[str]. Human key to use for conversation.
- field_model: # Optional[str]. Assistant key to use for conversation.
- # Add additional keys from your dataset as input or output roles
- roles:
- input: # Optional[List[str]]. These will be masked based on train_on_input
- output: # Optional[List[str]].
-
- # Custom user instruction prompt
- - path: repo
- type:
- # The below are defaults. only set what's needed if you use a different column name.
- system_prompt: ""
- system_format: "{system}"
- field_system: system
- field_instruction: instruction
- field_input: input
- field_output: output
-
- # Customizable to be single line or multi-line
- # Use {instruction}/{input} as key to be replaced
- # 'format' can include {input}
- format: |-
- User: {instruction} {input}
- Assistant:
- # 'no_input_format' cannot include {input}
- no_input_format: "{instruction} "
-
- # For `completion` datsets only, uses the provided field instead of `text` column
- field:
-
-# If false, the datasets will not be shuffled and will keep their original order in `datasets`.
-# The same applies to the `test_datasets` option and the `pretraining_dataset` option. Default is true.
-shuffle_merged_datasets: true
-
-# A list of one or more datasets to eval the model with.
-# You can use either test_datasets, or val_set_size, but not both.
-test_datasets:
- - path: /workspace/data/eval.jsonl
- ds_type: json
- # You need to specify a split. For "json" datasets the default split is called "train".
- split: train
- type: completion
- data_files:
- - /workspace/data/eval.jsonl
-
-# use RL training: 'dpo', 'ipo', 'kto_pair'
-rl:
-
-# Saves the desired chat template to the tokenizer_config.json for easier inferencing
-# Currently supports chatml and inst (mistral/mixtral)
-chat_template: chatml
-# Changes the default system message
-default_system_message: You are a helpful assistant. Please give a long and detailed answer. # Currently only supports chatml.
-# Axolotl attempts to save the dataset as an arrow after packing the data together so
-# subsequent training attempts load faster, relative path
-dataset_prepared_path: data/last_run_prepared
-# Push prepared dataset to hub
-push_dataset_to_hub: # repo path
-# The maximum number of processes to use while preprocessing your input dataset. This defaults to `os.cpu_count()`
-# if not set.
-dataset_processes: # defaults to os.cpu_count() if not set
-# Keep dataset in memory while preprocessing
-# Only needed if cached dataset is taking too much storage
-dataset_keep_in_memory:
-# push checkpoints to hub
-hub_model_id: # private repo path to push finetuned model
-# how to push checkpoints to hub
-# https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.hub_strategy
-hub_strategy:
-# Whether to use hf `use_auth_token` for loading datasets. Useful for fetching private datasets
-# Required to be true when used in combination with `push_dataset_to_hub`
-hf_use_auth_token: # boolean
-# How much of the dataset to set aside as evaluation. 1 = 100%, 0.50 = 50%, etc. 0 for no eval.
-val_set_size: 0.04
-# Num shards for whole dataset
-dataset_shard_num:
-# Index of shard to use for whole dataset
-dataset_shard_idx:
-
-# The maximum length of an input to train with, this should typically be less than 2048
-# as most models have a token/context limit of 2048
-sequence_len: 2048
-# Pad inputs so each step uses constant sized buffers
-# This will reduce memory fragmentation and may prevent OOMs, by re-using memory more efficiently
-pad_to_sequence_len:
-# Use efficient multi-packing with block diagonal attention and per sequence position_ids. Recommend set to 'true'
-sample_packing:
-# Set to 'false' if getting errors during eval with sample_packing on.
-eval_sample_packing:
-# You can set these packing optimizations AFTER starting a training at least once.
-# The trainer will provide recommended values for these values.
-sample_packing_eff_est:
-total_num_tokens:
-
-# Passed through to transformers when loading the model when launched without accelerate
-# Use `sequential` when training w/ model parallelism to limit memory
-device_map:
-# Defines the max memory usage per gpu on the system. Passed through to transformers when loading the model.
-max_memory:
-
-# If you want to use 'lora' or 'qlora' or leave blank to train all parameters in original model
-adapter: lora
-# If you already have a lora model trained that you want to load, put that here.
-# This means after training, if you want to test the model, you should set this to the value of `output_dir`.
-# Note that if you merge an adapter to the base model, a new subdirectory `merged` will be created under the `output_dir`.
-lora_model_dir:
-
-# LoRA hyperparameters
-# For more details about the following options, see:
-# https://www.anyscale.com/blog/fine-tuning-llms-lora-or-full-parameter-an-in-depth-analysis-with-llama-2
-lora_r: 8
-lora_alpha: 16
-lora_dropout: 0.05
-lora_target_modules:
- - q_proj
- - v_proj
-# - k_proj
-# - o_proj
-# - gate_proj
-# - down_proj
-# - up_proj
-lora_target_linear: # If true, will target all linear modules
-peft_layers_to_transform: # The layer indices to transform, otherwise, apply to all layers
-
-# If you added new tokens to the tokenizer, you may need to save some LoRA modules because they need to know the new tokens.
-# For LLaMA and Mistral, you need to save `embed_tokens` and `lm_head`. It may vary for other models.
-# `embed_tokens` converts tokens to embeddings, and `lm_head` converts embeddings to token probabilities.
-# https://github.com/huggingface/peft/issues/334#issuecomment-1561727994
-lora_modules_to_save:
-# - embed_tokens
-# - lm_head
-
-lora_fan_in_fan_out: false
-
-peft:
- # Configuration options for loftq initialization for LoRA
- # https://huggingface.co/docs/peft/developer_guides/quantization#loftq-initialization
- loftq_config:
- loftq_bits: # typically 4 bits
-
-# ReLoRA configuration
-# Must use either 'lora' or 'qlora' adapter, and does not support fsdp or deepspeed
-relora_steps: # Number of steps per ReLoRA restart
-relora_warmup_steps: # Number of per-restart warmup steps
-relora_anneal_steps: # Number of anneal steps for each relora cycle
-relora_prune_ratio: # threshold for optimizer magnitude when pruning
-relora_cpu_offload: # True to perform lora weight merges on cpu during restarts, for modest gpu memory savings
-
-# wandb configuration if you're using it
-# Make sure your `WANDB_API_KEY` environment variable is set (recommended) or you login to wandb with `wandb login`.
-wandb_mode: # "offline" to save run metadata locally and not sync to the server, "disabled" to turn off wandb
-wandb_project: # Your wandb project name
-wandb_entity: # A wandb Team name if using a Team
-wandb_watch:
-wandb_name: # Set the name of your wandb run
-wandb_run_id: # Set the ID of your wandb run
-wandb_log_model: # "checkpoint" to log model to wandb Artifacts every `save_steps` or "end" to log only at the end of training
-
-# mlflow configuration if you're using it
-mlflow_tracking_uri: # URI to mlflow
-mlflow_experiment_name: # Your experiment name
-hf_mlflow_log_artifacts: # set to true to copy each saved checkpoint on each save to mlflow artifact registry
-
-# Where to save the full-finetuned model to
-output_dir: ./completed-model
-
-# Whether to use torch.compile and which backend to use
-torch_compile: # bool
-torch_compile_backend: # Optional[str]
-
-# Training hyperparameters
-
-# If greater than 1, backpropagation will be skipped and the gradients will be accumulated for the given number of steps.
-gradient_accumulation_steps: 1
-# The number of samples to include in each batch. This is the number of samples sent to each GPU.
-micro_batch_size: 2
-eval_batch_size:
-num_epochs: 4
-warmup_steps: 100 # cannot use with warmup_ratio
-warmup_ratio: 0.05 # cannot use with warmup_steps
-learning_rate: 0.00003
-lr_quadratic_warmup:
-logging_steps:
-eval_steps: # Leave empty to eval at each epoch, integers for every N steps. decimal for fraction of total steps
-evals_per_epoch: # number of times per epoch to run evals, mutually exclusive with eval_steps
-save_strategy: # Set to `no` to skip checkpoint saves
-save_steps: # Leave empty to save at each epoch
-saves_per_epoch: # number of times per epoch to save a checkpoint, mutually exclusive with save_steps
-save_total_limit: # Checkpoints saved at a time
-# Maximum number of iterations to train for. It precedes num_epochs which means that
-# if both are set, num_epochs will not be guaranteed.
-# e.g., when 1 epoch is 1000 steps => `num_epochs: 2` and `max_steps: 100` will train for 100 steps
-max_steps:
-
-eval_table_size: # Approximate number of predictions sent to wandb depending on batch size. Enabled above 0. Default is 0
-eval_max_new_tokens: # Total number of tokens generated for predictions sent to wandb. Default is 128
-eval_causal_lm_metrics: # HF evaluate metrics used during evaluation. Default is ["sacrebleu", "comet", "ter", chrf]
-
-loss_watchdog_threshold: # High loss value, indicating the learning has broken down (a good estimate is ~2 times the loss at the start of training)
-loss_watchdog_patience: # Number of high-loss steps in a row before the trainer aborts (default: 3)
-
-# Save model as safetensors (require safetensors package)
-save_safetensors:
-
-# Whether to mask out or include the human's prompt from the training labels
-train_on_inputs: false
-# Group similarly sized data to minimize padding.
-# May be slower to start, as it must download and sort the entire dataset.
-# Note that training loss may have an oscillating pattern with this enabled.
-group_by_length: false
-
-# Whether to use gradient checkpointing https://huggingface.co/docs/transformers/v4.18.0/en/performance#gradient-checkpointing
-gradient_checkpointing: false
-# additional kwargs to pass to the trainer for gradient checkpointing
-# gradient_checkpointing_kwargs:
-# use_reentrant: true
-
-# Stop training after this many evaluation losses have increased in a row
-# https://huggingface.co/transformers/v4.2.2/_modules/transformers/trainer_callback.html#EarlyStoppingCallback
-early_stopping_patience: 3
-
-# Specify a scheduler and kwargs to use with the optimizer
-lr_scheduler: # 'one_cycle' | 'log_sweep' | empty for cosine
-lr_scheduler_kwargs:
-cosine_min_lr_ratio: # decay lr to some percentage of the peak lr, e.g. cosine_min_lr_ratio=0.1 for 10% of peak lr
-cosine_constant_lr_ratio: # freeze lr at some percentage of the step, e.g. cosine_constant_lr_ratio=0.8 means start cosine_min_lr at 80% of training step (https://arxiv.org/pdf/2308.04014.pdf)
-
-# For one_cycle optim
-lr_div_factor: # Learning rate div factor
-
-# Specify optimizer
-# Valid values are driven by the Transformers OptimizerNames class, see:
-# https://github.com/huggingface/transformers/blob/95b374952dc27d8511541d6f5a4e22c9ec11fb24/src/transformers/training_args.py#L134
-#
-# Note that not all optimizers may be available in your environment, ex: 'adamw_anyprecision' is part of
-# torchdistx, 'adamw_bnb_8bit' is part of bnb.optim.Adam8bit, etc. When in doubt, it is recommended to start with the optimizer used
-# in the examples/ for your model and fine-tuning use case.
-#
-# Valid values for 'optimizer' include:
-# - adamw_hf
-# - adamw_torch
-# - adamw_torch_fused
-# - adamw_torch_xla
-# - adamw_apex_fused
-# - adafactor
-# - adamw_anyprecision
-# - sgd
-# - adagrad
-# - adamw_bnb_8bit
-# - lion_8bit
-# - lion_32bit
-# - paged_adamw_32bit
-# - paged_adamw_8bit
-# - paged_lion_32bit
-# - paged_lion_8bit
-# - galore_adamw
-# - galore_adamw_8bit
-# - galore_adafactor
-# - galore_adamw_layerwise
-# - galore_adamw_8bit_layerwise
-# - galore_adafactor_layerwise
-optimizer:
-# Dictionary of arguments to pass to the optimizer
-optim_args:
-# For Galore Optimizers the following optim_args are available
-# rank: # type: int
-# update_proj_gap # type: int
-# scale # type: float
-# proj_type: # type: str, default = std
-
-# The target modules to optimize, i.e. the module names that you would like to train, right now this is used only for GaLore algorithm
-optim_target_modules:
-# - self_attn # for llama
-# - mlp
-
-# Specify weight decay
-weight_decay:
-# adamw hyperparams
-adam_beta1:
-adam_beta2:
-adam_epsilon:
-# Gradient clipping max norm
-max_grad_norm:
-
-# Augmentation techniques
-# NEFT https://arxiv.org/abs/2310.05914, set this to a number (paper default is 5) to add noise to embeddings
-# currently only supported on Llama and Mistral
-neftune_noise_alpha:
-
-# Whether to bettertransformers
-flash_optimum:
-# Whether to use xformers attention patch https://github.com/facebookresearch/xformers:
-xformers_attention:
-# Whether to use flash attention patch https://github.com/Dao-AILab/flash-attention:
-flash_attention:
-flash_attn_cross_entropy: # Whether to use flash-attention cross entropy implementation - advanced use only
-flash_attn_rms_norm: # Whether to use flash-attention rms norm implementation - advanced use only
-flash_attn_fuse_qkv: # Whether to fuse QKV into a single operation
-flash_attn_fuse_mlp: # Whether to fuse part of the MLP into a single operation
-# Whether to use scaled-dot-product attention
-# https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html
-sdp_attention:
-# Shifted-sparse attention (only llama) - https://arxiv.org/pdf/2309.12307.pdf
-s2_attention:
-# Resume from a specific checkpoint dir
-resume_from_checkpoint:
-# If resume_from_checkpoint isn't set and you simply want it to start where it left off.
-# Be careful with this being turned on between different models.
-auto_resume_from_checkpoints: false
-
-# Don't mess with this, it's here for accelerate and torchrun
-local_rank:
-
-# Add or change special tokens.
-# If you add tokens here, you don't need to add them to the `tokens` list.
-special_tokens:
- # bos_token: ""
- # eos_token: ""
- # unk_token: ""
-
-# Add extra tokens.
-tokens:
-
-# FSDP
-fsdp:
-fsdp_config:
-
-# Deepspeed config path. e.g., deepspeed_configs/zero3.json
-deepspeed:
-
-# Advanced DDP Arguments
-ddp_timeout:
-ddp_bucket_cap_mb:
-ddp_broadcast_buffers:
-
-# Path to torch distx for optim 'adamw_anyprecision'
-torchdistx_path:
-
-# Set to HF dataset for type: 'completion' for streaming instead of pre-tokenize
-pretraining_dataset:
-
-# Debug mode
-debug:
-
-# Seed
-seed:
-
-# Allow overwrite yml config using from cli
-strict:
-```
-
-
+See [these docs](docs/config.qmd) for all config options.
Understanding of batch size and gradient accumulation steps
diff --git a/_quarto.yml b/_quarto.yml
index 31aa90398e..749f68cce6 100644
--- a/_quarto.yml
+++ b/_quarto.yml
@@ -30,20 +30,20 @@ website:
# TODO Edit folder structure after we have more docs.
- docs/debugging.qmd
- docs/multipack.qmd
- - docs/fdsp_qlora.qmd
+ - docs/fsdp_qlora.qmd
- docs/input_output.qmd
- docs/rlhf.qmd
- docs/nccl.qmd
- docs/mac.qmd
- docs/multi-node.qmd
+ - section: "Dataset Formats"
+ contents: docs/dataset-formats/*
- section: "Reference"
contents:
- docs/config.qmd
- docs/faq.qmd
-
-
format:
html:
theme: materia
diff --git a/docs/config.qmd b/docs/config.qmd
index d93b170e7b..e2ea778603 100644
--- a/docs/config.qmd
+++ b/docs/config.qmd
@@ -3,15 +3,443 @@ title: Config options
description: A complete list of all configuration options.
---
-```{python}
-#|echo: false
-#|output: asis
-import re
-# Regex pattern to match the YAML block including its code fence
-pattern = r']*id="all-yaml-options"[^>]*>.*?All yaml options.*?```yaml(.*?)```.*?'
-
-with open('../README.md', 'r') as f:
- doc = f.read()
-match = re.search(pattern, doc, re.DOTALL)
-print("```yaml", match.group(1).strip(), "```", sep="\n")
+```yaml
+# This is the huggingface model that contains *.pt, *.safetensors, or *.bin files
+# This can also be a relative path to a model on disk
+base_model: ./llama-7b-hf
+# You can specify an ignore pattern if the model repo contains more than 1 model type (*.pt, etc)
+base_model_ignore_patterns:
+# If the base_model repo on hf hub doesn't include configuration .json files,
+# You can set that here, or leave this empty to default to base_model
+base_model_config: ./llama-7b-hf
+# You can specify to choose a specific model revision from huggingface hub
+revision_of_model:
+# Optional tokenizer configuration path in case you want to use a different tokenizer
+# than the one defined in the base model
+tokenizer_config:
+# The model class to load; AutoModelForCausalLM is a good default choice
+model_type: AutoModelForCausalLM
+# The corresponding tokenizer class for the model; AutoTokenizer is a good default choice
+tokenizer_type: AutoTokenizer
+# Whether to trust remote code from an untrusted source
+trust_remote_code:
+# use_fast option for tokenizer loading from_pretrained, defaults to True
+tokenizer_use_fast:
+# Whether to use the legacy tokenizer setting, defaults to True
+tokenizer_legacy:
+# Resize the model embeddings when new tokens are added to multiples of 32
+# This is reported to improve training speed on some models
+resize_token_embeddings_to_32x:
+
+# (Internal use only)
+# Used to identify which model family the model is based on
+is_falcon_derived_model:
+is_llama_derived_model:
+is_qwen_derived_model:
+# Please note that if you set this to true, `padding_side` will be set to "left" by default
+is_mistral_derived_model:
+
+# optional overrides to the base model configuration
+overrides_of_model_config:
+ # RoPE Scaling https://github.com/huggingface/transformers/pull/24653
+ rope_scaling:
+ type: # linear | dynamic
+ factor: # float
+
+# optional overrides to the bnb 4bit quantization configuration
+# https://huggingface.co/docs/transformers/main/main_classes/quantization#transformers.BitsAndBytesConfig
+bnb_config_kwargs:
+ # These are default values
+ llm_int8_has_fp16_weight: false
+ bnb_4bit_quant_type: nf4
+ bnb_4bit_use_double_quant: true
+
+
+# Whether you are training a 4-bit GPTQ quantized model
+gptq: true
+
+# This will attempt to quantize the model down to 8 bits and use adam 8 bit optimizer
+load_in_8bit: true
+# Use bitsandbytes 4 bit
+load_in_4bit:
+
+# Use CUDA bf16
+bf16: true # bool or 'full' for `bf16_full_eval`. require >=ampere
+# Use CUDA fp16
+fp16: true
+# Use CUDA tf32
+tf32: true # require >=ampere
+
+# No AMP (automatic mixed precision)
+bfloat16: true # require >=ampere
+float16: true
+
+# Limit the memory for all available GPUs to this amount (if an integer, expressed in gigabytes); default: unset
+gpu_memory_limit: 20GiB
+# Do the LoRA/PEFT loading on CPU -- this is required if the base model is so large it takes up most or all of the available GPU VRAM, e.g. during a model and LoRA merge
+lora_on_cpu: true
+
+# A list of one or more datasets to finetune the model with
+datasets:
+ # HuggingFace dataset repo | s3://,gs:// path | "json" for local dataset, make sure to fill data_files
+ - path: vicgalle/alpaca-gpt4
+ # The type of prompt to use for training. [alpaca, sharegpt, gpteacher, oasst, reflection]
+    type: alpaca # format | format:<prompt_style> (chat/instruct) | <prompt_style>.load_<load_fn>
+ ds_type: # Optional[str] (json|arrow|parquet|text|csv) defines the datatype when path is a file
+ data_files: # Optional[str] path to source data files
+ shards: # Optional[int] number of shards to split data into
+ name: # Optional[str] name of dataset configuration to load
+ train_on_split: train # Optional[str] name of dataset split to load from
+
+ # Optional[str] fastchat conversation type, only used with type: sharegpt
+ conversation: # Options (see Conversation 'name'): https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py
+ field_human: # Optional[str]. Human key to use for conversation.
+ field_model: # Optional[str]. Assistant key to use for conversation.
+ # Add additional keys from your dataset as input or output roles
+ roles:
+ input: # Optional[List[str]]. These will be masked based on train_on_input
+ output: # Optional[List[str]].
+
+ # Custom user instruction prompt
+ - path: repo
+ type:
+ # The below are defaults. only set what's needed if you use a different column name.
+ system_prompt: ""
+ system_format: "{system}"
+ field_system: system
+ field_instruction: instruction
+ field_input: input
+ field_output: output
+
+ # Customizable to be single line or multi-line
+ # Use {instruction}/{input} as key to be replaced
+ # 'format' can include {input}
+ format: |-
+ User: {instruction} {input}
+ Assistant:
+ # 'no_input_format' cannot include {input}
+ no_input_format: "{instruction} "
+
+  # For `completion` datasets only, uses the provided field instead of `text` column
+ field:
+
+# If false, the datasets will not be shuffled and will keep their original order in `datasets`.
+# The same applies to the `test_datasets` option and the `pretraining_dataset` option. Default is true.
+shuffle_merged_datasets: true
+
+# A list of one or more datasets to eval the model with.
+# You can use either test_datasets, or val_set_size, but not both.
+test_datasets:
+ - path: /workspace/data/eval.jsonl
+ ds_type: json
+ # You need to specify a split. For "json" datasets the default split is called "train".
+ split: train
+ type: completion
+ data_files:
+ - /workspace/data/eval.jsonl
+
+# use RL training: 'dpo', 'ipo', 'kto_pair'
+rl:
+
+# Saves the desired chat template to the tokenizer_config.json for easier inferencing
+# Currently supports chatml and inst (mistral/mixtral)
+chat_template: chatml
+# Changes the default system message
+default_system_message: You are a helpful assistant. Please give a long and detailed answer. # Currently only supports chatml.
+# Axolotl attempts to save the dataset as an arrow after packing the data together so
+# subsequent training attempts load faster, relative path
+dataset_prepared_path: data/last_run_prepared
+# Push prepared dataset to hub
+push_dataset_to_hub: # repo path
+# The maximum number of processes to use while preprocessing your input dataset. This defaults to `os.cpu_count()`
+# if not set.
+dataset_processes: # defaults to os.cpu_count() if not set
+# Keep dataset in memory while preprocessing
+# Only needed if cached dataset is taking too much storage
+dataset_keep_in_memory:
+# push checkpoints to hub
+hub_model_id: # private repo path to push finetuned model
+# how to push checkpoints to hub
+# https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.hub_strategy
+hub_strategy:
+# Whether to use hf `use_auth_token` for loading datasets. Useful for fetching private datasets
+# Required to be true when used in combination with `push_dataset_to_hub`
+hf_use_auth_token: # boolean
+# How much of the dataset to set aside as evaluation. 1 = 100%, 0.50 = 50%, etc. 0 for no eval.
+val_set_size: 0.04
+# Num shards for whole dataset
+dataset_shard_num:
+# Index of shard to use for whole dataset
+dataset_shard_idx:
+
+# The maximum length of an input to train with, this should typically be less than 2048
+# as most models have a token/context limit of 2048
+sequence_len: 2048
+# Pad inputs so each step uses constant sized buffers
+# This will reduce memory fragmentation and may prevent OOMs, by re-using memory more efficiently
+pad_to_sequence_len:
+# Use efficient multi-packing with block diagonal attention and per sequence position_ids. Recommend set to 'true'
+sample_packing:
+# Set to 'false' if getting errors during eval with sample_packing on.
+eval_sample_packing:
+# You can set these packing optimizations AFTER starting a training at least once.
+# The trainer will provide recommended values for these values.
+sample_packing_eff_est:
+total_num_tokens:
+
+# Passed through to transformers when loading the model when launched without accelerate
+# Use `sequential` when training w/ model parallelism to limit memory
+device_map:
+# Defines the max memory usage per gpu on the system. Passed through to transformers when loading the model.
+max_memory:
+
+# If you want to use 'lora' or 'qlora' or leave blank to train all parameters in original model
+adapter: lora
+# If you already have a lora model trained that you want to load, put that here.
+# This means after training, if you want to test the model, you should set this to the value of `output_dir`.
+# Note that if you merge an adapter to the base model, a new subdirectory `merged` will be created under the `output_dir`.
+lora_model_dir:
+
+# LoRA hyperparameters
+# For more details about the following options, see:
+# https://www.anyscale.com/blog/fine-tuning-llms-lora-or-full-parameter-an-in-depth-analysis-with-llama-2
+lora_r: 8
+lora_alpha: 16
+lora_dropout: 0.05
+lora_target_modules:
+ - q_proj
+ - v_proj
+# - k_proj
+# - o_proj
+# - gate_proj
+# - down_proj
+# - up_proj
+lora_target_linear: # If true, will target all linear modules
+peft_layers_to_transform: # The layer indices to transform, otherwise, apply to all layers
+
+# If you added new tokens to the tokenizer, you may need to save some LoRA modules because they need to know the new tokens.
+# For LLaMA and Mistral, you need to save `embed_tokens` and `lm_head`. It may vary for other models.
+# `embed_tokens` converts tokens to embeddings, and `lm_head` converts embeddings to token probabilities.
+# https://github.com/huggingface/peft/issues/334#issuecomment-1561727994
+lora_modules_to_save:
+# - embed_tokens
+# - lm_head
+
+lora_fan_in_fan_out: false
+
+peft:
+ # Configuration options for loftq initialization for LoRA
+ # https://huggingface.co/docs/peft/developer_guides/quantization#loftq-initialization
+ loftq_config:
+ loftq_bits: # typically 4 bits
+
+# ReLoRA configuration
+# Must use either 'lora' or 'qlora' adapter, and does not support fsdp or deepspeed
+relora_steps: # Number of steps per ReLoRA restart
+relora_warmup_steps: # Number of per-restart warmup steps
+relora_anneal_steps: # Number of anneal steps for each relora cycle
+relora_prune_ratio: # threshold for optimizer magnitude when pruning
+relora_cpu_offload: # True to perform lora weight merges on cpu during restarts, for modest gpu memory savings
+
+# wandb configuration if you're using it
+# Make sure your `WANDB_API_KEY` environment variable is set (recommended) or you login to wandb with `wandb login`.
+wandb_mode: # "offline" to save run metadata locally and not sync to the server, "disabled" to turn off wandb
+wandb_project: # Your wandb project name
+wandb_entity: # A wandb Team name if using a Team
+wandb_watch:
+wandb_name: # Set the name of your wandb run
+wandb_run_id: # Set the ID of your wandb run
+wandb_log_model: # "checkpoint" to log model to wandb Artifacts every `save_steps` or "end" to log only at the end of training
+
+# mlflow configuration if you're using it
+mlflow_tracking_uri: # URI to mlflow
+mlflow_experiment_name: # Your experiment name
+hf_mlflow_log_artifacts: # set to true to copy each saved checkpoint on each save to mlflow artifact registry
+
+# Where to save the full-finetuned model to
+output_dir: ./completed-model
+
+# Whether to use torch.compile and which backend to use
+torch_compile: # bool
+torch_compile_backend: # Optional[str]
+
+# Training hyperparameters
+
+# If greater than 1, the optimizer step will be skipped and the gradients will be accumulated for the given number of steps.
+gradient_accumulation_steps: 1
+# The number of samples to include in each batch. This is the number of samples sent to each GPU.
+micro_batch_size: 2
+eval_batch_size:
+num_epochs: 4
+warmup_steps: 100 # cannot use with warmup_ratio
+warmup_ratio: 0.05 # cannot use with warmup_steps
+learning_rate: 0.00003
+lr_quadratic_warmup:
+logging_steps:
+eval_steps: # Leave empty to eval at each epoch, integers for every N steps. decimal for fraction of total steps
+evals_per_epoch: # number of times per epoch to run evals, mutually exclusive with eval_steps
+save_strategy: # Set to `no` to skip checkpoint saves
+save_steps: # Leave empty to save at each epoch
+saves_per_epoch: # number of times per epoch to save a checkpoint, mutually exclusive with save_steps
+save_total_limit: # Maximum number of checkpoints to keep at a time
+# Maximum number of iterations to train for. It takes precedence over num_epochs, which means that
+# if both are set, num_epochs will not be guaranteed.
+# e.g., when 1 epoch is 1000 steps => `num_epochs: 2` and `max_steps: 100` will train for 100 steps
+max_steps:
+
+eval_table_size: # Approximate number of predictions sent to wandb depending on batch size. Enabled above 0. Default is 0
+eval_max_new_tokens: # Total number of tokens generated for predictions sent to wandb. Default is 128
+eval_causal_lm_metrics: # HF evaluate metrics used during evaluation. Default is ["sacrebleu", "comet", "ter", "chrf"]
+
+loss_watchdog_threshold: # High loss value, indicating the learning has broken down (a good estimate is ~2 times the loss at the start of training)
+loss_watchdog_patience: # Number of high-loss steps in a row before the trainer aborts (default: 3)
+
+# Save model as safetensors (requires safetensors package)
+save_safetensors:
+
+# Whether to include the human's prompt in the training labels or mask it out
+train_on_inputs: false
+# Group similarly sized data to minimize padding.
+# May be slower to start, as it must download and sort the entire dataset.
+# Note that training loss may have an oscillating pattern with this enabled.
+group_by_length: false
+
+# Whether to use gradient checkpointing https://huggingface.co/docs/transformers/v4.18.0/en/performance#gradient-checkpointing
+gradient_checkpointing: false
+# additional kwargs to pass to the trainer for gradient checkpointing
+# gradient_checkpointing_kwargs:
+# use_reentrant: true
+
+# Stop training after this many evaluation losses have increased in a row
+# https://huggingface.co/transformers/v4.2.2/_modules/transformers/trainer_callback.html#EarlyStoppingCallback
+early_stopping_patience: 3
+
+# Specify a scheduler and kwargs to use with the optimizer
+lr_scheduler: # 'one_cycle' | 'log_sweep' | empty for cosine
+lr_scheduler_kwargs:
+cosine_min_lr_ratio: # decay lr to some percentage of the peak lr, e.g. cosine_min_lr_ratio=0.1 for 10% of peak lr
+cosine_constant_lr_ratio: # freeze lr at some percentage of the step, e.g. cosine_constant_lr_ratio=0.8 means start cosine_min_lr at 80% of training step (https://arxiv.org/pdf/2308.04014.pdf)
+
+# For one_cycle optim
+lr_div_factor: # Learning rate div factor
+
+# Specify optimizer
+# Valid values are driven by the Transformers OptimizerNames class, see:
+# https://github.com/huggingface/transformers/blob/95b374952dc27d8511541d6f5a4e22c9ec11fb24/src/transformers/training_args.py#L134
+#
+# Note that not all optimizers may be available in your environment, ex: 'adamw_anyprecision' is part of
+# torchdistx, 'adamw_bnb_8bit' is part of bnb.optim.Adam8bit, etc. When in doubt, it is recommended to start with the optimizer used
+# in the examples/ for your model and fine-tuning use case.
+#
+# Valid values for 'optimizer' include:
+# - adamw_hf
+# - adamw_torch
+# - adamw_torch_fused
+# - adamw_torch_xla
+# - adamw_apex_fused
+# - adafactor
+# - adamw_anyprecision
+# - sgd
+# - adagrad
+# - adamw_bnb_8bit
+# - lion_8bit
+# - lion_32bit
+# - paged_adamw_32bit
+# - paged_adamw_8bit
+# - paged_lion_32bit
+# - paged_lion_8bit
+# - galore_adamw
+# - galore_adamw_8bit
+# - galore_adafactor
+# - galore_adamw_layerwise
+# - galore_adamw_8bit_layerwise
+# - galore_adafactor_layerwise
+optimizer:
+# Dictionary of arguments to pass to the optimizer
+optim_args:
+# For Galore Optimizers the following optim_args are available
+# rank: # type: int
+# update_proj_gap: # type: int
+# scale: # type: float
+# proj_type: # type: str, default = std
+
+# The target modules to optimize, i.e. the module names that you would like to train, right now this is used only for GaLore algorithm
+optim_target_modules:
+# - self_attn # for llama
+# - mlp
+
+# Specify weight decay
+weight_decay:
+# adamw hyperparams
+adam_beta1:
+adam_beta2:
+adam_epsilon:
+# Gradient clipping max norm
+max_grad_norm:
+
+# Augmentation techniques
+# NEFT https://arxiv.org/abs/2310.05914, set this to a number (paper default is 5) to add noise to embeddings
+# currently only supported on Llama and Mistral
+neftune_noise_alpha:
+
+# Whether to use BetterTransformers
+flash_optimum:
+# Whether to use xformers attention patch https://github.com/facebookresearch/xformers:
+xformers_attention:
+# Whether to use flash attention patch https://github.com/Dao-AILab/flash-attention:
+flash_attention:
+flash_attn_cross_entropy: # Whether to use flash-attention cross entropy implementation - advanced use only
+flash_attn_rms_norm: # Whether to use flash-attention rms norm implementation - advanced use only
+flash_attn_fuse_qkv: # Whether to fuse QKV into a single operation
+flash_attn_fuse_mlp: # Whether to fuse part of the MLP into a single operation
+# Whether to use scaled-dot-product attention
+# https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html
+sdp_attention:
+# Shifted-sparse attention (only llama) - https://arxiv.org/pdf/2309.12307.pdf
+s2_attention:
+# Resume from a specific checkpoint dir
+resume_from_checkpoint:
+# Set to true if resume_from_checkpoint isn't set and you simply want training to resume where it left off.
+# Be careful with this being turned on between different models.
+auto_resume_from_checkpoints: false
+
+# Don't mess with this, it's here for accelerate and torchrun
+local_rank:
+
+# Add or change special tokens.
+# If you add tokens here, you don't need to add them to the `tokens` list.
+special_tokens:
+ # bos_token: ""
+ # eos_token: ""
+ # unk_token: ""
+
+# Add extra tokens.
+tokens:
+
+# FSDP
+fsdp:
+fsdp_config:
+
+# Deepspeed config path. e.g., deepspeed_configs/zero3.json
+deepspeed:
+
+# Advanced DDP Arguments
+ddp_timeout:
+ddp_bucket_cap_mb:
+ddp_broadcast_buffers:
+
+# Path to torch distx for optim 'adamw_anyprecision'
+torchdistx_path:
+
+# Set to HF dataset for type: 'completion' for streaming instead of pre-tokenize
+pretraining_dataset:
+
+# Debug mode
+debug:
+
+# Seed
+seed:
+
+# Allow overwrite yml config using from cli
+strict:
```
diff --git a/docs/dataset-formats/conversation.qmd b/docs/dataset-formats/conversation.qmd
new file mode 100644
index 0000000000..9e69df4927
--- /dev/null
+++ b/docs/dataset-formats/conversation.qmd
@@ -0,0 +1,71 @@
+---
+title: Conversation
+description: Conversation format for supervised fine-tuning.
+order: 1
+---
+
+## Formats
+
+### sharegpt
+
+conversations where `from` is `human`/`gpt`. (optional: first row with role `system` to override default system prompt)
+
+```{.json filename="data.jsonl"}
+{"conversations": [{"from": "...", "value": "..."}]}
+```
+
+Note: `type: sharegpt` enables a special `conversation:` config option that supports conversion to many Conversation types. See [the config docs](../config.qmd) for all config options.
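+
+As an illustration, a dataset entry using sharegpt might look like the following sketch; the dataset path is a placeholder and `conversation: chatml` is shown purely as an example of a FastChat conversation name:
+
+```{.yaml filename="config.yaml"}
+datasets:
+  - path: your-org/sharegpt-style-data  # placeholder dataset repo
+    type: sharegpt
+    conversation: chatml  # optional FastChat conversation template name
+```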
+
+### pygmalion
+
+```{.json filename="data.jsonl"}
+{"conversations": [{"role": "...", "value": "..."}]}
+```
+
+### sharegpt.load_role
+
+conversations where `role` is used instead of `from`
+
+```{.json filename="data.jsonl"}
+{"conversations": [{"role": "...", "value": "..."}]}
+```
+
+### sharegpt.load_guanaco
+
+conversations where `from` is `prompter`/`assistant` instead of default sharegpt
+
+```{.json filename="data.jsonl"}
+{"conversations": [{"from": "...", "value": "..."}]}
+```
+
+### sharegpt_jokes
+
+creates a chat where bot is asked to tell a joke, then explain why the joke is funny
+
+```{.json filename="data.jsonl"}
+{"conversations": [{"title": "...", "text": "...", "explanation": "..."}]}
+```
+
+## How to add custom prompts for instruction-tuning
+
+For a dataset that is preprocessed for instruction purposes:
+
+```{.json filename="data.jsonl"}
+{"input": "...", "output": "..."}
+```
+
+You can use this example in your YAML config:
+
+```{.yaml filename="config.yaml"}
+datasets:
+ - path: repo
+ type:
+ system_prompt: ""
+ field_system: system
+ field_instruction: input
+ field_output: output
+ format: "[INST] {instruction} [/INST]"
+ no_input_format: "[INST] {instruction} [/INST]"
+```
+
+See the full list of config options [here](../config.qmd).
diff --git a/docs/dataset-formats/index.qmd b/docs/dataset-formats/index.qmd
new file mode 100644
index 0000000000..91873a4c19
--- /dev/null
+++ b/docs/dataset-formats/index.qmd
@@ -0,0 +1,14 @@
+---
+title: Dataset Formats
+description: Supported dataset formats.
+listing:
+ fields: [title, description]
+ type: table
+ sort-ui: false
+ filter-ui: false
+ max-description-length: 250
+---
+
+Axolotl supports a variety of dataset formats. It is recommended to use a JSONL format. The schema of the JSONL depends upon the task and the prompt template you wish to use. Instead of a JSONL, you can also use a HuggingFace dataset with columns for each JSONL field.
+
+Below are these various formats organized by task:
diff --git a/docs/dataset-formats/inst_tune.qmd b/docs/dataset-formats/inst_tune.qmd
new file mode 100644
index 0000000000..cc8cd16f30
--- /dev/null
+++ b/docs/dataset-formats/inst_tune.qmd
@@ -0,0 +1,165 @@
+---
+title: Instruction Tuning
+description: Instruction tuning formats for supervised fine-tuning.
+order: 2
+---
+
+## alpaca
+
+instruction; input(optional)
+
+```{.json filename="data.jsonl"}
+{"instruction": "...", "input": "...", "output": "..."}
+```
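+
+A minimal sketch of a config entry for an alpaca-format dataset (the HF dataset path is only an example):
+
+```{.yaml filename="config.yaml"}
+datasets:
+  - path: vicgalle/alpaca-gpt4  # any dataset in alpaca format
+    type: alpaca
+```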
+
+## jeopardy
+
+question and answer
+
+```{.json filename="data.jsonl"}
+{"question": "...", "category": "...", "answer": "..."}
+```
+
+## oasst
+
+instruction
+
+```{.json filename="data.jsonl"}
+{"INSTRUCTION": "...", "RESPONSE": "..."}
+```
+
+## gpteacher
+
+instruction; input(optional)
+
+```{.json filename="data.jsonl"}
+{"instruction": "...", "input": "...", "response": "..."}
+```
+
+## reflection
+
+instruction with reflect; input(optional)
+
+```{.json filename="data.jsonl"}
+{"instruction": "...", "input": "...", "output": "...", "reflection": "...", "corrected": "..."}
+```
+
+## explainchoice
+
+question, choices, (solution OR explanation)
+
+```{.json filename="data.jsonl"}
+{"question": "...", "choices": ["..."], "solution": "...", "explanation": "..."}
+```
+
+## concisechoice
+
+question, choices, (solution OR explanation)
+
+```{.json filename="data.jsonl"}
+{"question": "...", "choices": ["..."], "solution": "...", "explanation": "..."}
+```
+
+## summarizetldr
+
+article and summary
+
+```{.json filename="data.jsonl"}
+{"article": "...", "summary": "..."}
+```
+
+## alpaca_chat
+
+basic instruct for alpaca chat
+
+```{.json filename="data.jsonl"}
+{"instruction": "...", "input": "...", "response": "..."}
+```
+
+## alpaca_chat.load_qa
+
+question and answer for alpaca chat
+
+```{.json filename="data.jsonl"}
+{"question": "...", "answer": "..."}
+```
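+
+Dotted names like this one are used verbatim as the `type:` in the config; a sketch with an illustrative local file path:
+
+```{.yaml filename="config.yaml"}
+datasets:
+  - path: data/qa.jsonl  # illustrative local JSONL file
+    ds_type: json
+    type: alpaca_chat.load_qa
+```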
+
+## alpaca_chat.load_concise
+
+question and answer for alpaca chat, for concise answers
+
+```{.json filename="data.jsonl"}
+{"instruction": "...", "input": "...", "response": "..."}
+```
+
+## alpaca_chat.load_camel_ai
+
+question and answer for alpaca chat, for load_camel_ai
+
+```{.json filename="data.jsonl"}
+{"message_1": "...", "message_2": "..."}
+```
+
+## alpaca_w_system.load_open_orca
+
+support for open orca datasets with included system prompts, instruct
+
+```{.json filename="data.jsonl"}
+{"system_prompt": "...", "question": "...", "response": "..."}
+```
+
+## context_qa
+
+in context question answering from an article
+
+```{.json filename="data.jsonl"}
+{"article": "...", "question": "...", "answer": "..."}
+```
+
+## context_qa.load_v2
+
+in context question answering (alternate)
+
+```{.json filename="data.jsonl"}
+{"context": "...", "question": "...", "answer": "..."}
+```
+
+## context_qa.load_404
+
+in context question answering from an article, with default response for no answer from context
+
+```{.json filename="data.jsonl"}
+{"article": "...", "unanswerable_question": "..."}
+```
+
+## creative_acr.load_answer
+
+instruction and revision
+
+```{.json filename="data.jsonl"}
+{"instruction": "...", "revision": "..."}
+```
+
+## creative_acr.load_critique
+
+critique
+
+```{.json filename="data.jsonl"}
+{"scores": "...", "critiques": "...", "instruction": "...", "answer": "..."}
+```
+
+## creative_acr.load_revise
+
+critique and revise
+
+```{.json filename="data.jsonl"}
+{"scores": "...", "critiques": "...", "instruction": "...", "answer": "...", "revision": "..."}
+```
+
+## metharme
+
+instruction, adds additional eos tokens
+
+```{.json filename="data.jsonl"}
+{"prompt": "...", "generation": "..."}
+```
diff --git a/docs/dataset-formats/pretraining.qmd b/docs/dataset-formats/pretraining.qmd
new file mode 100644
index 0000000000..7e7257205a
--- /dev/null
+++ b/docs/dataset-formats/pretraining.qmd
@@ -0,0 +1,26 @@
+---
+title: Pre-training
+description: Data format for a pre-training completion task.
+order: 3
+---
+
+For pretraining, there are no prompt templates or roles. The only required field is `text`:
+
+```{.json filename="data.jsonl"}
+{"text": "first row"}
+{"text": "second row"}
+...
+```
+
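+Without streaming, such a corpus can be referenced as a regular dataset entry using the `completion` type; the path below is illustrative:
+
+```{.yaml filename="config.yaml"}
+datasets:
+  - path: data/corpus.jsonl  # illustrative local JSONL file
+    ds_type: json
+    type: completion
+```
+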
+:::{.callout-note}
+
+### Streaming is recommended for large datasets
+
+Axolotl usually loads the entire dataset into memory. This will be challenging for large datasets. Use the following config to enable streaming:
+
+```{.yaml filename="config.yaml"}
+pretraining_dataset: # hf path only
+...
+```
+
+:::
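+
+With streaming, the dataset must be a HuggingFace Hub path; the dataset name below is only illustrative:
+
+```{.yaml filename="config.yaml"}
+pretraining_dataset: togethercomputer/RedPajama-Data-1T-Sample
+```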
diff --git a/docs/dataset-formats/template_free.qmd b/docs/dataset-formats/template_free.qmd
new file mode 100644
index 0000000000..5087d6a013
--- /dev/null
+++ b/docs/dataset-formats/template_free.qmd
@@ -0,0 +1,7 @@
+---
+title: Template-Free
+description: Construct prompts without a template.
+order: 4
+---
+
+See [these docs](../input_output.qmd).
diff --git a/docs/dataset-formats/tokenized.qmd b/docs/dataset-formats/tokenized.qmd
new file mode 100644
index 0000000000..8991a21109
--- /dev/null
+++ b/docs/dataset-formats/tokenized.qmd
@@ -0,0 +1,12 @@
+---
+title: Custom Pre-Tokenized Dataset
+description: How to use a custom pre-tokenized dataset.
+order: 5
+---
+
+- Do not pass a `type:` in your axolotl config.
+- Columns in the dataset must be exactly `input_ids`, `attention_mask`, `labels`
+
+```{.yaml filename="config.yaml"}
+datasets:
+  - path: ...
+```
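+
+For example, a (hypothetical) pre-tokenized dataset pushed to the HuggingFace Hub could be referenced as:
+
+```{.yaml filename="config.yaml"}
+datasets:
+  - path: your-org/pretokenized-data  # hypothetical repo with input_ids, attention_mask, labels columns
+```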
diff --git a/docs/fsdp_qlora.qmd b/docs/fsdp_qlora.qmd
index 69b4ad4454..7f12d44935 100644
--- a/docs/fsdp_qlora.qmd
+++ b/docs/fsdp_qlora.qmd
@@ -1,5 +1,5 @@
---
-title: FDSP + QLoRA
+title: "FSDP + QLoRA"
description: Use FSDP with QLoRA to fine-tune large LLMs on consumer GPUs.
format:
html:
diff --git a/docs/input_output.qmd b/docs/input_output.qmd
index 4e2ea1345f..6261f23895 100644
--- a/docs/input_output.qmd
+++ b/docs/input_output.qmd
@@ -91,8 +91,9 @@ format into a jsonl file (below is the first row from the file
```bash
$ head -n1 output.jsonl | python -m json.tool
+```
-{.cell-output .cell-output-stdout}
+:::{.cell-output .cell-output-stdout}
{
"segments": [
{
@@ -113,7 +114,7 @@ $ head -n1 output.jsonl | python -m json.tool
}
]
}
-```
+:::
Set `label:false` when you want to mask a segment of text so that the
model isn't trained on it. Some things to keep in mind:
@@ -238,8 +239,9 @@ version is repeated below for reference):
```bash
$ head -n1 output.jsonl | python -m json.tool
+```
-{.cell-output .cell-output-stdout}
+:::{.cell-output .cell-output-stdout}
{
"segments": [
{
@@ -260,4 +262,4 @@ $ head -n1 output.jsonl | python -m json.tool
}
]
}
-```
+:::