diff --git a/examples/colab-notebooks/colab-axolotl-example.ipynb b/examples/colab-notebooks/colab-axolotl-example.ipynb index 0acaf7961c..bfd1709a7f 100644 --- a/examples/colab-notebooks/colab-axolotl-example.ipynb +++ b/examples/colab-notebooks/colab-axolotl-example.ipynb @@ -2,19 +2,15 @@ "cells": [ { "cell_type": "markdown", - "metadata": { - "id": "AKjdG7tbTb-n" - }, + "metadata": {}, "source": [ - "# Example notebook for running Axolotl on google colab" + "## Setting up" ] }, { "cell_type": "code", "execution_count": null, - "metadata": { - "id": "RcbNpOgWRcii" - }, + "metadata": {}, "outputs": [], "source": [ "import torch\n", @@ -22,82 +18,76 @@ "assert (torch.cuda.is_available()==True)" ] }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "!pip install axolotl[deepspeed]" + ] + }, { "cell_type": "markdown", - "metadata": { - "id": "h3nLav8oTRA5" - }, + "metadata": {}, "source": [ - "## Install Axolotl and dependencies" + "## Hugging Face login (optional)" ] }, { "cell_type": "code", "execution_count": null, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "3c3yGAwnOIdi", - "outputId": "e3777b5a-40ef-424f-e181-62dfecd1dd01" - }, + "metadata": {}, "outputs": [], "source": [ - "!pip install -e git+https://github.com/axolotl-ai-cloud/axolotl#egg=axolotl\n", - "!pip install flash-attn==\"2.7.0.post2\"\n", - "!pip install deepspeed==\"0.13.1\"!pip install mlflow==\"2.13.0\"" + "from huggingface_hub import notebook_login\n", + "notebook_login()" ] }, { "cell_type": "markdown", - "metadata": { - "id": "BW2MFr7HTjub" - }, + "metadata": {}, "source": [ - "## Create an yaml config file" + "## Example configuration" ] }, { "cell_type": "code", "execution_count": null, - "metadata": { - "id": "9pkF2dSoQEUN" - }, + "metadata": {}, "outputs": [], "source": [ "import yaml\n", "\n", - "# Your YAML string\n", "yaml_string = \"\"\"\n", - "base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T\n", - "model_type: LlamaForCausalLM\n", - "tokenizer_type: LlamaTokenizer\n", + "base_model: NousResearch/Meta-Llama-3.1-8B\n", "\n", "load_in_8bit: false\n", "load_in_4bit: true\n", "strict: false\n", "\n", "datasets:\n", - " - path: mhenrichsen/alpaca_2k_test\n", + " - path: tatsu-lab/alpaca\n", " type: alpaca\n", - "dataset_prepared_path:\n", + "dataset_prepared_path: last_run_prepared\n", "val_set_size: 0.05\n", - "output_dir: ./outputs/qlora-out\n", - "\n", - "adapter: qlora\n", - "lora_model_dir:\n", + "output_dir: ./outputs/lora-out\n", "\n", - "sequence_len: 4096\n", + "sequence_len: 2048\n", "sample_packing: true\n", - "eval_sample_packing: false\n", + "eval_sample_packing: true\n", "pad_to_sequence_len: true\n", "\n", + "adapter: qlora\n", + "lora_model_dir:\n", "lora_r: 32\n", "lora_alpha: 16\n", "lora_dropout: 0.05\n", - "lora_target_modules:\n", "lora_target_linear: true\n", "lora_fan_in_fan_out:\n", + "lora_modules_to_save:\n", + " - embed_tokens\n", + " - lm_head\n", "\n", "wandb_project:\n", "wandb_entity:\n", @@ -105,12 +95,12 @@ "wandb_name:\n", "wandb_log_model:\n", "\n", - "gradient_accumulation_steps: 4\n", - "micro_batch_size: 2\n", - "num_epochs: 4\n", - "optimizer: paged_adamw_32bit\n", + "gradient_accumulation_steps: 2\n", + "micro_batch_size: 1\n", + "num_epochs: 1\n", + "optimizer: paged_adamw_8bit\n", "lr_scheduler: cosine\n", - "learning_rate: 0.0002\n", + "learning_rate: 2e-5\n", "\n", "train_on_inputs: false\n", "group_by_length: false\n", @@ -121,13 +111,15 @@ 
"gradient_checkpointing: true\n", "early_stopping_patience:\n", "resume_from_checkpoint:\n", - "local_rank:\n", "logging_steps: 1\n", "xformers_attention:\n", - "flash_attention: true\n", + "flash_attention: false\n", + "sdp_attention: true\n", "\n", - "warmup_steps: 10\n", - "evals_per_epoch: 4\n", + "warmup_steps: 1\n", + "max_steps: 25\n", + "evals_per_epoch: 1\n", + "eval_table_size:\n", "saves_per_epoch: 1\n", "debug:\n", "deepspeed:\n", @@ -135,9 +127,10 @@ "fsdp:\n", "fsdp_config:\n", "special_tokens:\n", - "\n", + " pad_token: <|end_of_text|>\n", "\"\"\"\n", "\n", + "\n", "# Convert the YAML string to a Python dictionary\n", "yaml_dict = yaml.safe_load(yaml_string)\n", "\n", @@ -146,31 +139,124 @@ "\n", "# Write the YAML file\n", "with open(file_path, 'w') as file:\n", - " yaml.dump(yaml_dict, file)\n" + " yaml.dump(yaml_dict, file)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Above we have a configuration file with base LLM model and datasets specified, among many other things. Axolotl can automatically detect whether the specified datasets are on HuggingFace repo or local machine.\n", + "\n", + "The Axolotl configuration options encompass model and dataset selection, data pre-processing, and training. Let's go through them line by line:\n", + "\n", + "* \"base model\": String value, specifies the underlying pre-trained LLM that will be used for finetuning\n", + "\n", + "Next we have options for model weights quantization. Quantization allows for reduction in occupied memory on GPUs.\n", + "\n", + "* \"load_in_8bit\": Boolean value, whether to quantize the model weights into 8-bit integer.\n", + "\n", + "* \"load_in_4bit\": Boolean value, whether to quantize the model weights into 4-bit integer.\n", + "\n", + "* \"strict\": Boolean value. If false, it allows for overriding established configuration options in the yaml file when executing in command-line interface.\n", + "\n", + "* \"datasets\": a list of dicts that contain path and type of data sets as well as other optional configurations where datasets are concerned. Supports multiple datasets.\n", + "\n", + "* \"val_set_size\": Either a float value less than one or an integer less than the total size of dataset. Sets the size of validation set from the whole dataset. If float, sets the proportion of the dataset assigned for validation. If integer, sets the direct size of validation set.\n", + "\n", + "* \"output_dir\": String value. Path of trained model.\n", + "\n", + "For data preprocessing:\n", + "\n", + "* \"sequence_len\": Integer. Specifies the maximum sequence length of the input. Typically 2048 or less.\n", + "\n", + "* \"pad_to_sequence_len\": Boolean. Padding input to maximum sequence length.\n", + "\n", + "* \"sample_packing\": Boolean. Specifies whether to use multi-packing with block diagonal attention.\n", + "\n", + "* \"special_tokens\": Python dict, optional. Allows users to specify the additional special tokens to be ignored by the tokenizer.\n", + "\n", + "For LoRA configuration and its hyperparamters:\n", + "\n", + "* \"adapter\": String. Either \"lora\" or \"qlora\", depending on user's choice.\n", + "\n", + "* \"lora_model_dir\": String, Optional. Path to directory that contains LoRA model, if there is already a trained LoRA model the user would like to use.\n", + "\n", + "* \"lora_r\": Integer. Refers to the rank of LoRA decomposition matrices. Higher value will reduce LoRA efficiency. Recommended to be set to 8.\n", + "\n", + "* \"lora_alpha\": Integer. 
+ "* \"lora_alpha\": Integer. Scales the LoRA weight updates by $\\frac{\\text{lora_alpha}}{\\text{lora_r}}$. Recommended to be fixed at 16.\n", + "\n", + "* \"lora_dropout\": Float between 0 and 1. The dropout probability applied to the LoRA layers.\n", + "\n", + "* \"lora_target_linear\": Boolean. If true, LoRA will target all linear modules in the transformer architecture.\n", + "\n", + "* \"lora_modules_to_save\": If new tokens were added to the tokenizer, the modules listed here (e.g. embed_tokens and lm_head) need to be saved alongside the adapter so that they learn the new tokens.\n", + "\n", + "See [LoRA](https://arxiv.org/abs/2106.09685) for a detailed explanation of the LoRA method.\n", + "\n", + "For the training configurations:\n", + "\n", + "* \"gradient_accumulation_steps\": Integer. The number of steps over which to accumulate gradients. E.g. if 2, gradients from two micro-batches are accumulated before each optimizer update.\n", + "\n", + "* \"micro_batch_size\": Integer. Batch size per GPU for each step; the effective batch size is micro_batch_size * gradient_accumulation_steps * number of GPUs (see the optional sanity-check cell below).\n", + "\n", + "* \"num_epochs\": Integer. Number of epochs. One epoch is one full pass over the training dataset.\n", + "\n", + "* \"optimizer\": The optimizer to use for the training.\n", + "\n", + "* \"learning_rate\": The learning rate.\n", + "\n", + "* \"lr_scheduler\": The learning rate scheduler to use for adjusting the learning rate during training.\n", + "\n", + "* \"train_on_inputs\": Boolean. Whether to include the prompt tokens in the training labels (if false, they are masked out of the loss).\n", + "\n", + "* \"group_by_length\": Boolean. Whether to group similarly sized samples together to minimize padding.\n", + "\n", + "* \"bf16\": Either \"auto\", \"true\", or \"false\". Whether to use the CUDA bf16 floating point format. If set to \"auto\", bf16 is applied automatically if the GPU supports it.\n", + "\n", + "* \"fp16\": Optional. Specifies whether to use CUDA fp16. Usually left unset; Axolotl falls back to fp16 when bf16 is requested but not supported by the GPU.\n", + "\n", + "* \"tf32\": Boolean. Whether to use CUDA tf32. Will override bf16.\n", + "\n", + "* \"gradient_checkpointing\": Boolean. Whether to use [gradient checkpointing](https://huggingface.co/docs/transformers/v4.18.0/en/performance#gradient-checkpointing) to trade compute for memory.\n", + "\n", + "* \"gradient_checkpointing_kwargs\": Python dict. Passed through to the trainer.\n", + "\n", + "* \"logging_steps\": Integer. Log training information every this many steps.\n", + "\n", + "* \"flash_attention\": Boolean. Whether to use the [flash attention](https://github.com/Dao-AILab/flash-attention) mechanism.\n", + "\n", + "* \"sdp_attention\": Boolean. Whether to use PyTorch's scaled dot-product attention (the attention mechanism from the [original Transformer paper](https://arxiv.org/abs/1706.03762)).\n", + "\n", + "* \"warmup_steps\": Integer. The number of initial training steps during which the learning rate ramps up from a very low value.\n", + "\n", + "* \"evals_per_epoch\": Integer. Number of evaluations to perform within one training epoch.\n", + "\n", + "* \"saves_per_epoch\": Integer. Number of checkpoints saved within one training epoch.\n", + "\n", + "* \"weight_decay\": Positive float. Sets the \"strength\" of weight decay (i.e. the coefficient of the L2 regularization term)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The above is only a snippet intended to familiarize users with the streamlined configuration options Axolotl provides. For a full list of configuration options, see [here](https://axolotl-ai-cloud.github.io/axolotl/docs/config.html)." ] },
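+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As a quick, optional sanity check of how the batch-size options above interact, the illustrative cell below (not part of the original example) reloads the config we just wrote and prints the effective batch size, i.e. micro_batch_size * gradient_accumulation_steps * number of GPUs. The single-GPU count is an assumption matching a typical Colab runtime." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import yaml\n", + "\n", + "# Reload the config written above and compute the effective batch size\n", + "with open('/content/test_axolotl.yaml') as f:\n", + "    cfg = yaml.safe_load(f)\n", + "\n", + "num_gpus = 1  # assumption: a single-GPU Colab runtime\n", + "effective_batch = cfg['micro_batch_size'] * cfg['gradient_accumulation_steps'] * num_gpus\n", + "print('effective batch size:', effective_batch)" + ] + },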
{ "cell_type": "markdown", - "metadata": { - "id": "bidoj8YLTusD" - }, + "metadata": {}, "source": [ - "## Launch the training" + "## Train the model" ] }, { "cell_type": "code", "execution_count": null, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "ydTI2Jk2RStU", - "outputId": "d6d0df17-4b53-439c-c802-22c0456d301b" - }, + "metadata": {}, "outputs": [], "source": [ - "# By using the ! the comand will be executed as a bash command\n", "!accelerate launch -m axolotl.cli.train /content/test_axolotl.yaml" ] }, @@ -178,7 +264,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Play with inference" + "## Predict with the trained model" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ - "# By using the ! the comand will be executed as a bash command\n", "!accelerate launch -m axolotl.cli.inference /content/test_axolotl.yaml \\\n", - " --qlora_model_dir=\"./qlora-out\" --gradio" + " --lora_model_dir=\"./outputs/lora-out\" --gradio" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Deeper Dive" ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "It is also helpful to gain some familiarity with some of Axolotl's core inner workings." ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Configuration Normalization" ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Axolotl uses a custom dict class called ```DictDefault``` to store the configuration specified in the yaml file (in a Python variable named ```cfg```). The definition of this custom dict can be found in [utils/dict.py](https://github.com/axolotl-ai-cloud/axolotl/blob/main/src/axolotl/utils/dict.py).\n", + "\n", + "```DictDefault``` is defined such that accessing a missing key returns ```None``` instead of raising a ```KeyError```. This matters because, when the user leaves configuration options unspecified, the ```None``` values let Axolotl use simple boolean logic to fill in default settings for the missing options. For more examples of how this is done, check out [utils/config/__init__.py](https://github.com/axolotl-ai-cloud/axolotl/blob/main/src/axolotl/utils/config/__init__.py). A small illustration follows in the next cell." ] + },
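+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As a minimal illustration of this behaviour (an optional sketch, assuming the axolotl package installed above exposes the ```DictDefault``` class documented in utils/dict.py), the cell below builds a tiny config and shows that a key we never set comes back as ```None``` rather than raising a ```KeyError```." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Minimal sketch of DictDefault behaviour: missing keys return None\n", + "from axolotl.utils.dict import DictDefault\n", + "\n", + "cfg = DictDefault({'base_model': 'NousResearch/Meta-Llama-3.1-8B'})\n", + "\n", + "print(cfg.base_model)       # set above -> the model name\n", + "print(cfg.flash_attention)  # never set -> None, not a KeyError\n", + "print(cfg.flash_attention or 'falling back to a default')" + ] + },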
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Loading Models, Tokenizers, and Trainer" ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If we inspect [cli.train.py](https://github.com/axolotl-ai-cloud/axolotl/blob/main/src/axolotl/cli/train.py), we will find that most of the heavy lifting is done by the function ```train()```, which is itself imported from [src/axolotl/train.py](https://github.com/axolotl-ai-cloud/axolotl/blob/main/src/axolotl/train.py).\n", + "\n", + "```train()``` takes care of loading the appropriate tokenizer and pre-trained model through ```load_tokenizer()``` and ```load_model()``` from [src/axolotl/utils/models.py](https://github.com/axolotl-ai-cloud/axolotl/blob/main/src/axolotl/utils/models.py) respectively.\n", + "\n", + "```load_tokenizer()``` loads the appropriate tokenizer for the chosen model, along with its chat template.\n", + "\n", + "The ```ModelLoader``` class takes over once the tokenizer has been loaded. It automatically discerns the base model type, loads the desired model, and applies model-appropriate attention modifications (e.g. flash attention). Depending on which base model the user chooses in the configuration, ```ModelLoader``` will utilize the corresponding \"attention hijacking\" script. For example, if the user specifies the base model ```NousResearch/Meta-Llama-3.1-8B```, which is of llama type, and sets ```flash_attention``` to ```true```, ```ModelLoader``` will load [llama_attn_hijack_flash.py](https://github.com/axolotl-ai-cloud/axolotl/blob/main/src/axolotl/monkeypatch/llama_attn_hijack_flash.py). For a list of supported attention-hijacking patches, please refer to the directory [/src/axolotl/monkeypatch/](https://github.com/axolotl-ai-cloud/axolotl/tree/main/src/axolotl/monkeypatch).\n", + "\n", + "Another important operation in ```train()``` is setting up the trainer, taking into account the user-specified training configuration (e.g. num_epochs, optimizer), through ```setup_trainer()``` from [/src/axolotl/utils/trainer.py](https://github.com/axolotl-ai-cloud/axolotl/blob/main/src/axolotl/utils/trainer.py), which in turn relies on modules from [/src/axolotl/core/trainer_builder.py](https://github.com/axolotl-ai-cloud/axolotl/blob/main/src/axolotl/core/trainer_builder.py).\n", + "```trainer_builder.py``` provides trainer classes tailored to the task type (causal language modeling, or preference tuning such as 'dpo', 'ipo', and 'kto')." ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Monkey patch\n", + "\n", + "The [Monkey patch directory](https://github.com/axolotl-ai-cloud/axolotl/tree/main/src/axolotl/monkeypatch) is where model architecture/optimization patching scripts are stored (these are modifications that are not implemented in the official upstream releases, hence the name monkey patch). It includes attention hijacking, ReLoRA, and Unsloth optimizations." ] } ], "metadata": { - "accelerator": "GPU", - "colab": { - "gpuType": "T4", - "provenance": [] - }, "kernelspec": { - "display_name": "Python 3 (ipykernel)", + "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.12.1" + "version": "3.9.6" } }, "nbformat": 4, - "nbformat_minor": 4 + "nbformat_minor": 2 }