This repository contains the code for developing, pretraining, and finetuning a GPT-like LLM.
You'll learn and understand how large language models (LLMs) work from the inside out by coding them from the ground up, step by step. I'll guide you through creating your own LLM, explaining each stage with clear text, diagrams, and examples.
The method described here for developing and training your own small but functional model for educational purposes mirrors the approach used to create large-scale foundation models such as those behind ChatGPT. In addition, this repository includes code for loading the weights of larger pretrained models for finetuning.
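As a rough illustration of what loading larger pretrained weights looks like, the bonus materials cover alternative weight loading from the Hugging Face Model Hub using the `transformers` library. The snippet below is a minimal sketch of that idea, not the repository's exact code; the model name, prompt, and generation settings are illustrative assumptions, and it presumes `transformers` and `torch` are installed.

```python
# Minimal sketch (not the repository's exact code): load pretrained GPT-2
# weights from the Hugging Face Model Hub via the `transformers` library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_name = "gpt2"  # smallest GPT-2 variant (124M parameters); illustrative choice
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)

# Generate a short continuation to verify the weights were loaded correctly
inputs = tokenizer("Every effort moves you", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```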
To get a local copy of the official source code, clone the repository:
git clone --depth 1 https://github.com/Sangwan70/Building-an-LLM-From-Scratch.git
Tip
If you need guidance on installing Python, installing the required Python packages, and setting up your code environment, see the README.md file located in the setup directory.
The code in the main chapters of this book is designed to run on conventional laptops within a reasonable timeframe and does not require specialized hardware. This approach ensures that a wide audience can engage with the material. Additionally, the code automatically utilizes GPUs if they are available. (Please see the setup doc for additional recommendations.)
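Assuming the PyTorch-based code in this repository, automatic device selection typically follows a pattern like the minimal sketch below (illustrative, not the book's exact code):

```python
import torch

# Prefer a CUDA GPU if present, fall back to Apple Silicon (MPS), then CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(f"Using device: {device}")
# model.to(device)  # move the model (and each batch) to the selected device
```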
Several folders contain optional materials as a bonus for interested readers:
- Setup
- Part 1: Working with text data
- Part 2: Coding attention mechanisms
- Part 3: Implementing a GPT model from scratch
- Part 4: Pretraining on unlabeled data
  - Alternative Weight Loading from Hugging Face Model Hub using Transformers
  - Pretraining GPT on the Project Gutenberg Dataset
  - Adding Bells and Whistles to the Training Loop
  - Optimizing Hyperparameters for Pretraining
  - Building a User Interface to Interact With the Pretrained LLM
  - Converting GPT to Llama
  - Llama 3.2 From Scratch
  - Memory-efficient Model Weight Loading
- Part 5: Finetuning for classification
- Part 6: Finetuning to follow instructions
  - Dataset Utilities for Finding Near Duplicates and Creating Passive Voice Entries
  - Evaluating Instruction Responses Using the OpenAI API and Ollama
  - Generating a Dataset for Instruction Finetuning
  - Improving a Dataset for Instruction Finetuning
  - Generating a Preference Dataset with Llama 3.1 70B and Ollama
  - Direct Preference Optimization (DPO) for LLM Alignment
  - Building a User Interface to Interact With the Instruction Finetuned GPT Model
I welcome all sorts of feedback, best shared via GitHub Discussions. Likewise, if you have any questions or just want to bounce ideas off others, please don't hesitate to post them in the forum as well.
Please note that since this repository contains the code corresponding to a print book, I currently cannot accept contributions that would extend the contents of the main chapter code, as it would introduce deviations from the physical book. Keeping it consistent helps ensure a smooth experience for everyone.
BibTeX entry:

@misc{sangwan_building_llm_from_scratch,
  author = {Ram N Sangwan},
  title  = {Building An LLM From Scratch},
  url    = {https://github.com/Sangwan70/Building-an-LLM-From-Scratch}
}