
Building a Large Language Model From Scratch

This repository contains the code for developing, pretraining, and finetuning a GPT-like LLM.


You'll learn how large language models (LLMs) work from the inside out by coding one from the ground up, step by step. I'll guide you through creating your own LLM, explaining each stage with clear text, diagrams, and examples.

The method described here for training and developing your own small-but-functional model for educational purposes mirrors the approach used to create large-scale foundational models such as those behind ChatGPT. In addition, this repository includes code for loading the weights of larger pretrained models for finetuning.


To get a local copy of the repository, clone it with:

```bash
git clone --depth 1 https://github.com/Sangwan70/Building-an-LLM-From-Scratch.git
```
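For a sense of what "loading the weights of a pretrained model" boils down to in PyTorch, here is a minimal sketch; the tiny model and the checkpoint file name are illustrative placeholders, and the repository's own loading utilities live in the later parts:

```python
import torch

# Illustrative placeholders only: a tiny stand-in model and a hypothetical
# checkpoint file, not artifacts shipped with this repository.
model = torch.nn.Linear(8, 2)
state_dict = torch.load("gpt2_weights.pt", map_location="cpu")

model.load_state_dict(state_dict)  # copy the pretrained parameters in
model.eval()                       # switch to inference mode for generation
```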


Table of Contents


Tip

If you're looking for guidance on installing Python and Python packages and setting up your code environment, see the README.md file in the setup directory.



| Chapter Title | Main Code (for Quick Access) | All Code + Supplementary |
|---------------|------------------------------|--------------------------|
| Setup recommendations | - | - |
| Part 1: Working with Text Data | - Part 1: Working with Text<br>- dataloader.ipynb (summary)<br>- exercise-solutions.ipynb | ./part_1 |
| Part 2: Coding Attention Mechanisms | - Part 2: Coding Attention Mechanisms<br>- multihead-attention.ipynb (summary)<br>- exercise-solutions.ipynb | ./part_2 |
| Part 3: Implementing a GPT Model from Scratch | - Part 3: Implementing a GPT Model from Scratch<br>- gpt.py (summary)<br>- exercise-solutions.ipynb | ./part_3 |
| Part 4: Pretraining on Unlabeled Data | - Part 4: Pretraining on Unlabeled Data<br>- gpt_train.py (summary)<br>- gpt_generate.py (summary)<br>- exercise-solutions.ipynb | ./part_4 |
| Part 5: Finetuning for Text Classification | - Part 5: Finetuning for Text Classification<br>- gpt_class_finetune.py<br>- exercise-solutions.ipynb | ./part_5 |
| Part 6: Finetuning to Follow Instructions | - Part 6: Finetuning to Follow Instructions<br>- gpt_instruction_finetuning.py (summary)<br>- ollama_evaluate.py (summary)<br>- exercise-solutions.ipynb | ./part_6 |
| Appendix A: Introduction to PyTorch | - code-part1.ipynb<br>- code-part2.ipynb<br>- DDP-script.py<br>- exercise-solutions.ipynb | ./appendix-A |
| Appendix B: Adding Bells and Whistles to the Training Loop | - appendix-B.ipynb | ./appendix-B |
| Appendix C: Parameter-efficient Finetuning with LoRA | - appendix-C.ipynb | ./appendix-C |

Hardware Requirements

The code in the main chapters of this book is designed to run on conventional laptops within a reasonable timeframe and does not require specialized hardware. This approach ensures that a wide audience can engage with the material. Additionally, the code automatically utilizes GPUs if they are available. (Please see the setup doc for additional recommendations.)
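Under the hood, this kind of automatic device selection typically follows the standard PyTorch pattern sketched below (a generic illustration, not a copy of the chapter code):

```python
import torch

# Use a CUDA GPU when one is available; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

# The model and each batch are then moved to the selected device:
model = torch.nn.Linear(8, 2)   # stand-in for the GPT model built in Part 3
batch = torch.randn(4, 8)

model = model.to(device)
batch = batch.to(device)
logits = model(batch)           # runs on the GPU if available, else the CPU
```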

 

Bonus Material

Several folders contain optional bonus material for interested readers.

Questions, Feedback, and Contributing to This Repository

I welcome all sorts of feedback, best shared via GitHub Discussions. Likewise, if you have any questions or just want to bounce ideas off others, please don't hesitate to post them in the forum as well.

Please note that since this repository contains the code corresponding to a print book, I currently cannot accept contributions that would extend the main chapter code, as that would introduce deviations from the physical book. Keeping everything consistent helps ensure a smooth experience for everyone.

Citation

BibTeX entry:

@book{building-llm-from-scratch,
  author       = {Ram N Sangwan},
  title        = {Building An LLM From Scratch},
  github       = {https://github.com/Sangwan70/Building-an-LLM-From-Scratch}
}
