diff --git a/lab4.ipynb b/lab4.ipynb
index 168b6c1..962edf1 100644
--- a/lab4.ipynb
+++ b/lab4.ipynb
@@ -35,8 +35,7 @@
   {
    "cell_type": "markdown",
    "source": [
-    "- In this chapter, we implement the training loop and code for basic model evaluation to pretrain an LLM\n",
-    "- At the end of this chapter, we also load openly available pretrained weights from OpenAI into our model"
+    "- In this chapter, we implement the training loop and code for basic model evaluation to pretrain an LLM"
    ],
    "metadata": {
     "collapsed": false
@@ -1862,28 +1861,11 @@
     }
    }
   },
-  {
-   "cell_type": "markdown",
-   "source": [
-    "## 5.5 Loading pretrained weights from OpenAI"
-   ],
-   "metadata": {
-    "collapsed": false
-   }
-  },
   {
    "cell_type": "markdown",
    "source": [
     "- Previously, we only trained a small GPT-2 model using a very small short-story book for educational purposes\n",
-    "- Fortunately, we don't have to spend tens to hundreds of thousands of dollars to pretrain the model on a large pretraining corpus but can load the pretrained weights provided by OpenAI"
-   ],
-   "metadata": {
-    "collapsed": false
-   }
-  },
-  {
-   "cell_type": "markdown",
-   "source": [
+    "- Fortunately, we don't have to spend tens to hundreds of thousands of dollars to pretrain the model on a large pretraining corpus but can load the pretrained weights provided by OpenAI or other vendors.\n",
     "- We can also use Hugging Face Hub to load weights."
    ],
    "metadata": {
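For reference, loading pretrained GPT-2 weights via the Hugging Face Hub, as the new bullet describes, could look like the minimal sketch below. This is an assumption about one possible approach, using the `transformers` package and the openly available 124M-parameter `gpt2` checkpoint; it is not the notebook's actual code.

```python
# Minimal sketch (assumption): load pretrained GPT-2 weights from the
# Hugging Face Hub instead of pretraining from scratch.
# Requires: pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # 124M-parameter checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Sanity check: generate a short continuation with the loaded weights
inputs = tokenizer("Every effort moves you", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```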