feat: References from Training LLMs topic to sources
jermnelson committed Sep 7, 2024
1 parent 68adf3c commit 631093f
Showing 5 changed files with 25 additions and 4 deletions.
2 changes: 1 addition & 1 deletion checklist.md
@@ -111,7 +111,7 @@
 - [ ] Fine-tuning or Training LLMs
 - [x] 250 words
 - [ ] LLMs copyedit
-- [ ] References copied into resources
+- [x] References copied into resources
 - [ ] Generative AI Use Cases for FOLIO
 - [ ] Analysis and Management of Financial Orders and Invoices
 - [ ] Automated Metadata Generation and Enrichment
4 changes: 2 additions & 2 deletions exploring-llms/training-llms.html
@@ -46,11 +46,11 @@ <h2>Fine-tuning LLMs with Llama.cpp</h2>
 don't want or can't compile the C++ source code to run on your computer.</p>
 <h3>Downloading a LLaMA-based Model</h3>
 <p><a href="https://github.com/ggerganov/llama.cpp">LLaMA.cpp</a> uses the <a href="https://github.com/ggerganov/ggml/blob/master/docs/gguf.md">GGUF</a>
-format for model inference and training. Look for GGUF models on <a href="https://huggingface.co/l">HuggingFace</a>
+format for model inference and training. Look for GGUF models on <a href="https://huggingface.co/">HuggingFace</a>
 and if you compiled <a href="https://github.com/ggerganov/llama.cpp">LLaMA.cpp</a> with <code>libcurl</code> support, you can use the <code>llama-cli</code> command-line
 client to download:</p>
 <p><code>./llama-cli --hf-repo lmstudio-community/Reflection-Llama-3.1-70B-GGUF --hf-file Reflection-Llama-3.1-70B-GGUF.gguf</code></p>
-<p>If <code>libcurl</code> hasn't been installed, you can usually directly download the models directly from <a href="https://huggingface.co/l">HuggingFace</a> and
+<p>If <code>libcurl</code> hasn't been installed, you can usually directly download the models directly from <a href="https://huggingface.co/">HuggingFace</a> and
 store in the <code>/models</code> directory under the main <a href="https://github.com/ggerganov/llama.cpp">LLaMA.cpp</a>.</p>
 <h3>Running the Model in Inference Mode</h3>
 <div class="footnote">
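The hunk above fixes a broken HuggingFace URL (`huggingface.co/l`) in the download instructions. For the fallback case it describes, where `llama-cli` was built without `libcurl` and the model must be fetched directly from HuggingFace, the direct-download URL can be assembled by hand. A minimal sketch, assuming HuggingFace's standard `<repo>/resolve/<revision>/<file>` download path; the repo and file names are the examples from the diff:

```python
# Sketch: build the direct HuggingFace download URL for a GGUF model,
# a fallback when llama-cli lacks libcurl support for --hf-repo downloads.
HF_REPO = "lmstudio-community/Reflection-Llama-3.1-70B-GGUF"
HF_FILE = "Reflection-Llama-3.1-70B-GGUF.gguf"

def gguf_download_url(repo: str, filename: str, revision: str = "main") -> str:
    """Return the direct-download URL for a file in a HuggingFace repo."""
    return f"https://huggingface.co/{repo}/resolve/{revision}/{filename}"

# Fetch this URL (e.g. with curl -L) and store the file in llama.cpp's models/ directory.
print(gguf_download_url(HF_REPO, HF_FILE))
```

The printed URL is what a browser download from the model page resolves to, so either route leaves the same `.gguf` file to place under `models/`.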
2 changes: 1 addition & 1 deletion exploring-llms/training-llms.md
@@ -40,7 +40,7 @@ store in the `/models` directory under the main [LLaMA.cpp][LLAMA.CCP].
 
 ### Running the Model in Inference Mode
 
-[HUGFACE]: https://huggingface.co/l
+[HUGFACE]: https://huggingface.co/
 [LLAMA]: https://ai.meta.com/
 [LLAMA.CCP]: https://github.com/ggerganov/llama.cpp
 [OPENAI]: https://openai.com/
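The Markdown source above ends at its "Running the Model in Inference Mode" heading. For readers who prefer Python over the compiled `llama-cli` binary, the llama-cpp-python SDK (listed in the sources this commit adds) can load the same GGUF models. A minimal sketch, assuming a hypothetical local model path; the block falls back to a message when no model file is present so it runs without a multi-gigabyte download:

```python
# Sketch: GGUF inference via llama-cpp-python (pip install llama-cpp-python).
# MODEL_PATH is a hypothetical example; point it at a GGUF file you have downloaded.
from pathlib import Path

MODEL_PATH = Path("models/Reflection-Llama-3.1-70B-GGUF.gguf")

def run_inference(prompt: str, max_tokens: int = 64) -> str:
    """Run a completion against a local GGUF model, or report what is missing."""
    if not MODEL_PATH.exists():
        return f"model not found at {MODEL_PATH}; download a GGUF file first"
    from llama_cpp import Llama  # imported lazily so the sketch runs without the SDK
    llm = Llama(model_path=str(MODEL_PATH), n_ctx=2048)
    result = llm(prompt, max_tokens=max_tokens)
    return result["choices"][0]["text"]

print(run_inference("What does fine-tuning an LLM involve?"))
```

The lazy import keeps the guard clause usable even on machines where neither the SDK nor a model is installed yet.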
10 changes: 10 additions & 0 deletions recommended-resources-for-further-learning/sources.html
@@ -209,6 +209,16 @@ <h3>Retrieval Augmented Generation (RAG)</h3>
 <ul>
 <li><a href="https://www.smashingmagazine.com/2024/01/guide-retrieval-augmented-generation-language-models/">A Simple Guide To Retrieval Augmented Generation Language Models</a></li>
 <li><a href="https://arxiv.org/abs/2005.11401">Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks</a></li>
 </ul>
+<h3>Training Large Language Models (LLMs)</h3>
+<ul>
+<li><a href="https://github.com/ggerganov/ggml/blob/master/docs/gguf.md">GGUF</a></li>
+<li><a href="https://huggingface.co/">HuggingFace</a></li>
+<li><a href="https://ai.meta.com/">LLaMA from AI Meta</a></li>
+<li><a href="https://github.com/ggerganov/llama.cpp">LLaMA.cpp</a></li>
+<li><a href="https://github.com/ggerganov/llama.cpp/blob/master/docs/docker.md">LLaMA.cpp with Docker</a></li>
+<li><a href="https://github.com/abetlen/llama-cpp-python">LLaMA.cpp Python SDK</a></li>
+<li><a href="https://openai.com/">Open AI</a></li>
+</ul>
 </article>
 <div class="col-3">
11 changes: 11 additions & 0 deletions recommended-resources-for-further-learning/sources.md
@@ -168,3 +168,14 @@
 ### Retrieval Augmented Generation (RAG)
 - [A Simple Guide To Retrieval Augmented Generation Language Models](https://www.smashingmagazine.com/2024/01/guide-retrieval-augmented-generation-language-models/)
 - [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401)
+
+### Training Large Language Models (LLMs)
+- [GGUF](https://github.com/ggerganov/ggml/blob/master/docs/gguf.md)
+- [HuggingFace](https://huggingface.co/)
+- [LLaMA from AI Meta](https://ai.meta.com/)
+- [LLaMA.cpp](https://github.com/ggerganov/llama.cpp)
+- [LLaMA.cpp with Docker](https://github.com/ggerganov/llama.cpp/blob/master/docs/docker.md)
+- [LLaMA.cpp Python SDK](https://github.com/abetlen/llama-cpp-python)
+- [Open AI](https://openai.com/)
+
+