feat: Fixes footnotes and copies sources for Hallucinations topic
jermnelson committed Sep 1, 2024
1 parent 9e14f82 commit 119fa49
Showing 5 changed files with 48 additions and 32 deletions.
2 changes: 1 addition & 1 deletion checklist.md
@@ -72,7 +72,7 @@
- [x] 250 words
- [ ] LLMs copyedit
- [ ] Add Use Case(s)
- [ ] References copied into resources
- [x] References copied into resources
- [ ] Privacy
- [x] 250 words
- [ ] LLMs copyedit
35 changes: 19 additions & 16 deletions ethical-considerations-ai-ml-for-libraries/hallucinations-llms.html
@@ -27,7 +27,7 @@ <h1>Hallucinations and Generative AI</h1>
incorrect statements. Because LLMs generate output through predictive means, based on the
text and context of the prompt and on the model weights, the resulting output is not<br />
a deductive process grounded in the model's training source material. These hallucinations
have been broken down into the following categories<sup id="fnref:TURING"><a class="footnote-ref" href="#fn:TURING">3</a></sup>:</p>
have been broken down into the following categories<sup id="fnref:TURING"><a class="footnote-ref" href="#fn:TURING">1</a></sup>:</p>
<ul>
<li><strong>Fact-conflicting</strong> - the LLM's output contains statements that are known to be false,
e.g. 2+2=5</li>
@@ -52,31 +52,34 @@ <h2>Workshop Exercise</h2>
<pre><code>Who is the first person to swim across the Pacific Ocean?
</code></pre>
<h2>Final Thought</h2>
<blockquote>
<p>TLDR I know I'm being super pedantic but the LLM has no "hallucination problem".
Hallucination is not a bug, it is LLM's greatest feature.
The LLM Assistant has a hallucination problem, and we should fix it.
Andrej Karpathy <sup id="fnref:KARPATHY"><a class="footnote-ref" href="#fn:KARPATHY">1</a></sup> </p>
</blockquote>
<h2>Workshop Use-cases</h2>
<h3>Primary</h3>
<h3>Secondary</h3>
<h3>Tertiary</h3>
<figure>
<blockquote class="blockquote">
<p>
TLDR I know I'm being super pedantic but the LLM has no "hallucination problem".
Hallucination is not a bug, it is LLM's greatest feature.
The LLM Assistant has a hallucination problem, and we should fix it.
</p>
</blockquote>
<figcaption class="blockquote-footer" markdown="span">
Andrej Karpathy <sup><a class="footnote-ref" href="#fn:KARPATHY">3</a></sup>
</figcaption>
</figure>

<h2>Resources</h2>
<ul>
<li>https://medium.com/@colin.fraser/hallucinations-errors-and-dreams-c281a66f3c35</li>
<li><a href="https://medium.com/@colin.fraser/hallucinations-errors-and-dreams-c281a66f3c35">Hallucinations, Errors, and Dreams</a></li>
</ul>
<div class="footnote">
<hr />
<ol>
<li id="fn:KARPATHY">
<p>X post on <a href="https://x.com/karpathy/status/1733299213503787018?lang=en">8 December 2023</a>&#160;<a class="footnote-backref" href="#fnref:KARPATHY" title="Jump back to footnote 1 in the text">&#8617;</a></p>
<li id="fn:TURING">
<p><a href="https://www.turing.com/resources/minimize-llm-hallucinations-strategy">Best Strategies to Minimize Hallucinations in LLMs: A Comprehensive Guide</a>&#160;<a class="footnote-backref" href="#fnref:TURING" title="Jump back to footnote 1 in the text">&#8617;</a></p>
</li>
<li id="fn:NYTIMES">
<p><a href="https://www.nytimes.com/2024/04/15/technology/ai-models-measurement.html">A.I. Has a Measurement Problem</a>&#160;<a class="footnote-backref" href="#fnref:NYTIMES" title="Jump back to footnote 2 in the text">&#8617;</a></p>
</li>
<li id="fn:TURING">
<p><a href="https://www.turing.com/resources/minimize-llm-hallucinations-strategy">Best Strategies to Minimize Hallucinations in LLMs: A Comprehensive Guide</a>&#160;<a class="footnote-backref" href="#fnref:TURING" title="Jump back to footnote 3 in the text">&#8617;</a></p>
<li id="fn:KARPATHY">
<p>X post on <a href="https://x.com/karpathy/status/1733299213503787018?lang=en">8 December 2023</a>&#160;<a class="footnote-backref" href="#fnref:KARPATHY" title="Jump back to footnote 3 in the text">&#8617;</a></p>
</li>
</ol>
</div>
30 changes: 15 additions & 15 deletions ethical-considerations-ai-ml-for-libraries/hallucinations-llms.md
@@ -33,23 +33,23 @@ Who is the first person to swim across the Pacific Ocean?


## Final Thought
<figure>
<blockquote class="blockquote">
<p>
TLDR I know I'm being super pedantic but the LLM has no "hallucination problem".
Hallucination is not a bug, it is LLM's greatest feature.
The LLM Assistant has a hallucination problem, and we should fix it.
</p>
</blockquote>
<figcaption class="blockquote-footer" markdown="span">
Andrej Karpathy <sup><a class="footnote-ref" href="#fn:KARPATHY">3</a></sup>
</figcaption>
</figure>

> TLDR I know I'm being super pedantic but the LLM has no "hallucination problem".
> Hallucination is not a bug, it is LLM's greatest feature.
> The LLM Assistant has a hallucination problem, and we should fix it.
> Andrej Karpathy [^KARPATHY]
## Workshop Use-cases

### Primary

### Secondary

### Tertiary

## Resources
- https://medium.com/@colin.fraser/hallucinations-errors-and-dreams-c281a66f3c35
- [Hallucinations, Errors, and Dreams](https://medium.com/@colin.fraser/hallucinations-errors-and-dreams-c281a66f3c35)

[^KARPATHY]: X post on [8 December 2023](https://x.com/karpathy/status/1733299213503787018?lang=en)
[^NYTIMES]: [A.I. Has a Measurement Problem](https://www.nytimes.com/2024/04/15/technology/ai-models-measurement.html)
[^TURING]: [Best Strategies to Minimize Hallucinations in LLMs: A Comprehensive Guide](https://www.turing.com/resources/minimize-llm-hallucinations-strategy)
[^NYTIMES]: [A.I. Has a Measurement Problem](https://www.nytimes.com/2024/04/15/technology/ai-models-measurement.html)
[^KARPATHY]: X post on [8 December 2023](https://x.com/karpathy/status/1733299213503787018?lang=en)
7 changes: 7 additions & 0 deletions recommended-resources-for-further-learning/sources.html
@@ -116,6 +116,13 @@ <h3>Creator Attribution and Copyright</h3>
<li><a href="https://spectrum.ieee.org/midjourney-copyright">Generative AI Has a Visual Plagiarism Problem Experiments with Midjourney and DALL-E 3 show a copyright minefield</a></li>
<li><a href="https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf">New York Times Legal Complaint December 2023</a></li>
<li><a href="https://www.arl.org/blog/training-generative-ai-models-on-copyrighted-works-is-fair-use/">Training Generative AI Models on Copyrighted Works Is Fair Use</a></li>
</ul>
<h3>Hallucinations and Generative AI</h3>
<ul>
<li><a href="https://www.nytimes.com/2024/04/15/technology/ai-models-measurement.html">A.I. Has a Measurement Problem</a></li>
<li><a href="https://www.turing.com/resources/minimize-llm-hallucinations-strategy">Best Strategies to Minimize Hallucinations in LLMs: A Comprehensive Guide</a></li>
<li><a href="https://medium.com/@colin.fraser/hallucinations-errors-and-dreams-c281a66f3c35">Hallucinations, Errors, and Dreams</a></li>
<li>Andrej Karpathy X post on <a href="https://x.com/karpathy/status/1733299213503787018?lang=en">8 December 2023</a></li>
</ul>
</article>
<div class="col-3">
6 changes: 6 additions & 0 deletions recommended-resources-for-further-learning/sources.md
@@ -83,3 +83,9 @@
- [Generative AI Has a Visual Plagiarism Problem Experiments with Midjourney and DALL-E 3 show a copyright minefield](https://spectrum.ieee.org/midjourney-copyright)
- [New York Times Legal Complaint December 2023](https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf)
- [Training Generative AI Models on Copyrighted Works Is Fair Use](https://www.arl.org/blog/training-generative-ai-models-on-copyrighted-works-is-fair-use/)

### Hallucinations and Generative AI
- [A.I. Has a Measurement Problem](https://www.nytimes.com/2024/04/15/technology/ai-models-measurement.html)
- [Best Strategies to Minimize Hallucinations in LLMs: A Comprehensive Guide](https://www.turing.com/resources/minimize-llm-hallucinations-strategy)
- [Hallucinations, Errors, and Dreams](https://medium.com/@colin.fraser/hallucinations-errors-and-dreams-c281a66f3c35)
- Andrej Karpathy X post on [8 December 2023](https://x.com/karpathy/status/1733299213503787018?lang=en)
