
L23 cosmetic
patricklam committed Sep 16, 2024
1 parent 8f8bf06 commit 22fe292
Showing 3 changed files with 22 additions and 22 deletions.
2 changes: 1 addition & 1 deletion lectures/459.bib
@@ -1273,7 +1273,7 @@ @inproceedings{coz
}

@article{gptforgood,
- title = {ChatGPT for good? On opportunities and challenges of large language models for education},
+ title = {{ChatGPT} for good? On opportunities and challenges of large language models for education},
journal = {Learning and Individual Differences},
volume = {103},
pages = {102274},
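(Note: the doubled braces in {{ChatGPT}} protect capitalization; BibTeX styles that lowercase titles would otherwise typeset the name as "Chatgpt".)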
12 changes: 6 additions & 6 deletions lectures/L23-slides.tex
@@ -256,7 +256,7 @@ \part{Password Cracking}

You do not even need to make them yourself anymore (if you don't want) because you can download them on the internet... they are not hard to find.

- They are large, yes, but in the 25 - 900 GB range.
+ They are large, yes, but in the 25--900 GB range.


\end{frame}
@@ -297,7 +297,7 @@ \part{Password Cracking}

\begin{itemize}
\item Table generation on a GTX295 core for MD5 proceeds at around 430M links/sec.
- \item Cracking a password 'K\#n\&r4Z': real: 1m51.962s, user: 1m4.740s. sys: 0m15.320s
+ \item Cracking a password 'K\#n\&r4Z': \\ \qquad real: 1m51.962s, user: 1m4.740s. sys: 0m15.320s
\end{itemize}

Yikes.
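For context on the chain computations this hunk measures (not part of the commit or the slides): a minimal Python sketch of generating one rainbow-table chain, alternating MD5 with a position-dependent reduction back into the password space. The charset, password length, and chain length are illustrative assumptions.

import hashlib

# Illustrative parameters, not taken from the slides.
CHARSET = "abcdefghijklmnopqrstuvwxyz0123456789"
PW_LEN = 7        # assumed fixed password length
CHAIN_LEN = 1000  # assumed number of links per chain

def reduce_digest(digest: bytes, link: int) -> str:
    # Map an MD5 digest back to a candidate password; mixing in the
    # link index gives each chain position a different reduction.
    n = int.from_bytes(digest, "big") + link
    chars = []
    for _ in range(PW_LEN):
        n, r = divmod(n, len(CHARSET))
        chars.append(CHARSET[r])
    return "".join(chars)

def build_chain(start: str) -> tuple[str, str]:
    pw = start
    for link in range(CHAIN_LEN):
        digest = hashlib.md5(pw.encode()).digest()
        pw = reduce_digest(digest, link)
    return (start, pw)  # only the endpoints are stored in the table

print(build_chain("banana7"))

Storing only chain endpoints and recomputing interior links at lookup time is the space/time tradeoff that keeps full tables merely large (the 25--900 GB range above) rather than impossibly so.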
@@ -426,7 +426,7 @@ \part{Large Language Models}

Such large language models have existed before, but ChatGPT ended up a hit because it's pretty good at being ``conversational''.

- This is referred to as Natural Language Processing (NLP).
+ This is within the ambit of Natural Language Processing (NLP).

\end{frame}

@@ -700,7 +700,7 @@ \part{Large Language Models}
\begin{frame}
\frametitle{Next Idea}

- We seem to be memory limited -- let's see what we can do?
+ We seem to be memory limited---let's see what we can do?

First idea: \alert{gradient accumulation}.
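Since this hunk introduces gradient accumulation, here is a minimal PyTorch-style sketch of the idea (not the lecture's actual training script; the model, optimizer, and batch sizes are toy stand-ins). Running several forward/backward passes before each optimizer step simulates a larger batch without holding it all in memory.

import torch
from torch import nn

# Toy stand-ins so the sketch runs on its own.
model = nn.Linear(16, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
batches = [(torch.randn(8, 16), torch.randn(8, 1)) for _ in range(8)]

ACCUM_STEPS = 4  # effective batch size = ACCUM_STEPS * micro-batch size

optimizer.zero_grad()
for step, (x, y) in enumerate(batches):
    loss = loss_fn(model(x), y) / ACCUM_STEPS  # scale so gradients average
    loss.backward()                            # gradients accumulate in .grad
    if (step + 1) % ACCUM_STEPS == 0:
        optimizer.step()   # one weight update per ACCUM_STEPS micro-batches
        optimizer.zero_grad()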

@@ -731,7 +731,7 @@ \part{Large Language Models}
\begin{frame}
\frametitle{I got Suspicious}

- I got suspicious about the 128 dropoff in memory usage and it made me think about other indicators -- is it getting worse somehow?
+ I got suspicious about the 128 dropoff in memory usage and it made me think about other indicators---is it getting worse somehow?

The output talks about training loss...

@@ -791,7 +791,7 @@ \part{Large Language Models}
\end{tabular}
\end{center}

- Interesting results -- maybe a little concerning?
+ Interesting results---maybe a little concerning?

\end{frame}

