Commit

2024-02-08 19:23:19
wizardforcel committed Feb 8, 2024
1 parent fb50538 commit 02d225d
Showing 2 changed files with 335 additions and 0 deletions.
18 changes: 18 additions & 0 deletions totrans/gen-dl_18.yaml
@@ -3,12 +3,14 @@
prefs:
- PREF_H1
type: TYPE_NORMAL
zh: 第14章。结论
- en: In May 2018, I began work on the first edition of this book. Five years later,
I am more excited than ever about the endless possibilities and potential impact
of generative AI.
id: totrans-1
prefs: []
type: TYPE_NORMAL
zh: 2018年5月,我开始着手第一版这本书的工作。五年后,我对生成AI的无限可能性和潜在影响感到比以往任何时候都更加兴奋。
- en: In this time we have seen incredible progress in this field, with seemingly
limitless potential for real-world applications. I am filled with a sense of awe
and wonder at what we have been able to achieve so far and eagerly anticipate
@@ -18,13 +20,15 @@
id: totrans-2
prefs: []
type: TYPE_NORMAL
zh: 在这段时间里,我们看到了这个领域的惊人进步,对真实世界应用有着看似无限的潜力。我对我们迄今为止所取得的成就感到敬畏和惊叹,并迫不及待地期待着生成AI未来几年将对世界产生的影响。生成深度学习有能力以我们无法想象的方式塑造未来。
- en: What’s more, as I have been researching content for this book, it has become
ever clearer to me that this field isn’t just about creating images, text, or
music. I believe that at the core of generative deep learning lies the secret
of intelligence itself.
id: totrans-3
prefs: []
type: TYPE_NORMAL
zh: 此外,随着我为这本书研究内容,我越来越清楚地意识到这个领域不仅仅是关于创建图像、文本或音乐。我相信生成深度学习的核心是智能本身的秘密。
- en: The first section of this chapter summarizes how we have reached this point
in our generative AI journey. We will walk through a timeline of generative AI
developments since 2014 in chronological order, so that you can see where each
@@ -38,17 +42,20 @@
id: totrans-4
prefs: []
type: TYPE_NORMAL
zh: 本章的第一部分总结了我们在生成AI之旅中达到这一点的过程。我们将按时间顺序浏览自2014年以来的生成AI发展时间轴,以便您可以看到每种技术在生成AI历史中的位置。第二部分解释了我们目前在最先进的生成AI方面的位置。我们将讨论生成深度学习方法的当前趋势以及普通公众可以使用的当前现成模型。接下来,我们将探讨生成AI的未来以及前方的机遇和挑战。我们将考虑未来五年生成AI可能会是什么样子,以及它对社会和商业的潜在影响,并解决一些主要的伦理和实际问题。
- en: Timeline of Generative AI
id: totrans-5
prefs:
- PREF_H1
type: TYPE_NORMAL
zh: 生成AI时间轴
- en: '[Figure 14-1](#timeline) is a timeline of the key developments in generative
modeling that we have explored together in this book. The colors represent different
model types.'
id: totrans-6
prefs: []
type: TYPE_NORMAL
zh: '[图14-1](#timeline)是我们在本书中一起探索的生成建模关键发展的时间轴。颜色代表不同的模型类型。'
- en: To field of generative AI stands on the shoulders of earlier developments in
deep learning, such as backpropagation and convolutional neural networks, which
unlocked the possibility for models to learn complex relationships across large
@@ -57,42 +64,50 @@
id: totrans-7
prefs: []
type: TYPE_NORMAL
zh: 生成AI领域建立在深度学习早期发展的基础上,比如反向传播和卷积神经网络,这些技术解锁了模型在大规模数据集上学习复杂关系的可能性。在本节中,我们将研究生成AI的现代历史,从2014年开始,这一历史发展速度惊人。
- en: 'To help us understand how everything fits together, we can loosely break down
this history into three main eras:'
id: totrans-8
prefs: []
type: TYPE_NORMAL
zh: 为了帮助我们理解所有内容如何相互关联,我们可以大致将这段历史分为三个主要时代:
- en: '2014–2017: The VAE and GAN era'
id: totrans-9
prefs:
- PREF_OL
type: TYPE_NORMAL
zh: 2014年至2017年:VAE和GAN时代
- en: '2018–2019: The Transformer era'
id: totrans-10
prefs:
- PREF_OL
type: TYPE_NORMAL
zh: 2018年至2019年:Transformer时代
- en: '2020–2022: The Big Model era'
id: totrans-11
prefs:
- PREF_OL
type: TYPE_NORMAL
zh: 2020年至2022年:大模型时代
- en: '![](Images/gdl2_1401.png)'
id: totrans-12
prefs: []
type: TYPE_IMG
zh: '![](Images/gdl2_1401.png)'
- en: 'Figure 14-1\. A brief history of generative AI from 2014 to 2023 (note: some
important developments such as LSTMs and early energy-based models [e.g., Boltzmann
machines] precede this timeline)'
id: totrans-13
prefs:
- PREF_H6
type: TYPE_NORMAL
zh: 图14-1。从2014年到2023年的生成AI简史(注意:一些重要的发展,如LSTM和早期基于能量的模型[例如,玻尔兹曼机]在这个时间轴之前)
- en: '2014–2017: The VAE and GAN Era'
id: totrans-14
prefs:
- PREF_H2
type: TYPE_NORMAL
zh: 2014年至2017年:VAE和GAN时代
- en: The invention of the VAE in December 2013 can perhaps be thought of as the spark
that lit the generative AI touchpaper. This paper showed how it was possible to
generate not only simple images such as MNIST digits but also more complex images
@@ -102,6 +117,7 @@
id: totrans-15
prefs: []
type: TYPE_NORMAL
zh: 2013年12月,VAE的发明可以说是点燃生成AI火药桶的火花。这篇论文展示了不仅可以生成简单的图像,如MNIST数字,还可以生成更复杂的图像,如面孔,而且可以在一个可以平滑遍历的潜在空间中生成。2014年,GAN的引入紧随其后,这是一种全新的对抗性框架,用于解决生成建模问题。
- en: The following three years were dominated by progressively more impressive extensions
of the GAN portfolio. In addition to fundamental changes to the GAN model architecture
(DCGAN, 2015), loss function (Wasserstein GAN, 2017), and training process (ProGAN,
@@ -110,12 +126,14 @@
id: totrans-16
prefs: []
type: TYPE_NORMAL
zh: 接下来的三年被逐渐更令人印象深刻的GAN系列扩展所主导。除了对GAN模型架构(DCGAN,2015)、损失函数(Wasserstein GAN,2017)和训练过程(ProGAN,2017)的基本改变外,还使用GAN处理了新的领域,如图像到图像的转换(pix2pix,2016,和CycleGAN,2017)和音乐生成(MuseGAN,2017)。
- en: During this era, important VAE improvements were also introduced, such as VAE-GAN
(2015) and later VQ-VAE (2017), and applications to reinforcement learning were
seen in the “World Models” paper (2018).
id: totrans-17
prefs: []
type: TYPE_NORMAL
zh: 在这个时代,还引入了重要的VAE改进,如VAE-GAN(2015)和后来的VQ-VAE(2017),并且在“世界模型”论文(2018)中看到了对强化学习的应用。
- en: Established autoregressive models such as LSTMs and GRUs remained the dominant
force in text generation over this time. The same autoregressive ideas were also
being used to generate images, with PixelRNN (2016) and PixelCNN (2016) introduced