diff --git a/totrans/prac-dl-cld_04.yaml b/totrans/prac-dl-cld_04.yaml
index 7499de5..fed6e2d 100644
--- a/totrans/prac-dl-cld_04.yaml
+++ b/totrans/prac-dl-cld_04.yaml
@@ -1928,11 +1928,14 @@
  prefs:
  - PREF_H6
  type: TYPE_NORMAL
+ zh: 图4-22。使用从音频预测的潜在因素得到的使用模式分布的t-SNE可视化(图片来源:“基于深度内容的音乐推荐”,Aaron van den Oord、Sander
+   Dieleman、Benjamin Schrauwen,NIPS 2013)
- en: Image Captioning
  id: totrans-275
  prefs:
  - PREF_H2
  type: TYPE_NORMAL
+ zh: 图像字幕
- en: Image captioning is the science of translating an image into a sentence (as
  illustrated in [Figure 4-23](part0006.html#image_captioning_feature_in_seeing_aicol)).
  Going beyond just object tagging, this requires a deeper visual understanding
@@ -1945,17 +1948,21 @@
  id: totrans-276
  prefs: []
  type: TYPE_NORMAL
+ zh: 图像字幕是将图像翻译成句子的科学(如[图4-23](part0006.html#image_captioning_feature_in_seeing_aicol)所示)。这不仅仅是物体标记,还需要对整个图像和物体之间的关系有更深入的视觉理解。为了训练这些模型,一个名为MS
+   COCO的开源数据集于2014年发布,其中包括超过30万张图像,以及物体类别、句子描述、视觉问答对和物体分割。它作为每年竞赛的基准,用于观察图像字幕、物体检测和分割的进展。
- en: '![Image captioning feature in Seeing AI: the Talking Camera App for the blind
  community](../images/00309.jpeg)'
  id: totrans-277
  prefs: []
  type: TYPE_IMG
+ zh: '![Seeing AI中的图像字幕功能:盲人社区的Talking Camera App](../images/00309.jpeg)'
- en: 'Figure 4-23\. Image captioning feature in Seeing AI: the Talking Camera App
  for the blind community'
  id: totrans-278
  prefs:
  - PREF_H6
  type: TYPE_NORMAL
+ zh: 图4-23。Seeing AI中的图像字幕功能:盲人社区的Talking Camera App
- en: A common strategy applied in the first year of the challenge (2015) was to append
  a language model (LSTM/RNN) with a CNN in such a way that the output feature vector
  of a CNN is taken as the input to the language model (LSTM/RNN). This combined
@@ -1970,6 +1977,7 @@
  id: totrans-279
  prefs: []
  type: TYPE_NORMAL
+ zh: 在挑战的第一年(2015年)中应用的一种常见策略是将语言模型(LSTM/RNN)与CNN结合起来,使CNN输出的特征向量作为语言模型(LSTM/RNN)的输入。这种组合模型以端到端的方式联合训练,取得了令人印象深刻的结果,震惊了世界。尽管每个研究实验室都在努力超越对方,但后来发现进行简单的最近邻搜索就可以产生最先进的结果。对于给定的图像,根据嵌入的相似性找到相似的图像。然后,注意相似图像字幕中的共同词,并打印包含最常见词的字幕。简而言之,懒惰的方法仍然能击败最先进的方法,这暴露了数据集中的一个关键偏见。
- en: 'This bias has been coined the *Giraffe-Tree* problem by Larry Zitnick. Do an
  image search for “giraffe” on a search engine. Look closely: in addition to giraffe,
  is there grass in almost every image? Chances are you can describe the majority
@@ -1983,12 +1991,14 @@
  id: totrans-280
  prefs: []
  type: TYPE_NORMAL
+ zh: 这种偏见被Larry Zitnick称为*长颈鹿-树*问题。在搜索引擎上以“长颈鹿”为关键词进行图像搜索。仔细观察:除了长颈鹿,几乎每张图像中是不是都有草?很有可能你可以将这些图像中的大多数描述为“一只长颈鹿站在草地上”。同样,如果查询图像(如[图4-24](part0006.html#the_giraffe-tree_problem_left_parenthesi)中最左边的照片)包含一只长颈鹿和一棵树,几乎所有相似的图像(右边)都可以描述为“一只长颈鹿站在草地上,旁边有一棵树”。即使没有对图像有更深入的理解,也可以通过简单的最近邻搜索得出正确的字幕。这表明为了衡量系统的真正智能,我们需要在测试集中加入更多语义上新颖/原创的图像。
- en: '![The Giraffe-Tree problem (image source: Measuring Machine Intelligence Through
  Visual Question Answering, C. Lawrence Zitnick, Aishwarya Agrawal, Stanislaw Antol,
  Margaret Mitchell, Dhruv Batra, Devi Parikh)](../images/00268.jpeg)'
  id: totrans-281
  prefs: []
  type: TYPE_IMG
+ zh: '![长颈鹿-树问题(图片来源:通过视觉问答测量机器智能,C.劳伦斯·齐特尼克,艾丝瓦里亚·阿格拉瓦尔,斯坦尼斯洛夫·安托尔,玛格丽特·米切尔,德鲁夫·巴特拉,黛薇·帕里克)](../images/00268.jpeg)'
- en: 'Figure 4-24\. The Giraffe-Tree problem (image source: Measuring Machine Intelligence
  Through Visual Question Answering, C. Lawrence Zitnick, Aishwarya Agrawal, Stanislaw
  Antol, Margaret Mitchell, Dhruv Batra, Devi Parikh)'
@@ -1996,15 +2006,18 @@
  prefs:
  - PREF_H6
  type: TYPE_NORMAL
+ zh: 图4-24。长颈鹿-树问题(图片来源:通过视觉问答测量机器智能,C.劳伦斯·齐特尼克,艾丝瓦里亚·阿格拉瓦尔,斯坦尼斯洛夫·安托尔,玛格丽特·米切尔,德鲁夫·巴特拉,黛薇·帕里克)
- en: In short, don’t underestimate a simple nearest-neighbor approach!
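  id: totrans-283
  prefs: []
  type: TYPE_NORMAL
+ zh: 简而言之,不要低估简单的最近邻方法!
+ - en: 'As a rough illustration of that lazy consensus-caption heuristic (our own
+   sketch, not a listing from the book; it assumes L2-normalized image embeddings
+   and a parallel list of training captions):'
+   prefs: []
+   type: TYPE_NORMAL
+ - en: |
+     import numpy as np
+     from collections import Counter
+
+     def consensus_caption(query_emb, train_embs, train_captions, k=5):
+         # Find the k nearest training images by embedding similarity
+         # (brute force here; use an ANN index at scale).
+         sims = train_embs @ query_emb  # cosine similarity for unit vectors
+         nearest = np.argsort(-sims)[:k]
+         # Count the words shared across the neighbors' captions...
+         words = Counter(w for i in nearest
+                         for w in train_captions[i].lower().split())
+         # ...and return the neighbor caption with the most common words.
+         return max((train_captions[i] for i in nearest),
+                    key=lambda c: sum(words[w] for w in c.lower().split()))
+   prefs: []
+   type: TYPE_PRE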
- en: Summary
  id: totrans-284
  prefs:
  - PREF_H1
  type: TYPE_NORMAL
+ zh: 总结
- en: Now we are at the end of a successful expedition where we explored locating
  similar images with the help of embeddings. We took this one level further by
  exploring how to scale searches from a few thousand to a few billion documents
diff --git a/totrans/prac-dl-cld_05.yaml b/totrans/prac-dl-cld_05.yaml
index 9a6d9f8..7e4407f 100644
--- a/totrans/prac-dl-cld_05.yaml
+++ b/totrans/prac-dl-cld_05.yaml
@@ -1,8 +1,10 @@
- en: 'Chapter 5\. From Novice to Master Predictor: Maximizing Convolutional Neural
  Network Accuracy'
+ id: totrans-0
  prefs:
  - PREF_H1
  type: TYPE_NORMAL
+ zh: 第5章。从新手到大师级预测者:最大化卷积神经网络准确性
- en: In [Chapter 1](part0003.html#2RHM3-13fa565533764549a6f0ab7f11eed62b), we looked
  at the importance of responsible AI development. One of the aspects we discussed
  was the importance of robustness of our models. Users can trust what we build
@@ -12,8 +14,10 @@
  But it would be dangerous for a self-driving car to misclassify a pedestrian
  as a street lane. The main goal of this chapter is thus a rather important one—to
  build more accurate models.
+ id: totrans-1
  prefs: []
  type: TYPE_NORMAL
+ zh: 在[第1章](part0003.html#2RHM3-13fa565533764549a6f0ab7f11eed62b)中,我们探讨了负责任的人工智能开发的重要性。我们讨论的方面之一是模型稳健性的重要性。用户只有在能够确信他们在日常生活中遇到的人工智能是准确可靠的情况下,才能信任我们构建的内容。显然,应用的背景非常重要。食物分类器偶尔将意大利面误分类为面包可能没问题。但是对于自动驾驶汽车将行人误认为街道车道就很危险。因此,本章的主要目标是构建更准确的模型。
- en: In this chapter, you will develop an intuition for recognizing opportunities
  to improve your model’s accuracy the next time you begin training one. We first
  look at the tools that will ensure that you won’t be going in blind. After that,
@@ -23,138 +27,202 @@
  is all aggregated in a single Jupyter Notebook, along with an actionable checklist
  with interactive examples. It is meant to be highly reusable should you choose
  to incorporate it in your next training script.
+ id: totrans-2
  prefs: []
  type: TYPE_NORMAL
+ zh: 在本章中,您将培养一种直觉,以识别下次开始训练时改进模型准确性的机会。我们首先看一下确保您不会盲目进行的工具。之后,在本章的大部分时间里,我们采取了一种非常实验性的方法,建立基线,隔离要调整的单个参数,并观察它们对模型性能和训练速度的影响。本章中使用的许多代码都汇总在一个Jupyter
    Notebook中,还附有一份带有交互式示例的可操作核对清单。如果您选择将其纳入下次训练脚本中,它应该是非常可重用的。
- en: 'We explore several questions that tend to come up during model training:'
+ id: totrans-3
  prefs: []
  type: TYPE_NORMAL
+ zh: 我们探讨了在模型训练过程中经常出现的几个问题:
- en: I am unsure whether to use transfer learning or building from scratch to train
  my own network. What is the preferred approach for my scenario?
+ id: totrans-4
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
+ zh: 我不确定是使用迁移学习还是从头开始构建来训练自己的网络。对于我的情况,哪种方法更好?
- en: What is the least amount of data that I can supply to my training pipeline
  to get acceptable results?
+ id: totrans-5
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
+ zh: 我可以向训练管道提供的最少数据量是多少,以获得可接受的结果?
- en: I want to ensure that the model is learning the correct thing and not picking
  up spurious correlations. How can I get visibility into that?
+ id: totrans-6
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
+ zh: 我想确保模型正在学习正确的内容,而不是学到虚假相关性。我如何能够看到这一点?
- en: How can I ensure that I (or someone else) will obtain the same results from
  my experiments every single time they are run? In other words, how do I ensure
  reproducibility of my experiments?
+ id: totrans-7
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
+ zh: 如何确保我(或其他人)每次运行实验时都能获得相同的结果?换句话说,如何确保我的实验可重复性?
- en: Does changing the aspect ratio of the input images have an impact on the predictions?
+ id: totrans-8
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
+ zh: 改变输入图像的长宽比是否会影响预测结果?
- en: Does reducing input image size have a significant effect on prediction results?
+ id: totrans-9
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
+ zh: 减少输入图像大小是否会对预测结果产生显著影响?
- en: If I use transfer learning, what percentage of layers should I fine-tune to
  achieve my preferred balance of training time versus accuracy?
+ id: totrans-10
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
+ zh: 如果我使用迁移学习,应该微调多大比例的层才能实现我偏好的训练时间与准确性的平衡?
- en: Alternatively, if I were to train from scratch, how many layers should I have
  in my model?
+ id: totrans-11
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
+ zh: 或者,如果我从头开始训练,我的模型应该有多少层?
- en: What is the appropriate “learning rate” to supply during model training?
+ id: totrans-12
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
+ zh: 在模型训练期间应提供的合适“学习率”是多少?
- en: There are too many things to remember. Is there a way to automate all of this
  work?
+ id: totrans-13
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
+ zh: 有太多事情需要记住。有没有一种方法可以自动化所有这些工作?
- en: We will try to answer these questions one by one in the form of experiments
  on a few datasets. Ideally, you should be able to look at the results, read the
  takeaways, and gain some insight into the concept that the experiment was testing.
  If you’re feeling more adventurous, you can choose to perform the experiments
  yourself using the Jupyter Notebook.
+ id: totrans-14
  prefs: []
  type: TYPE_NORMAL
+ zh: 我们将尝试通过对几个数据集进行实验来逐一回答这些问题。理想情况下,您应该能够查看结果,阅读要点,并对实验所测试的概念有所了解。如果您感到更有冒险精神,您可以选择使用Jupyter
    Notebook自行进行实验。
- en: Tools of the Trade
+ id: totrans-15
  prefs:
  - PREF_H1
  type: TYPE_NORMAL
+ zh: 行业工具
- en: 'One of the main priorities of this chapter is to reduce the code and effort
  involved during experimentation while trying to gain insights into the process
  in order to reach high accuracy. An arsenal of tools exists that can assist us
  in making this journey more pleasant:'
+ id: totrans-16
  prefs: []
  type: TYPE_NORMAL
+ zh: 本章的主要重点之一是在试图获得高准确性的同时,减少实验过程中涉及的代码和工作量。存在一系列工具可以帮助我们使这个过程更加愉快:
- en: TensorFlow Datasets
+ id: totrans-17
  prefs: []
  type: TYPE_NORMAL
+ zh: TensorFlow数据集
- en: Quick and easy access to around 100 datasets in a performant manner. All well-known
  datasets are available starting from the smallest MNIST (a few megabytes) to the
  largest MS COCO, ImageNet, and Open Images (several hundred gigabytes). Additionally,
  medical datasets like the Colorectal Histology and Diabetic Retinopathy are also
  available.
+ id: totrans-18
  prefs: []
  type: TYPE_NORMAL
+ zh: 快速便捷地访问大约100个数据集,性能良好。所有知名数据集都可用,从最小的MNIST(几兆字节)到最大的MS COCO、ImageNet和Open
    Images(数百吉字节)。此外,还提供医学数据集,如结直肠组织学和糖尿病视网膜病变。
- en: TensorBoard
+ id: totrans-19
  prefs: []
  type: TYPE_NORMAL
+ zh: TensorBoard
- en: Close to 20 easy-to-use methods to visualize many aspects of training, including
  visualizing the graph, tracking experiments, and inspecting the images, text,
  and audio data that pass through the network during training.
+ id: totrans-20
  prefs: []
  type: TYPE_NORMAL
+ zh: 近20种易于使用的方法,可视化训练的许多方面,包括可视化图形、跟踪实验,以及检查训练期间通过网络的图像、文本和音频数据。
- en: What-If Tool
+ id: totrans-21
  prefs: []
  type: TYPE_NORMAL
+ zh: What-If工具
- en: Run experiments in parallel on separate models and tease out differences in
  them by comparing their performance on specific data points. Edit individual
  data points to see how that affects the model training.
+ id: totrans-22
  prefs: []
  type: TYPE_NORMAL
+ zh: 在单独的模型上并行运行实验,并通过比较它们在特定数据点上的性能来揭示它们之间的差异。编辑单个数据点以查看它如何影响模型训练。
- en: tf-explain
+ id: totrans-23
  prefs: []
  type: TYPE_NORMAL
+ zh: tf-explain
- en: Analyze decisions made by the network to identify bias and inaccuracies in the
  dataset.
Additionally, use heatmaps to visualize what parts of the image the network activated on. + id: totrans-24 prefs: [] type: TYPE_NORMAL + zh: 分析网络所做的决策,以识别数据集中的偏见和不准确性。此外,使用热图可视化网络在图像的哪些部分激活。 - en: Keras Tuner + id: totrans-25 prefs: [] type: TYPE_NORMAL + zh: Keras调谐器 - en: A library built for `tf.keras` that enables automatic tuning of hyperparameters in TensorFlow 2.0. + id: totrans-26 prefs: [] type: TYPE_NORMAL + zh: 一个为`tf.keras`构建的库,可以在TensorFlow 2.0中自动调整超参数。 - en: AutoKeras + id: totrans-27 prefs: [] type: TYPE_NORMAL + zh: AutoKeras - en: Automates Neural Architecture Search (NAS) across different tasks like image, text, and audio classification and image detection. + id: totrans-28 prefs: [] type: TYPE_NORMAL + zh: 自动化神经架构搜索(NAS),跨不同任务进行图像、文本和音频分类以及图像检测。 - en: AutoAugment + id: totrans-29 prefs: [] type: TYPE_NORMAL + zh: 自动增强 - en: Utilizes reinforcement learning to improve the amount and diversity of data in an existing training dataset, thereby increasing accuracy. + id: totrans-30 prefs: [] type: TYPE_NORMAL + zh: 利用强化学习来改进现有训练数据集中的数据量和多样性,从而提高准确性。 - en: Let’s now explore these tools in greater detail. + id: totrans-31 prefs: [] type: TYPE_NORMAL + zh: 现在让我们更详细地探索这些工具。 - en: TensorFlow Datasets + id: totrans-32 prefs: - PREF_H2 type: TYPE_NORMAL + zh: TensorFlow数据集 - en: TensorFlow Datasets is a collection of nearly 100 ready-to-use datasets that can quickly help build high-performance input data pipelines for training TensorFlow models. Instead of downloading and manipulating data sets manually and then figuring @@ -164,42 +232,62 @@ down into training, validation, and testing is also a matter of a single line of code. We will additionally be exploring TensorFlow Datasets from a performance point of view in the next chapter. + id: totrans-33 prefs: [] type: TYPE_NORMAL + zh: TensorFlow数据集是一个近100个准备就绪数据集的集合,可以快速帮助构建用于训练TensorFlow模型的高性能输入数据管道。TensorFlow数据集标准化了数据格式,使得很容易用另一个数据集替换一个数据集,通常只需更改一行代码。正如您将在后面看到的,将数据集分解为训练、验证和测试也只需一行代码。我们还将在下一章从性能的角度探索TensorFlow数据集。 - en: 'You can list all of the available datasets by using the following command (in the interest of conserving space, only a small subset of the full output is shown in this example):' + id: totrans-34 prefs: [] type: TYPE_NORMAL + zh: 您可以使用以下命令列出所有可用数据集(为了节省空间,此示例中仅显示了完整输出的一小部分): - en: '[PRE0]' + id: totrans-35 prefs: [] type: TYPE_PRE + zh: '[PRE0]' - en: '[PRE1]' + id: totrans-36 prefs: [] type: TYPE_PRE + zh: '[PRE1]' - en: 'Let’s see how simple it is to load a dataset. We will plug this into a full working pipeline later:' + id: totrans-37 prefs: [] type: TYPE_NORMAL + zh: 让我们看看加载数据集有多简单。稍后我们将把这个插入到一个完整的工作流程中: - en: '[PRE2]' + id: totrans-38 prefs: [] type: TYPE_PRE + zh: '[PRE2]' - en: Tip + id: totrans-39 prefs: - PREF_H6 type: TYPE_NORMAL + zh: 提示 - en: '`tfds` generates a lot of progress bars, and they take up a lot of screen space—using `tfds.disable_progress_bar()` might be a good idea.' + id: totrans-40 prefs: [] type: TYPE_NORMAL + zh: '`tfds`生成了很多进度条,它们占用了很多屏幕空间——使用`tfds.disable_progress_bar()`可能是一个好主意。' - en: TensorBoard + id: totrans-41 prefs: - PREF_H2 type: TYPE_NORMAL + zh: TensorBoard - en: TensorBoard is a one-stop-shop for all of your visualization needs, offering close to 20 tools to understand, inspect, and improve your model’s training. + id: totrans-42 prefs: [] type: TYPE_NORMAL + zh: TensorBoard是您可视化需求的一站式服务,提供近20种工具来理解、检查和改进模型的训练。 - en: Traditionally, to track experiment progress, we save the values of loss and accuracy per epoch and then, when done, plot it using `matplotlib`. 
The downside with that approach is that it’s not real time. Our usual options are to watch
@@ -210,103 +298,155 @@
  to assist in understanding the progression of training. Another benefit it offers
  is the ability to compare our current experiment’s progress with the previous
  experiment, so we can see how a change in parameters affected our overall accuracy.
+ id: totrans-43
  prefs: []
  type: TYPE_NORMAL
+ zh: 传统上,为了跟踪实验进展,我们保存每个时期的损失和准确率值,然后在训练完成时使用`matplotlib`绘制。这种方法的缺点是它不是实时的。我们通常的选择是以文本形式观察训练进度。此外,在训练完成后,我们还需要编写额外的代码在`matplotlib`中绘制图表。TensorBoard通过提供实时仪表板([图5-1](part0007.html#tensorboard_default_view_showcasing_real))来解决这些以及更多紧迫的问题,可视化所有日志(如训练/验证准确率和损失),帮助我们理解训练的进展。它提供的另一个好处是能够将当前实验的进展与上一次实验进行比较,这样我们就可以看到参数的变化如何影响整体准确率。
- en: '![TensorBoard default view showcasing real-time training metrics (the lightly
  shaded lines represent the accuracy from the previous run)](../images/00226.jpeg)'
+ id: totrans-44
  prefs: []
  type: TYPE_IMG
+ zh: '![TensorBoard默认视图展示实时训练指标(浅色线表示上一次运行的准确性)](../images/00226.jpeg)'
- en: Figure 5-1\. TensorBoard default view showcasing real-time training metrics
  (the lightly shaded lines represent the accuracy from the previous run)
+ id: totrans-45
  prefs:
  - PREF_H6
  type: TYPE_NORMAL
+ zh: 图5-1。TensorBoard默认视图展示实时训练指标(浅色线表示上一次运行的准确性)
- en: 'To enable TensorBoard to visualize our training and models, we need to log
  information about our training with the help of summary writer:'
+ id: totrans-46
  prefs: []
  type: TYPE_NORMAL
+ zh: 为了使TensorBoard能够可视化我们的训练和模型,我们需要使用摘要写入器记录有关我们的训练的信息:
- en: '[PRE3]'
+ id: totrans-47
  prefs: []
  type: TYPE_PRE
+ zh: '[PRE3]'
- en: 'To follow our training in real time, we need to load TensorBoard before the
  model training begins. We can load TensorBoard by using the following commands:'
+ id: totrans-48
  prefs: []
  type: TYPE_NORMAL
+ zh: 要实时跟踪我们的训练,我们需要在模型训练开始之前加载TensorBoard。我们可以使用以下命令加载TensorBoard:
- en: '[PRE4]'
+ id: totrans-49
  prefs: []
  type: TYPE_PRE
+ zh: '[PRE4]'
- en: As more TensorFlow components need a visual user interface, they reuse TensorBoard
  by becoming embeddable plug-ins within it. You’ll notice the Inactive drop-down
  menu on TensorBoard; that’s where you can see all the different profiles or tools
  that TensorFlow offers. [Table 5-1](part0007.html#plugins_for_tensorboarddot) showcases
  a handful of the wide variety of tools available.
+ id: totrans-50
  prefs: []
  type: TYPE_NORMAL
+ zh: 随着更多TensorFlow组件需要可视化用户界面,它们会以可嵌入插件的形式复用TensorBoard。您会注意到TensorBoard上的Inactive下拉菜单;在那里您可以看到TensorFlow提供的所有不同配置文件或工具。[表5-1](part0007.html#plugins_for_tensorboarddot)展示了各种可用工具中的一小部分。
- en: Table 5-1\. Plugins for TensorBoard
+ id: totrans-51
  prefs: []
  type: TYPE_NORMAL
+ zh: 表5-1\. TensorBoard的插件
- en: '| **TensorBoard plug-in name** | **Description** |'
+ id: totrans-52
  prefs: []
  type: TYPE_TB
+ zh: '| **TensorBoard插件名称** | **描述** |'
- en: '| --- | --- |'
+ id: totrans-53
  prefs: []
  type: TYPE_TB
+ zh: '| --- | --- |'
- en: '| Default Scalar | Visualize scalar values such as classification accuracy. |'
+ id: totrans-54
  prefs: []
  type: TYPE_TB
+ zh: '| 默认标量 | 可视化标量值,如分类准确度。 |'
- en: '| Custom Scalar | Visualize user-defined custom metrics. For example, different
  weights for different classes, which might not be a readily available metric. |'
+ id: totrans-55
  prefs: []
  type: TYPE_TB
+ zh: '| 自定义标量 | 可视化用户定义的自定义指标。例如,不同类别的不同权重,这可能不是一个现成的指标。 |'
- en: '| Image | View the output from each layer by clicking the Images tab. |'
+ id: totrans-56
  prefs: []
  type: TYPE_TB
+ zh: '| 图像 | 通过点击图像选项卡查看每个层的输出。 |'
- en: '| Audio | Visualize audio data.
|'
+ id: totrans-57
  prefs: []
  type: TYPE_TB
+ zh: '| 音频 | 可视化音频数据。 |'
- en: '| Debugging tools | Allows debugging visually and setting conditional breakpoints
  (e.g., tensor contains NaN or Infinity). |'
+ id: totrans-58
  prefs: []
  type: TYPE_TB
+ zh: '| 调试工具 | 允许可视化调试并设置条件断点(例如,张量包含NaN或无穷大)。 |'
- en: '| Graphs | Shows the model architecture graphically. |'
+ id: totrans-59
  prefs: []
  type: TYPE_TB
+ zh: '| 图表 | 以图形方式显示模型架构。 |'
- en: '| Histograms | Show the changes in the weight distribution in the layers of
  a model as the training progresses. This is especially useful for checking the
  effect of compressing a model with quantization. |'
+ id: totrans-60
  prefs: []
  type: TYPE_TB
+ zh: '| 直方图 | 显示模型各层的权重分布随训练进展的变化。这对于检查使用量化压缩模型的效果特别有用。 |'
- en: '| Projector | Visualize projections using t-SNE, PCA, and others. |'
+ id: totrans-61
  prefs: []
  type: TYPE_TB
+ zh: '| Projector | 使用t-SNE、PCA等可视化投影。 |'
- en: '| Text | Visualize text data. |'
+ id: totrans-62
  prefs: []
  type: TYPE_TB
+ zh: '| 文本 | 可视化文本数据。 |'
- en: '| PR curves | Plot precision-recall curves. |'
+ id: totrans-63
  prefs: []
  type: TYPE_TB
+ zh: '| PR 曲线 | 绘制精确率-召回率曲线。 |'
- en: '| Profile | Benchmark speed of all operations and layers in a model. |'
+ id: totrans-64
  prefs: []
  type: TYPE_TB
+ zh: '| 性能分析 | 对模型中所有操作和层的速度进行基准测试。 |'
- en: '| Beholder | Visualize the gradients and activations of a model in real time
  during training. It allows seeing them filter by filter, and allows them to be
  exported as images or even as a video. |'
+ id: totrans-65
  prefs: []
  type: TYPE_TB
+ zh: '| Beholder | 实时训练过程中可视化模型的梯度和激活。它允许按滤波器查看它们,并允许将它们导出为图像甚至视频。 |'
- en: '| What-If Tool | For investigating the model by slicing and dicing the data
  and checking its performance. Especially helpful for discovering bias. |'
+ id: totrans-66
  prefs: []
  type: TYPE_TB
+ zh: '| What-If 工具 | 通过切片和切块数据以及检查其性能来调查模型。特别有助于发现偏见。 |'
- en: '| HParams | Find out which params and at what values are the most important;
  allows logging of the entire parameter server (discussed in detail in this chapter). |'
+ id: totrans-67
  prefs: []
  type: TYPE_TB
+ zh: '| HParams | 查找哪些参数以及以什么值最重要,允许记录整个参数服务器(在本章中详细讨论)。 |'
- en: '| Mesh | Visualize 3D data (including point clouds). |'
+ id: totrans-68
  prefs: []
  type: TYPE_TB
+ zh: '| 网格 | 可视化3D数据(包括点云)。 |'
- en: It should be noted that TensorBoard is not TensorFlow specific, and can be used
  with other frameworks like PyTorch, scikit-learn, and more, depending on the plugin
  used. To make a plugin work, we need to write the specific metadata that we want
@@ -316,21 +456,31 @@
  calling TensorBoard, we need to write the metadata like the feature embeddings
  of our image, so that TensorFlow Projector can use it to do clustering, as demonstrated
  in [Figure 5-2](part0007.html#tensorflow_embedding_projector_showcasin).
+ id: totrans-69
  prefs: []
  type: TYPE_NORMAL
+ zh: 值得注意的是,TensorBoard并不是特定于TensorFlow的,可以与其他框架如PyTorch、scikit-learn等一起使用,具体取决于所使用的插件。要使插件工作,我们需要编写要可视化的特定元数据。例如,TensorBoard将TensorFlow
    Projector工具嵌入其中,以使用t-SNE对图像、文本或音频进行聚类(我们在[第4章](part0006.html#5N3C3-13fa565533764549a6f0ab7f11eed62b)中详细讨论过)。除了调用TensorBoard外,我们还需要编写像图像的特征嵌入这样的元数据,以便TensorFlow
    Projector可以使用它来进行聚类,如[图5-2](part0007.html#tensorflow_embedding_projector_showcasin)中所示。
- en: '![TensorFlow Embedding Projector showcasing data in clusters, can be run as
  a TensorBoard plug-in](../images/00190.jpeg)'
+ id: totrans-70
  prefs: []
  type: TYPE_IMG
+ zh: '![TensorFlow嵌入投影器展示数据聚类,可以作为TensorBoard插件运行](../images/00190.jpeg)'
- en: Figure 5-2\.
TensorFlow Embedding Projector showcasing data in clusters (can
  be run as a TensorBoard plugin)
+ id: totrans-71
  prefs:
  - PREF_H6
  type: TYPE_NORMAL
+ zh: 图5-2\. TensorFlow嵌入投影器展示数据聚类(可以作为TensorBoard插件运行)
- en: What-If Tool
+ id: totrans-72
  prefs:
  - PREF_H2
  type: TYPE_NORMAL
+ zh: What-If工具
- en: What if we could inspect our AI model’s predictions with the help of visualizations?
  What if we could find the best threshold for our model to maximize precision and
  recall? What if we could slice and dice the data along with the predictions our
@@ -341,143 +491,216 @@
  and [Figure 5-4](part0007.html#pr_curves_in_the_performance_and_fairnes)) from
  Google’s People + AI Research (PAIR) initiative helps open up the black box of
  AI models to enable model and data explainability.
+ id: totrans-73
  prefs: []
  type: TYPE_NORMAL
+ zh: 如果我们能够通过可视化来检查我们的AI模型的预测结果会怎么样?如果我们能够找到最佳阈值来最大化精确度和召回率会怎么样?如果我们能够通过切片和切块数据以及我们的模型所做的预测来看到它擅长的地方以及有哪些改进机会会怎么样?如果我们能够比较两个模型以找出哪个确实更好会怎么样?如果我们能够通过在浏览器中点击几下就能做到所有这些以及更多呢?听起来肯定很吸引人!Google的People
    + AI Research(PAIR)倡议中的What-If工具([图5-3](part0007.html#what-if_toolapostrophes_datapoint_editor)和[图5-4](part0007.html#pr_curves_in_the_performance_and_fairnes))帮助打开AI模型的黑匣子,实现模型和数据的可解释性。
- en: '![What-If Tool’s datapoint editor makes it possible to filter and visualize
  data according to annotations of the dataset and labels from the classifier](../images/00152.jpeg)'
+ id: totrans-74
  prefs: []
  type: TYPE_IMG
+ zh: '![What-If工具的数据点编辑器使根据数据集的注释和分类器的标签对数据进行过滤和可视化成为可能](../images/00152.jpeg)'
- en: Figure 5-3\. What-If Tool’s datapoint editor makes it possible to filter and
  visualize data according to annotations of the dataset and labels from the classifier
+ id: totrans-75
  prefs:
  - PREF_H6
  type: TYPE_NORMAL
+ zh: 图5-3\. What-If工具的数据点编辑器使根据数据集的注释和分类器的标签对数据进行过滤和可视化成为可能
- en: '![PR curves in the Performance and Fairness section of the What-If Tool help
  to interactively select the optimal threshold to maximize precision and recall](../images/00018.jpeg)'
+ id: totrans-76
  prefs: []
  type: TYPE_IMG
+ zh: '![在What-If工具的性能和公平性部分中的PR曲线帮助交互式选择最佳阈值以最大化精确度和召回率](../images/00018.jpeg)'
- en: Figure 5-4\. PR curves in the Performance and Fairness section of the What-If
  Tool help to interactively select the optimal threshold to maximize precision
  and recall
+ id: totrans-77
  prefs:
  - PREF_H6
  type: TYPE_NORMAL
+ zh: 图5-4。在What-If工具的性能和公平性部分中的PR曲线帮助交互式地选择最佳阈值以最大化精度和召回率
- en: 'To use the What-If Tool, we need the dataset and a model. As we just saw, TensorFlow
  Datasets makes downloading and loading the data (in the `tfrecord` format) relatively
  easy. All we need to do is to locate the data file. Additionally, we want to save
  the model in the same directory:'
+ id: totrans-78
  prefs: []
  type: TYPE_NORMAL
+ zh: 要使用What-If工具,我们需要数据集和一个模型。正如我们刚才看到的,TensorFlow Datasets使得下载和加载数据(以`tfrecord`格式)相对容易。我们只需要定位数据文件即可。此外,我们希望将模型保存在同一目录中:
- en: '[PRE5]'
+ id: totrans-79
  prefs: []
  type: TYPE_PRE
+ zh: '[PRE5]'
- en: It’s best to perform the following lines of code in a local system rather than
  a Colab notebook because the integration between Colab and the What-If Tool is
  still evolving.
+ id: totrans-80 prefs: [] type: TYPE_NORMAL + zh: 最好在本地系统中执行以下代码行,而不是在Colab笔记本中,因为Colab和What-If工具之间的集成仍在不断发展。 - en: 'Let’s start TensorBoard:' + id: totrans-81 prefs: [] type: TYPE_NORMAL + zh: 让我们开始TensorBoard: - en: '[PRE6]' + id: totrans-82 prefs: [] type: TYPE_PRE + zh: '[PRE6]' - en: 'Now, in a new terminal, let’s make a directory for all of our What-If Tool experiments:' + id: totrans-83 prefs: [] type: TYPE_NORMAL + zh: 现在,在一个新的终端中,让我们为所有的What-If工具实验创建一个目录: - en: '[PRE7]' + id: totrans-84 prefs: [] type: TYPE_PRE + zh: '[PRE7]' - en: 'Move the trained model and TFRecord data here. The overall directory structure looks something like this:' + id: totrans-85 prefs: [] type: TYPE_NORMAL + zh: 将训练好的模型和TFRecord数据移动到这里。整体目录结构看起来像这样: - en: '[PRE8]' + id: totrans-86 prefs: [] type: TYPE_PRE + zh: '[PRE8]' - en: 'We’ll serve the model using Docker within the newly created directory:' + id: totrans-87 prefs: [] type: TYPE_NORMAL + zh: 我们将在新创建的目录中使用Docker来提供模型: - en: '[PRE9]' + id: totrans-88 prefs: [] type: TYPE_PRE + zh: '[PRE9]' - en: 'A word of caution: the port must be `8500` and all parameters must be spelled exactly as shown in the preceding example.' + id: totrans-89 prefs: [] type: TYPE_NORMAL + zh: 一句警告:端口必须是`8500`,所有参数必须与前面的示例中显示的完全相同。 - en: Next, at the far right, click the settings button (the gray gear icon) and add the values listed in [Table 5-2](part0007.html#configurations_for_the_what-if_tool). + id: totrans-90 prefs: [] type: TYPE_NORMAL + zh: 接下来,在最右侧,点击设置按钮(灰色的齿轮图标),并添加[表5-2](part0007.html#configurations_for_the_what-if_tool)中列出的值。 - en: Table 5-2\. Configurations for the What-If Tool + id: totrans-91 prefs: [] type: TYPE_NORMAL + zh: 表5-2。What-If工具的配置 - en: '| **Parameter** | **Value** |' + id: totrans-92 prefs: [] type: TYPE_TB + zh: '| **参数** | **值** |' - en: '| --- | --- |' + id: totrans-93 prefs: [] type: TYPE_TB + zh: '| --- | --- |' - en: '| Inference address | `ip_addr:8500` |' + id: totrans-94 prefs: [] type: TYPE_TB + zh: '| 推断地址 | `ip_addr:8500` |' - en: '| Model name | `/models/colo` |' + id: totrans-95 prefs: [] type: TYPE_TB + zh: '| 模型名称 | `/models/colo` |' - en: '| Model type | Classification |' + id: totrans-96 prefs: [] type: TYPE_TB + zh: '| 模型类型 | 分类 |' - en: '| Path to examples | */home/{*`your_username`*}/what_if_stuff/colo/models/colo.tfrec* (Note: this must be an absolute path) |' + id: totrans-97 prefs: [] type: TYPE_TB + zh: '| 示例路径 | */home/{*`your_username`*}/what_if_stuff/colo/models/colo.tfrec*(注意:这必须是绝对路径)|' - en: We can now open the What-If Tool in the browser within TensorBoard, as depicted in [Figure 5-5](part0007.html#setup_window_for_the_what-if_tool). + id: totrans-98 prefs: [] type: TYPE_NORMAL + zh: 我们现在可以在TensorBoard中的浏览器中打开What-If工具,如[图5-5](part0007.html#setup_window_for_the_what-if_tool)所示。 - en: '![Setup window for the What-If Tool](../images/00067.jpeg)' + id: totrans-99 prefs: [] type: TYPE_IMG + zh: '![What-If工具的设置窗口](../images/00067.jpeg)' - en: Figure 5-5\. Setup window for the What-If Tool + id: totrans-100 prefs: - PREF_H6 type: TYPE_NORMAL + zh: 图5-5。What-If工具的设置窗口 - en: The What-If Tool can also be used to visualize datasets according to different bins, as shown in [Figure 5-6](part0007.html#the_what-if_tool_enables_using_multiple). We can also use the tool to determine the better performing model out of multiple models on the same dataset using the `set_compare_estimator_and_feature_spec` function. 
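+ id: totrans-101
  prefs: []
  type: TYPE_NORMAL
+ zh: What-If工具还可以根据不同的分组对数据集进行可视化,如[图5-6](part0007.html#the_what-if_tool_enables_using_multiple)所示。我们还可以使用该工具通过`set_compare_estimator_and_feature_spec`函数确定在同一数据集上多个模型中表现更好的模型。
+ - en: 'As a rough sketch of that comparison setup (our own illustration, not the
+   original listing; `estimator_a`, `estimator_b`, `feature_spec`, and the list
+   of `tf.train.Example` records in `examples` are assumed to be defined elsewhere):'
+   prefs: []
+   type: TYPE_NORMAL
+ - en: |
+     # Hypothetical sketch: compare two estimators inside the What-If Tool widget.
+     from witwidget.notebook.visualization import WitConfigBuilder, WitWidget
+
+     config_builder = (
+         WitConfigBuilder(examples)  # examples: list of tf.train.Example
+         .set_estimator_and_feature_spec(estimator_a, feature_spec)
+         .set_compare_estimator_and_feature_spec(estimator_b, feature_spec)
+     )
+     WitWidget(config_builder, height=800)
+   prefs: []
+   type: TYPE_PRE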
- en: '[PRE10]'
+ id: totrans-102
  prefs: []
  type: TYPE_PRE
+ zh: '[PRE10]'
- en: '![The What-If tool enables using multiple metrics, data visualization, and
  many more things under the sun](../images/00025.jpeg)'
+ id: totrans-103
  prefs: []
  type: TYPE_IMG
+ zh: '![What-If工具可以使用多个指标、数据可视化等等](../images/00025.jpeg)'
- en: Figure 5-6\. The What-If tool enables using multiple metrics, data visualization,
  and many more things under the sun
+ id: totrans-104
  prefs:
  - PREF_H6
  type: TYPE_NORMAL
+ zh: 图5-6。What-If工具可以使用多个指标、数据可视化等等
- en: Now, we can load TensorBoard, and then, in the Visualize section, choose the
  model we want to compare, as shown in [Figure 5-7](part0007.html#choose_the_model_to_compare_using_the_wh).
  This tool has many features to explore!
+ id: totrans-105
  prefs: []
  type: TYPE_NORMAL
+ zh: 现在,我们可以加载TensorBoard,然后在可视化部分选择我们想要比较的模型,如[图5-7](part0007.html#choose_the_model_to_compare_using_the_wh)所示。这个工具有很多功能可以探索!
- en: '![Choose the model to compare using the What-If Tool](../images/00313.jpeg)'
+ id: totrans-106
  prefs: []
  type: TYPE_IMG
+ zh: '![使用What-If工具选择要比较的模型](../images/00313.jpeg)'
- en: Figure 5-7\. Choose the model to compare using the What-If Tool
+ id: totrans-107
  prefs:
  - PREF_H6
  type: TYPE_NORMAL
+ zh: 图5-7。使用What-If工具选择要比较的模型
- en: tf-explain
+ id: totrans-108
  prefs:
  - PREF_H2
  type: TYPE_NORMAL
+ zh: tf-explain
- en: Deep learning models have traditionally been black boxes, and up until now,
  we usually learn about their performance by watching the class probabilities and
  validation accuracies. To make these models more interpretable and explainable,
@@ -490,8 +713,10 @@
  not too robust when the classifier is put in the real world (and potentially
  dangerous!). Heatmaps can be especially useful to explore such bias, as often
  spurious correlations can seep in if the dataset is not carefully curated.
+ id: totrans-109
  prefs: []
  type: TYPE_NORMAL
+ zh: 传统上,深度学习模型一直是黑匣子,直到现在,我们通常通过观察类别概率和验证准确率来了解它们的性能。为了使这些模型更易于解释和说明,热图应运而生。热图以更高的强度显示导致预测的图像区域,帮助我们可视化模型的学习过程。例如,经常在雪地中出现的动物可能会得到高准确率的预测,但如果数据集中只有以雪为背景的这种动物,模型可能只会把雪当作区分模式,而不是关注动物本身。这样的数据集存在偏见,使得分类器放到现实世界中时预测不够稳健(甚至可能很危险!)。热图对于探索这种偏见特别有用,因为如果数据集没有经过仔细筛选,虚假相关性往往会渗入其中。
- en: '`tf-explain` (by Raphael Meudec) helps understand the results and inner workings
  of a neural network with the help of such visualizations, removing the veil on
  bias in datasets. We can add multiple types of callbacks while training or use
@@ -500,9 +725,11 @@
  with a model into tf-explain’s functions. You must supply the object ID because
  `tf.explain` needs to know what is activated for that particular class. A few
  different visualization approaches are available with `tf.explain`:'
+ id: totrans-110
  prefs: []
  type: TYPE_NORMAL
- en: Grad CAM
+ id: totrans-111
  prefs: []
  type: TYPE_NORMAL
- en: The Gradient-weighted Class Activation Mapping (Grad CAM) visualizes how parts
@@ -511,27 +738,34 @@
  is generated based on the gradients of the object ID from the last convolutional
  layer. Grad CAM is largely a broad-spectrum heatmap generator given that it is
  robust to noise and can be used on an array of CNN models.
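+ id: totrans-112
  prefs: []
  type: TYPE_NORMAL
+ - en: 'A minimal sketch of generating a Grad CAM heatmap with `tf-explain` (our
+   own illustration, assuming the library''s core API; the model, image batch, and
+   class index here are placeholders):'
+   prefs: []
+   type: TYPE_NORMAL
+ - en: |
+     import numpy as np
+     import tensorflow as tf
+     from tf_explain.core.grad_cam import GradCAM
+
+     model = tf.keras.applications.MobileNet()  # any Keras CNN works
+     images = np.random.random((1, 224, 224, 3)).astype("float32")  # stand-in batch
+
+     explainer = GradCAM()
+     # class_index 208 is just an example ImageNet class (object) ID
+     grid = explainer.explain(validation_data=(images, None), model=model,
+                              class_index=208)
+     explainer.save(grid, output_dir=".", output_name="grad_cam.png")
+   prefs: []
+   type: TYPE_PRE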
- en: Occlusion Sensitivity
+ id: totrans-113
  prefs: []
  type: TYPE_NORMAL
- en: Occludes a part of the image (using a small square patch placed randomly) to
  establish how robust the network is. If the prediction is still correct on average,
  the network is robust. The area in the image that is the warmest (i.e., red) has
  the most effect on the prediction when occluded.
+ id: totrans-114
  prefs: []
  type: TYPE_NORMAL
- en: Activations
+ id: totrans-115
  prefs: []
  type: TYPE_NORMAL
- en: Visualizes the activations for the convolutional layers.
+ id: totrans-116
  prefs: []
  type: TYPE_NORMAL
- en: '![Visualizations on images using MobileNet and tf-explain](../images/00272.jpeg)'
+ id: totrans-117
  prefs: []
  type: TYPE_IMG
- en: Figure 5-8\. Visualizations on images using MobileNet and tf-explain
+ id: totrans-118
  prefs:
  - PREF_H6
  type: TYPE_NORMAL
@@ -540,21 +774,27 @@
  and running tf-explain with Grad CAM and joining them together, we can build
  a detailed understanding of how these neural networks would react to moving camera
  angles.
+ id: totrans-119
  prefs: []
  type: TYPE_NORMAL
- en: '[PRE11]'
+ id: totrans-120
  prefs: []
  type: TYPE_PRE
+ zh: '[PRE11]'
- en: Common Techniques for Machine Learning Experimentation
+ id: totrans-121
  prefs:
  - PREF_H1
  type: TYPE_NORMAL
- en: The first few chapters focused on training the model. The following sections,
  however, contain a few more things to keep in the back of your mind while running
  your training experiments.
+ id: totrans-122
  prefs: []
  type: TYPE_NORMAL
- en: Data Inspection
+ id: totrans-123
  prefs:
  - PREF_H2
  type: TYPE_NORMAL
@@ -569,16 +809,20 @@
  no eyeglasses, thus uncovering bias in the data due to an unbalanced dataset.
  This can be solved by modifying the weights of the metrics accordingly, through
  the tool.
+ id: totrans-124
  prefs: []
  type: TYPE_NORMAL
- en: '![Slicing and dividing the data based on predictions and real categories](../images/00231.jpeg)'
+ id: totrans-125
  prefs: []
  type: TYPE_IMG
- en: Figure 5-9\. Slicing and dividing the data based on predictions and real categories
+ id: totrans-126
  prefs:
  - PREF_H6
  type: TYPE_NORMAL
- en: 'Breaking the Data: Train, Validation, Test'
+ id: totrans-127
  prefs:
  - PREF_H2
  type: TYPE_NORMAL
@@ -588,26 +832,35 @@
  the dataset into these three parts. Some datasets already come with three default
  splits. Alternatively, the data can be split by percentages. The following code
  showcases using a default split:'
+ id: totrans-128
  prefs: []
  type: TYPE_NORMAL
- en: '[PRE12]'
+ id: totrans-129
  prefs: []
  type: TYPE_PRE
+ zh: '[PRE12]'
- en: 'The cats-and-dogs dataset in `tfds` has only the train split predefined. Similar
  to this, some datasets in TensorFlow Datasets do not have a `validation` split.
  For those datasets, we take a small percentage of samples from the predefined
  `training` set and treat it as the `validation` set.
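To top it all off, splitting
  the dataset using the `weighted_splits` takes care of randomizing and shuffling
  data between the splits:'
+ id: totrans-130
  prefs: []
  type: TYPE_NORMAL
+ zh: 在`tfds`中的猫狗数据集只有预定义的训练集分割。与此类似,TensorFlow数据集中的一些数据集没有`validation`分割。对于这些数据集,我们从预定义的`training`集中取一小部分样本,并将其视为`validation`集。总而言之,使用`weighted_splits`来拆分数据集可以处理在拆分之间随机化和洗牌数据:
- en: '[PRE13]'
+ id: totrans-131
  prefs: []
  type: TYPE_PRE
+ zh: '[PRE13]'
+ - en: 'The same three-way split can also be sketched with percentage slices (a rough
+   equivalent added here for illustration; the exact split API varies across `tfds`
+   versions):'
+   prefs: []
+   type: TYPE_NORMAL
+ - en: |
+     import tensorflow_datasets as tfds
+
+     # Carve 80/10/10 train/validation/test splits out of the predefined
+     # "train" split of the cats-vs-dogs dataset.
+     (train_ds, val_ds, test_ds), info = tfds.load(
+         "cats_vs_dogs",
+         split=["train[:80%]", "train[80%:90%]", "train[90%:]"],
+         as_supervised=True,
+         with_info=True,
+     )
+     print(info.splits["train"].num_examples)  # total examples before splitting
+   prefs: []
+   type: TYPE_PRE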
- en: Early Stopping
+ id: totrans-132
  prefs:
  - PREF_H2
  type: TYPE_NORMAL
+ zh: 早停
- en: 'Early stopping helps to avoid overtraining of the network by keeping a lookout
  for the number of epochs that show limited improvement. Assuming a model is set
  to train for 1,000 epochs and reaches 90% accuracy at the 10^(th) epoch and stops
@@ -617,15 +870,21 @@
  to train. In other words, early stopping decides the point at which the training
  would no longer be useful and stops training. We can change the metric using the
  `monitor` parameter and add early stopping to our list of callbacks for the model:'
+ id: totrans-133
  prefs: []
  type: TYPE_NORMAL
+ zh: 早停通过监视改进有限的时期数量,帮助避免网络的过度训练。假设一个模型被设置为训练1,000个时期,在第10个时期达到90%的准确率,并在接下来的10个时期内不再有进一步的改进,那么继续训练可能是一种资源浪费。如果时期数超过了一个名为`patience`的预定义阈值,即使可能还有更多的时期可以训练,训练也会停止。换句话说,早停决定了训练不再有用的时刻,并停止训练。我们可以使用`monitor`参数更改指标,并将早停添加到模型的回调列表中:
- en: '[PRE14]'
+ id: totrans-134
  prefs: []
  type: TYPE_PRE
+ zh: '[PRE14]'
- en: Reproducible Experiments
+ id: totrans-135
  prefs:
  - PREF_H2
  type: TYPE_NORMAL
+ zh: 可重现的实验
- en: 'Train a network once. Then, train it again, without changing any code or parameters.
  You might notice that the accuracies in two subsequent runs came out slightly
  different, even if no change was made in code. This is due to random variables.
@@ -635,520 +894,787 @@
  made reproducible by initializing a seed and that’s exactly what we will do.
  Various frameworks have their own ways of setting a random seed, some of which
  are shown here:'
+ id: totrans-136
  prefs: []
  type: TYPE_NORMAL
+ zh: 训练一次网络。然后,再次训练,而不更改任何代码或参数。您可能会注意到,即使在代码中没有进行任何更改,两次连续运行的准确性也略有不同。这是由于随机变量造成的。为了使实验在不同运行中可重现,我们希望控制这种随机化。模型权重的初始化、数据的随机洗牌等都利用了随机化算法。我们知道,通过初始化种子,可以使随机数生成器可重现,这正是我们要做的。各种框架都有设置随机种子的方法,其中一些如下所示:
- en: '[PRE15]'
+ id: totrans-137
  prefs: []
  type: TYPE_PRE
+ zh: '[PRE15]'
- en: Note
+ id: totrans-138
  prefs:
  - PREF_H6
  type: TYPE_NORMAL
+ zh: 注意
- en: It is necessary to set a seed in all the frameworks and subframeworks that are
  being used, as seeds are not transferable between frameworks.
+ id: totrans-139
  prefs: []
  type: TYPE_NORMAL
+ zh: 在所有正在使用的框架和子框架中设置种子是必要的,因为种子在框架之间不可转移。
- en: End-to-End Deep Learning Example Pipeline
+ id: totrans-140
  prefs:
  - PREF_H1
  type: TYPE_NORMAL
+ zh: 端到端深度学习示例管道
- en: Let’s combine several tools and build a skeletal backbone, which will serve
  as our pipeline in which we will add and remove parameters, layers, functionality,
  and various other add-ons to really understand what is happening. Following the
  code on the book’s GitHub website (see [*http://PracticalDeepLearning.ai*](http://PracticalDeepLearning.ai)),
  you can interactively run this code for more than 100 datasets in your browser
  with Colab. Additionally, you can modify it for most classification tasks.
+ id: totrans-141 prefs: [] type: TYPE_NORMAL + zh: 让我们结合几个工具,构建一个骨干框架,这将作为我们的管道,在其中我们将添加和删除参数、层、功能和各种其他附加组件,以真正理解发生了什么。按照书籍GitHub网站上的代码(参见[*http://PracticalDeepLearning.ai*](http://PracticalDeepLearning.ai)),您可以在Colab中的浏览器中交互式运行此代码,针对100多个数据集进行修改。此外,您可以将其修改为大多数分类任务。 - en: Basic Transfer Learning Pipeline + id: totrans-142 prefs: - PREF_H2 type: TYPE_NORMAL + zh: 基本迁移学习管道 - en: First, let’s build this end-to-end example for transfer learning. + id: totrans-143 prefs: [] type: TYPE_NORMAL + zh: 首先,让我们为迁移学习构建这个端到端示例。 - en: '[PRE16]' + id: totrans-144 prefs: [] type: TYPE_PRE + zh: '[PRE16]' - en: Basic Custom Network Pipeline + id: totrans-145 prefs: - PREF_H2 type: TYPE_NORMAL + zh: 基本自定义网络管道 - en: 'Apart from transfer learning on state-of-the-art models, we can also experiment and develop better intuitions by building our own custom network. Only the model needs to be swapped in the previously defined transfer learning code:' + id: totrans-146 prefs: [] type: TYPE_NORMAL + zh: 除了在最先进的模型上进行迁移学习外,我们还可以通过构建自己的自定义网络来进行实验和开发更好的直觉。只需在先前定义的迁移学习代码中交换模型即可: - en: '[PRE17]' + id: totrans-147 prefs: [] type: TYPE_PRE + zh: '[PRE17]' - en: Now, it’s time to use our pipeline for various experiments. + id: totrans-148 prefs: [] type: TYPE_NORMAL + zh: 现在,是时候利用我们的管道进行各种实验了。 - en: How Hyperparameters Affect Accuracy + id: totrans-149 prefs: - PREF_H1 type: TYPE_NORMAL + zh: 超参数如何影响准确性 - en: In this section, we aim to modify various parameters of a deep learning pipeline one at a time—from the number of layers fine-tuned, to the choice of the activation function used—and see its effect primarily on validation accuracy. Additionally, when relevant, we also observe its effect on the speed of training and time to reach the best accuracy (i.e., convergence). + id: totrans-150 prefs: [] type: TYPE_NORMAL + zh: 在本节中,我们旨在逐一修改深度学习管道的各种参数——从微调的层数到使用的激活函数的选择——主要看其对验证准确性的影响。此外,当相关时,我们还观察其对训练速度和达到最佳准确性的时间(即收敛)的影响。 - en: 'Our experimentation setup is as follows:' + id: totrans-151 prefs: [] type: TYPE_NORMAL + zh: 我们的实验设置如下: - en: To reduce experimentation time, we have used a faster architecture—MobileNet—in this chapter. + id: totrans-152 prefs: - PREF_UL type: TYPE_NORMAL + zh: 为了减少实验时间,本章中我们使用了一个更快的架构——MobileNet。 - en: We reduced the input image resolution to 128 x 128 pixels to further speed up training. In general, we would recommend using a higher resolution (at least 224 x 224) for production systems. + id: totrans-153 prefs: - PREF_UL type: TYPE_NORMAL + zh: 我们将输入图像分辨率降低到128 x 128像素以进一步加快训练速度。一般来说,我们建议在生产系统中使用更高的分辨率(至少224 x 224)。 - en: Early stopping is applied to stop experiments if they don’t increase in accuracy for 10 consecutive epochs. + id: totrans-154 prefs: - PREF_UL type: TYPE_NORMAL + zh: 如果实验连续10个时期准确率不增加,将应用早停。 - en: For training with transfer learning, we generally unfreeze the last 33% of the layers. + id: totrans-155 prefs: - PREF_UL type: TYPE_NORMAL + zh: 对于使用迁移学习进行训练,通常解冻最后33%的层。 - en: Learning rate is set to 0.001 with Adam optimizer. + id: totrans-156 prefs: - PREF_UL type: TYPE_NORMAL + zh: 学习率设置为0.001,使用Adam优化器。 - en: We’re mostly using the Oxford Flowers 102 dataset for testing, unless otherwise stated. We chose this dataset because it is reasonably difficult to train on due to the large number of classes it contains (102) and the similarities between many of the classes that force networks to develop a fine-grained understanding of features in order to do well. 
+ id: totrans-157 prefs: - PREF_UL type: TYPE_NORMAL + zh: 除非另有说明,我们主要使用牛津花卉102数据集进行测试。我们选择这个数据集是因为它相对难以训练,包含了大量类别(102个)以及许多类别之间的相似之处,这迫使网络对特征进行细粒度理解以取得良好的效果。 - en: To make apples-to-apples comparisons, we take the maximum accuracy value in a particular experiment and normalize all other accuracy values within that experiment with respect to this maximum value. + id: totrans-158 prefs: - PREF_UL type: TYPE_NORMAL + zh: 为了进行苹果与苹果的比较,我们取特定实验中的最大准确性值,并将该实验中的所有其他准确性值相对于该最大值进行归一化。 - en: Based on these and other experiments, we have compiled a checklist of actionable tips to implement in your next model training adventure. These are available on the book’s GitHub (see [*http://PracticalDeepLearning.ai*](http://PracticalDeepLearning.ai)) along with interactive visualizations. If you have more tips, feel free to tweet them [@PracticalDLBook](https://twitter.com/PracticalDLBook) or submit a pull request. + id: totrans-159 prefs: [] type: TYPE_NORMAL + zh: 基于这些和其他实验,我们总结了一份可操作的提示清单,可在下一个模型训练冒险中实施。这些内容可以在本书的GitHub(参见[*http://PracticalDeepLearning.ai*](http://PracticalDeepLearning.ai))上找到,还有交互式可视化。如果您有更多提示,请随时在推特上发表[@PracticalDLBook](https://twitter.com/PracticalDLBook)或提交拉取请求。 - en: Transfer Learning Versus Training from Scratch + id: totrans-160 prefs: - PREF_H2 type: TYPE_NORMAL + zh: 迁移学习与从头开始训练 - en: Experimental setup + id: totrans-161 prefs: [] type: TYPE_NORMAL + zh: 实验设置 - en: 'Train two models: one using transfer learning, and one from scratch on the same dataset.' + id: totrans-162 prefs: [] type: TYPE_NORMAL + zh: 训练两个模型:一个使用迁移学习,一个从头开始在相同数据集上训练。 - en: Datasets used + id: totrans-163 prefs: [] type: TYPE_NORMAL + zh: 使用的数据集 - en: Oxford Flowers 102, Colorectal Histology + id: totrans-164 prefs: [] type: TYPE_NORMAL + zh: 牛津花卉102,结肠组织学 - en: Architectures used + id: totrans-165 prefs: [] type: TYPE_NORMAL + zh: 使用的架构 - en: Pretrained MobileNet, Custom model + id: totrans-166 prefs: [] type: TYPE_NORMAL + zh: 预训练的MobileNet,自定义模型 - en: '[Figure 5-10](part0007.html#comparing_transfer_learning_versus_train) shows the results.' + id: totrans-167 prefs: [] type: TYPE_NORMAL + zh: '[图5-10](part0007.html#comparing_transfer_learning_versus_train)显示了结果。' - en: '![Comparing transfer learning versus training a custom model on different datasets](../images/00194.jpeg)' + id: totrans-168 prefs: [] type: TYPE_IMG + zh: '![比较在不同数据集上进行迁移学习和训练自定义模型](../images/00194.jpeg)' - en: Figure 5-10\. Comparing transfer learning versus training a custom model on different datasets + id: totrans-169 prefs: - PREF_H6 type: TYPE_NORMAL + zh: 图5-10。比较在不同数据集上进行迁移学习和训练自定义模型 - en: 'Here are the key takeaways:' + id: totrans-170 prefs: [] type: TYPE_NORMAL + zh: 以下是关键要点: - en: Transfer learning leads to a quicker rise in accuracy during training by reusing previously learned features. + id: totrans-171 prefs: - PREF_UL type: TYPE_NORMAL + zh: 通过重复使用先前学习的特征,迁移学习可以使训练期间的准确性迅速提高。 - en: Although it is expected that transfer learning (based on pretrained models on ImageNet) would work when the target dataset is also of natural imagery, the patterns learned in the early layers by a network work surprisingly well for datasets beyond ImageNet. That does not necessarily mean that it will yield the best results, but it can get close. When the images match more real-world images that the model was pretrained on, we get relatively quick improvement in accuracy. 
+ id: totrans-172 prefs: - PREF_UL type: TYPE_NORMAL + zh: 尽管预计基于ImageNet上预训练模型的迁移学习在目标数据集也是自然图像时会起作用,但网络在早期层学习的模式对超出ImageNet范围的数据集也能奇妙地奏效。这并不一定意味着它会产生最佳结果,但可以接近。当图像与模型预训练的更多真实世界图像匹配时,我们可以相对快速地提高准确性。 - en: Effect of Number of Layers Fine-Tuned in Transfer Learning + id: totrans-173 prefs: - PREF_H2 type: TYPE_NORMAL + zh: 迁移学习中微调层数的影响 - en: Experimental setup + id: totrans-174 prefs: [] type: TYPE_NORMAL + zh: 实验设置 - en: Vary the percentage of trainable layers from 0 to 100% + id: totrans-175 prefs: [] type: TYPE_NORMAL + zh: 将可训练层的百分比从0变化到100% - en: Dataset used + id: totrans-176 prefs: [] type: TYPE_NORMAL + zh: 使用的数据集 - en: Oxford Flowers 102 + id: totrans-177 prefs: [] type: TYPE_NORMAL + zh: 牛津花卉102 - en: Architecture used + id: totrans-178 prefs: [] type: TYPE_NORMAL + zh: 使用的架构 - en: Pretrained MobileNet + id: totrans-179 prefs: [] type: TYPE_NORMAL + zh: 预训练的MobileNet - en: '[Figure 5-11](part0007.html#effect_of_percent_layers_fine-tuned_on_m) shows the results.' + id: totrans-180 prefs: [] type: TYPE_NORMAL + zh: '[图5-11](part0007.html#effect_of_percent_layers_fine-tuned_on_m)显示了结果。' - en: '![Effect of % layers fine-tuned on model accuracy](../images/00159.jpeg)' + id: totrans-181 prefs: [] type: TYPE_IMG + zh: '![微调的层数对模型准确性的影响](../images/00159.jpeg)' - en: Figure 5-11\. Effect of % layers fine-tuned on model accuracy + id: totrans-182 prefs: - PREF_H6 type: TYPE_NORMAL + zh: 图5-11。微调的层数对模型准确性的影响 - en: 'Here are the key takeaways:' + id: totrans-183 prefs: [] type: TYPE_NORMAL + zh: 以下是关键要点: - en: The higher the number of layers fine-tuned, the fewer epochs it took to reach convergence and the higher the accuracy. + id: totrans-184 prefs: - PREF_UL type: TYPE_NORMAL + zh: 微调的层数越多,达到收敛所需的纪元越少,准确性越高。 - en: The higher the number of layers fine-tuned, the more time it took per epoch for training, due to more computation and updates involved. + id: totrans-185 prefs: - PREF_UL type: TYPE_NORMAL + zh: 微调的层数越多,每个纪元的训练时间就越长,因为涉及更多的计算和更新。 - en: For a dataset that required fine-grained understanding of images, making more layers task specific by unfreezing them was the key to a better model. + id: totrans-186 prefs: - PREF_UL type: TYPE_NORMAL + zh: 对于需要对图像进行细粒度理解的数据集,通过解冻更多层使其更具任务特定性是获得更好模型的关键。 - en: Effect of Data Size on Transfer Learning + id: totrans-187 prefs: - PREF_H2 type: TYPE_NORMAL + zh: 数据大小对迁移学习的影响 - en: Experimental setup + id: totrans-188 prefs: [] type: TYPE_NORMAL + zh: 实验设置 - en: Add one image per class at a time + id: totrans-189 prefs: [] type: TYPE_NORMAL + zh: 每次添加一个类别的图像 - en: Dataset used + id: totrans-190 prefs: [] type: TYPE_NORMAL + zh: 使用的数据集 - en: Cats versus dogs + id: totrans-191 prefs: [] type: TYPE_NORMAL + zh: 猫与狗 - en: Architecture used + id: totrans-192 prefs: [] type: TYPE_NORMAL + zh: 使用的架构 - en: Pretrained MobileNet + id: totrans-193 prefs: [] type: TYPE_NORMAL + zh: 预训练的MobileNet - en: '[Figure 5-12](part0007.html#effect_of_the_amount_of_data_per_categor) shows the results.' + id: totrans-194 prefs: [] type: TYPE_NORMAL + zh: '[图5-12](part0007.html#effect_of_the_amount_of_data_per_categor)显示了结果。' - en: '![Effect of the amount of data per category on model accuracy](../images/00108.jpeg)' + id: totrans-195 prefs: [] type: TYPE_IMG + zh: '![每个类别数据量对模型准确性的影响](../images/00108.jpeg)' - en: Figure 5-12\. 
Effect of the amount of data per category on model accuracy + id: totrans-196 prefs: - PREF_H6 type: TYPE_NORMAL + zh: 图5-12。每个类别数据量对模型准确性的影响 - en: 'Here are the key takeaways:' + id: totrans-197 prefs: [] type: TYPE_NORMAL + zh: 以下是关键要点: - en: Even with only three images in each class, the model was able to predict with close to 90% accuracy. This shows how powerful transfer learning can be in reducing data requirements. + id: totrans-198 prefs: - PREF_UL type: TYPE_NORMAL + zh: 即使每个类别只有三张图像,模型也能够以接近90%的准确性进行预测。这显示了迁移学习在减少数据需求方面的强大作用。 - en: Because ImageNet has several cats and dogs, pretrained networks on ImageNet suited our dataset much more easily. More difficult datasets like Oxford Flowers 102 might require a much higher number of images to achieve similar accuracies. + id: totrans-199 prefs: - PREF_UL type: TYPE_NORMAL + zh: 由于ImageNet有几个猫和狗,所以在ImageNet上预训练的网络更容易适应我们的数据集。像牛津花卉102这样更困难的数据集可能需要更多的图像才能达到类似的准确性。 - en: Effect of Learning Rate + id: totrans-200 prefs: - PREF_H2 type: TYPE_NORMAL + zh: 学习率的影响 - en: Experimental setup + id: totrans-201 prefs: [] type: TYPE_NORMAL + zh: 实验设置 - en: Vary the learning rate between .1, .01, .001, and .0001 + id: totrans-202 prefs: [] type: TYPE_NORMAL + zh: 在0.1、0.01、0.001和0.0001之间变化学习率 - en: Dataset used + id: totrans-203 prefs: [] type: TYPE_NORMAL + zh: 使用的数据集 - en: Oxford Flowers 102 + id: totrans-204 prefs: [] type: TYPE_NORMAL + zh: 牛津花卉102 - en: Architecture used + id: totrans-205 prefs: [] type: TYPE_NORMAL + zh: 使用的架构 - en: Pretrained MobileNet + id: totrans-206 prefs: [] type: TYPE_NORMAL + zh: 预训练的MobileNet - en: '[Figure 5-13](part0007.html#effect_of_learning_rate_on_model_accurac) shows the results.' + id: totrans-207 prefs: [] type: TYPE_NORMAL + zh: '[图5-13](part0007.html#effect_of_learning_rate_on_model_accurac)显示了结果。' - en: '![Effect of learning rate on model accuracy and speed of convergence](../images/00071.jpeg)' + id: totrans-208 prefs: [] type: TYPE_IMG + zh: '![学习率对模型准确性和收敛速度的影响](../images/00071.jpeg)' - en: Figure 5-13\. Effect of learning rate on model accuracy and speed of convergence + id: totrans-209 prefs: - PREF_H6 type: TYPE_NORMAL + zh: 图5-13\. 学习率对模型准确性和收敛速度的影响 - en: 'Here are the key takeaways:' + id: totrans-210 prefs: [] type: TYPE_NORMAL + zh: 以下是关键要点: - en: Too high of a learning rate, and the model might never converge. + id: totrans-211 prefs: - PREF_UL type: TYPE_NORMAL + zh: 学习率过高,模型可能永远无法收敛。 - en: Too low a learning rate results in a long time taken to convergence. + id: totrans-212 prefs: - PREF_UL type: TYPE_NORMAL + zh: 学习率过低会导致收敛所需时间过长。 - en: Striking the right balance is crucial in training quickly. + id: totrans-213 prefs: - PREF_UL type: TYPE_NORMAL + zh: 在快速训练中找到合适的平衡至关重要。 - en: Effect of Optimizers + id: totrans-214 prefs: - PREF_H2 type: TYPE_NORMAL + zh: 优化器的影响 - en: Experimental setup + id: totrans-215 prefs: [] type: TYPE_NORMAL + zh: 实验设置 - en: Experiment with available optimizers including AdaDelta, AdaGrad, Adam, Gradient Descent, Momentum, and RMSProp + id: totrans-216 prefs: [] type: TYPE_NORMAL + zh: 尝试可用的优化器,包括AdaDelta、AdaGrad、Adam、梯度下降、动量和RMSProp - en: Dataset used + id: totrans-217 prefs: [] type: TYPE_NORMAL + zh: 使用的数据集 - en: Oxford Flowers 102 + id: totrans-218 prefs: [] type: TYPE_NORMAL + zh: 牛津花卉102 - en: Architecture used + id: totrans-219 prefs: [] type: TYPE_NORMAL + zh: 使用的架构 - en: Pretrained MobileNet + id: totrans-220 prefs: [] type: TYPE_NORMAL + zh: 预训练的MobileNet - en: '[Figure 5-14](part0007.html#effect_of_different_optimizers_on_the_sp) shows the results.' 
+ id: totrans-221
  prefs: []
  type: TYPE_NORMAL
+ zh: '[图5-14](part0007.html#effect_of_different_optimizers_on_the_sp)显示了结果。'
- en: '![Effect of different optimizers on the speed of convergence](../images/00030.jpeg)'
+ id: totrans-222
  prefs: []
  type: TYPE_IMG
+ zh: '![不同优化器对收敛速度的影响](../images/00030.jpeg)'
- en: Figure 5-14\. Effect of different optimizers on the speed of convergence
+ id: totrans-223
  prefs:
  - PREF_H6
  type: TYPE_NORMAL
+ zh: 图5-14\. 不同优化器对收敛速度的影响
- en: 'Here are the key takeaways:'
+ id: totrans-224
  prefs: []
  type: TYPE_NORMAL
+ zh: 以下是关键要点:
- en: Adam is a great choice for faster convergence to high accuracy.
+ id: totrans-225
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
+ zh: Adam是更快收敛到高准确性的不错选择。
- en: RMSProp is usually better for RNN tasks.
+ id: totrans-226
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
+ zh: RMSProp通常更适用于RNN任务。
- en: Effect of Batch Size
+ id: totrans-227
  prefs:
  - PREF_H2
  type: TYPE_NORMAL
+ zh: 批量大小的影响
- en: Experimental setup
+ id: totrans-228
  prefs: []
  type: TYPE_NORMAL
+ zh: 实验设置
- en: Vary batch sizes in powers of two
+ id: totrans-229
  prefs: []
  type: TYPE_NORMAL
+ zh: 以2的幂变化批量大小
- en: Dataset used
+ id: totrans-230
  prefs: []
  type: TYPE_NORMAL
+ zh: 使用的数据集
- en: Oxford Flowers 102
+ id: totrans-231
  prefs: []
  type: TYPE_NORMAL
+ zh: 牛津花卉102
- en: Architecture used
+ id: totrans-232
  prefs: []
  type: TYPE_NORMAL
+ zh: 使用的架构
- en: Pretrained
+ id: totrans-233
  prefs: []
  type: TYPE_NORMAL
+ zh: 预训练
- en: '[Figure 5-15](part0007.html#effect_of_batch_size_on_accuracy_and_spe) shows
  the results.'
+ id: totrans-234
  prefs: []
  type: TYPE_NORMAL
+ zh: '[图5-15](part0007.html#effect_of_batch_size_on_accuracy_and_spe)显示了结果。'
- en: '![Effect of batch size on accuracy and speed of convergence](../images/00317.jpeg)'
+ id: totrans-235
  prefs: []
  type: TYPE_IMG
+ zh: '![批量大小对准确性和收敛速度的影响](../images/00317.jpeg)'
- en: Figure 5-15\. Effect of batch size on accuracy and speed of convergence
+ id: totrans-236
  prefs:
  - PREF_H6
  type: TYPE_NORMAL
+ zh: 图5-15\. 批量大小对准确性和收敛速度的影响
- en: 'Here are the key takeaways:'
+ id: totrans-237
  prefs: []
  type: TYPE_NORMAL
+ zh: 以下是关键要点:
- en: The higher the batch size, the more the instability in results from epoch to
  epoch, with bigger rises and drops. But a higher batch size also leads to more
  efficient GPU utilization, so faster speed per epoch.
+ id: totrans-238
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
+ zh: 批量大小越大,结果从一个时期到另一个时期的不稳定性就越大,波动也越大。但更大的批量大小也会带来更高效的GPU利用率,因此每个时期的速度更快。
- en: Too low a batch size slows the rise in accuracy.
+ id: totrans-239
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
+ zh: 批量大小过低会减缓准确性的提升。
- en: 16/32/64 are good batch sizes to start with.
+ id: totrans-240
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
+ zh: 16/32/64是很好的起始批量大小。
- en: Effect of Resizing
+ id: totrans-241
  prefs:
  - PREF_H2
  type: TYPE_NORMAL
+ zh: 调整大小的影响
- en: Experimental setup
+ id: totrans-242
  prefs: []
  type: TYPE_NORMAL
+ zh: 实验设置
- en: Change image size to 128x128, 224x224
+ id: totrans-243
  prefs: []
  type: TYPE_NORMAL
+ zh: 将图像大小改为128x128、224x224
- en: Dataset used
+ id: totrans-244
  prefs: []
  type: TYPE_NORMAL
+ zh: 使用的数据集
- en: Oxford Flowers 102
+ id: totrans-245
  prefs: []
  type: TYPE_NORMAL
+ zh: 牛津花卉102
- en: Architecture used
+ id: totrans-246
  prefs: []
  type: TYPE_NORMAL
+ zh: 使用的架构
- en: Pretrained
+ id: totrans-247
  prefs: []
  type: TYPE_NORMAL
+ zh: 预训练
- en: '[Figure 5-16](part0007.html#effect_of_image_size_on_accuracy) shows the results.'
+ id: totrans-248
  prefs: []
  type: TYPE_NORMAL
+ zh: '[图5-16](part0007.html#effect_of_image_size_on_accuracy)显示了结果。'
- en: '![Effect of image size on accuracy](../images/00275.jpeg)'
+ id: totrans-249
  prefs: []
  type: TYPE_IMG
+ zh: '![图像大小对准确性的影响](../images/00275.jpeg)'
- en: Figure 5-16\. Effect of image size on accuracy
+ id: totrans-250
  prefs:
  - PREF_H6
  type: TYPE_NORMAL
+ zh: 图5-16\. 图像大小对准确性的影响
- en: 'Here are the key takeaways:'
+ id: totrans-251
  prefs: []
  type: TYPE_NORMAL
+ zh: 以下是关键要点:
- en: Even with a third of the pixels, there wasn’t a significant difference in validation
  accuracies. On the one hand, this shows the robustness of CNNs. It might partly
  be because the Oxford Flowers 102 dataset contains close-ups of flowers. For datasets
  in which the objects have much smaller portions in an image, the results might
  be lower.
+ id: totrans-252
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
+ zh: 即使像素只有三分之一,验证准确性也没有显著差异。这一方面显示了CNN的稳健性。这可能部分是因为牛津花卉102数据集中有花朵的特写。对于对象在图像中占比较小的数据集,结果可能较低。
- en: Effect of Change in Aspect Ratio on Transfer Learning
+ id: totrans-253
  prefs:
  - PREF_H2
  type: TYPE_NORMAL
+ zh: 宽高比变化对迁移学习的影响
- en: Experimental setup
+ id: totrans-254
  prefs: []
  type: TYPE_NORMAL
+ zh: 实验设置
- en: Take images of various aspect ratios (width:height ratio) and resize them to
  a square (1:1 aspect ratio).
+ id: totrans-255
  prefs: []
  type: TYPE_NORMAL
+ zh: 选取具有不同宽高比(宽:高比)的图像,并将它们调整为正方形(1:1宽高比)。
- en: Dataset used
+ id: totrans-256
  prefs: []
  type: TYPE_NORMAL
+ zh: 使用的数据集
- en: Cats vs. Dogs
+ id: totrans-257
  prefs: []
  type: TYPE_NORMAL
+ zh: 猫与狗
- en: Architecture used
+ id: totrans-258
  prefs: []
  type: TYPE_NORMAL
+ zh: 使用的架构
- en: Pretrained
+ id: totrans-259
  prefs: []
  type: TYPE_NORMAL
+ zh: 预训练
- en: '[Figure 5-17](part0007.html#distribution_of_aspect_ratio_and_corresp) shows
  the results.'
+ id: totrans-260
  prefs: []
  type: TYPE_NORMAL
+ zh: '[图5-17](part0007.html#distribution_of_aspect_ratio_and_corresp)显示了结果。'
- en: '![Distribution of aspect ratio and corresponding accuracies in images](../images/00168.jpeg)'
+ id: totrans-261
  prefs: []
  type: TYPE_IMG
+ zh: '![图像中宽高比和对应准确性的分布](../images/00168.jpeg)'
- en: Figure 5-17\. Distribution of aspect ratio and corresponding accuracies in images
+ id: totrans-262
  prefs:
  - PREF_H6
  type: TYPE_NORMAL
+ zh: 图5-17\. 图像中宽高比和对应准确性的分布
- en: 'Here are the key takeaways:'
+ id: totrans-263
  prefs: []
  type: TYPE_NORMAL
+ zh: 以下是关键要点:
- en: The most common aspect ratio is 4:3 (that is, 1.33), whereas our neural networks
  are generally trained at a 1:1 ratio.
+ id: totrans-264
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
+ zh: 最常见的宽高比是4:3,即1.33,而我们的神经网络通常在1:1的比例下进行训练。
- en: Neural networks are relatively robust to minor modifications in aspect ratio
  brought upon by resizing to a square shape. Even up to a 2.0 ratio gives decent
  results.
+ id: totrans-265
  prefs:
  - PREF_UL
  type: TYPE_NORMAL
+ zh: 神经网络对由调整为正方形形状引起的宽高比的轻微修改相对稳健。即使达到2.0的比例也能得到不错的结果。
- en: Tools to Automate Tuning for Maximum Accuracy
+ id: totrans-266
  prefs:
  - PREF_H1
  type: TYPE_NORMAL
+ zh: 自动调整工具以获得最大准确性
- en: As we have seen since the nineteenth century, automation has always led to an
  increase in productivity. In this section, we investigate tools that can help
  us automate the search for the best model.
+ id: totrans-267 prefs: [] type: TYPE_NORMAL + zh: 正如我们自19世纪以来所看到的,自动化总是导致生产力的提高。在本节中,我们研究可以帮助我们自动搜索最佳模型的工具。 - en: Keras Tuner + id: totrans-268 prefs: - PREF_H2 type: TYPE_NORMAL + zh: Keras调谐器 - en: With so many potential combinations of hyperparameters to tune, coming up with the best model can be a tedious process. Often two or more parameters might have correlated effects on the overall speed of convergence as well as validation accuracy, so tuning one at a time might not lead to the best model. And if curiosity gets the best of us, we might want to experiment on all the hyperparameters together. + id: totrans-269 prefs: [] type: TYPE_NORMAL - en: 'Keras Tuner comes in to automate this hyperparameter search. We define a search @@ -1159,44 +1685,59 @@ The following code example adapted from Keras Tuner documentation showcases searching through the different model architectures (varying in the number of layers between 2 and 10) as well as varying the learning rate (between 0.1 and 0.001):' + id: totrans-270 prefs: [] type: TYPE_NORMAL - en: '[PRE18]' + id: totrans-271 prefs: [] type: TYPE_PRE + zh: '[PRE18]' - en: 'Each experiment will show values like this:' + id: totrans-272 prefs: [] type: TYPE_NORMAL - en: '[PRE19]' + id: totrans-273 prefs: [] type: TYPE_PRE + zh: '[PRE19]' - en: On the experiment end, the result summary gives a snapshot of the experiments conducted so far, and saves more metadata. + id: totrans-274 prefs: [] type: TYPE_NORMAL - en: '[PRE20]' + id: totrans-275 prefs: [] type: TYPE_PRE + zh: '[PRE20]' - en: 'Another big benefit is the ability to track experiments online in real time and get notifications on their progress by visiting [*http://keras-tuner.appspot.com*](http://keras-tuner.appspot.com), getting an API key (from Google App Engine), and entering the following line in our Python program along with the real API key:' + id: totrans-276 prefs: [] type: TYPE_NORMAL - en: '[PRE21]' + id: totrans-277 prefs: [] type: TYPE_PRE + zh: '[PRE21]' - en: Due to the potentially large combinatorial space, random search is preferred to grid search as a more practical way to get to a good solution on a limited experimentation budget. But there are faster ways, including Hyperband (Lisha Li et al.), whose implementation is also available in Keras Tuner. + id: totrans-278 prefs: [] type: TYPE_NORMAL - en: For computer-vision problems, Keras Tuner includes ready-to-use tunable applications like HyperResNet. + id: totrans-279 prefs: [] type: TYPE_NORMAL - en: AutoAugment + id: totrans-280 prefs: - PREF_H2 type: TYPE_NORMAL @@ -1209,28 +1750,36 @@ Cubuk et al. to come up with the new state-of-the-art ImageNet validation numbers.) By learning the best combination of augmentation parameters on ImageNet, we can readily apply it to our problem. + id: totrans-281 prefs: [] type: TYPE_NORMAL - en: 'Applying the prelearned augmentation strategy from ImageNet is pretty simple:' + id: totrans-282 prefs: [] type: TYPE_NORMAL - en: '[PRE22]' + id: totrans-283 prefs: [] type: TYPE_PRE + zh: '[PRE22]' - en: '[Figure 5-18](part0007.html#output_of_augmentation_strategies_learne) displays the results.' + id: totrans-284 prefs: [] type: TYPE_NORMAL - en: '![Output of augmentation strategies learned by reinforcement learning on the ImageNet dataset](../images/00140.jpeg)' + id: totrans-285 prefs: [] type: TYPE_IMG - en: Figure 5-18\. 
Output of augmentation strategies learned by reinforcement
  learning on the ImageNet dataset
+ id: totrans-286
  prefs:
  - PREF_H6
  type: TYPE_NORMAL
- en: AutoKeras
+ id: totrans-287
  prefs:
  - PREF_H2
  type: TYPE_NORMAL
@@ -1243,24 +1792,30 @@
  focus on making training faster in 2018\. And now with AutoKeras (Haifeng Jin
  et al.), we can also apply this state-of-the-art technique on our particular datasets
  in a relatively accessible manner.
+ id: totrans-288
  prefs: []
  type: TYPE_NORMAL
- en: 'Generating new model architectures with AutoKeras is a matter of supplying
  our images and associated labels as well as a time limit by which to finish running
  the jobs. Internally, it implements several optimization algorithms, including
  a Bayesian optimization approach to search for an optimal architecture:'
+ id: totrans-289
  prefs: []
  type: TYPE_NORMAL
- en: '[PRE23]'
+ id: totrans-290
  prefs: []
  type: TYPE_PRE
+ zh: '[PRE23]'
- en: Post-training, we are all eager to learn how the new model architecture looks.
  Unlike most of the cleaner-looking architecture diagrams we generally get to see,
  this one will look fairly obfuscated and difficult to interpret when printed out.
  But what we can take on faith is that it yields high accuracy.
+ id: totrans-291
  prefs: []
  type: TYPE_NORMAL
- en: Summary
+ id: totrans-292
  prefs:
  - PREF_H1
  type: TYPE_NORMAL
@@ -1277,5 +1832,6 @@
  a little extra edge. We hope that the material covered in this chapter will help
  make your models more robust, reduce bias, make them more explainable, and ultimately
  contribute to the responsible development of AI.
+ id: totrans-293
  prefs: []
  type: TYPE_NORMAL