diff --git a/totrans/prac-dl-cld_12.yaml b/totrans/prac-dl-cld_12.yaml index 19771df..c698c9a 100644 --- a/totrans/prac-dl-cld_12.yaml +++ b/totrans/prac-dl-cld_12.yaml @@ -765,6 +765,8 @@ id: totrans-108 prefs: [] type: TYPE_NORMAL + zh: 使用Create ML的一个主要动机是其输出的模型大小。完整模型可以分解为基础模型(生成特征)和更轻的特定任务分类层。苹果将基础模型内置到其每个操作系统中。因此,Create + ML只需要输出特定任务的分类器。这些模型有多小?仅几千字节(与MobileNet模型的15 MB相比,后者已经相当小了)。在越来越多的应用开发人员开始将深度学习整合到其应用中的今天,这一点至关重要。同一神经网络不需要在多个应用程序中不必要地复制,消耗宝贵的存储空间。 - en: In short, Create ML is easy, speedy, and tiny. Sounds too good to be true. Turns out the flip-side of having full vertical integration is that the developers are tied into the Apple ecosystem. Create ML exports only *.mlmodel* files, which @@ -774,10 +776,13 @@ id: totrans-109 prefs: [] type: TYPE_NORMAL + zh: 简而言之,Create ML易于使用,速度快,体积小。听起来太好了。事实证明,完全垂直集成的反面是开发人员被绑定到苹果生态系统中。Create ML只导出*.mlmodel*文件,这些文件只能在iOS、iPadOS、macOS、tvOS和watchOS等苹果操作系统上使用。遗憾的是,Create + ML尚未实现与Android的集成。 - en: 'In this section, we build the Not Hotdog classifier using Create ML:' id: totrans-110 prefs: [] type: TYPE_NORMAL + zh: 在本节中,我们使用Create ML构建Not Hotdog分类器: - en: Open the Create ML app, click New Document, and select the Image Classifier template from among the several options available (including Sound, Activity, Text, Tabular), as shown in [Figure 12-10](part0014.html#choosing_a_template_for_a_new_project). @@ -787,22 +792,27 @@ prefs: - PREF_OL type: TYPE_NORMAL + zh: 打开Create ML应用程序,点击新建文档,从可用的几个选项中选择图像分类器模板(包括声音、活动、文本、表格),如[图12-10](part0014.html#choosing_a_template_for_a_new_project)所示。请注意,这仅适用于Xcode + 11(或更高版本),macOS 10.15(或更高版本)。 - en: '![Choosing a template for a new project](../images/00277.jpeg)' id: totrans-112 prefs: - PREF_IND type: TYPE_IMG + zh: '![选择新项目的模板](../images/00277.jpeg)' - en: Figure 12-10\. Choosing a template for a new project id: totrans-113 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图12-10。选择新项目的模板 - en: In the next screen, enter a name for the project, and then select Done. id: totrans-114 prefs: - PREF_OL type: TYPE_NORMAL + zh: 在下一个屏幕中,输入项目名称,然后选择完成。 - en: We need to sort the data into the correct directory structure. As [Figure 12-11](part0014.html#train_and_test_data_in_separate_director) illustrates, we place images in directories that have the names of their labels. It is useful to have separate train and test datasets with their corresponding @@ -811,33 +821,39 @@ prefs: - PREF_OL type: TYPE_NORMAL + zh: 我们需要将数据分类到正确的目录结构中。如[图12-11](part0014.html#train_and_test_data_in_separate_director)所示,我们将图像放在以其标签名称命名的目录中。将训练和测试数据分别放在相应的目录中是有用的。 - en: '![Train and test data in separate directories](../images/00235.jpeg)' id: totrans-116 prefs: - PREF_IND type: TYPE_IMG + zh: '![将训练和测试数据放在不同的目录中](../images/00235.jpeg)' - en: Figure 12-11\. Train and test data in separate directories id: totrans-117 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图12-11。将训练和测试数据放在不同的目录中 - en: Point the UI to the training and test data directories, as shown in [Figure 12-12](part0014.html#training_interface_in_create_ml). id: totrans-118 prefs: - PREF_OL type: TYPE_NORMAL + zh: 将UI指向训练和测试数据目录,如[图12-12](part0014.html#training_interface_in_create_ml)所示。 - en: '![Training interface in Create ML](../images/00200.jpeg)' id: totrans-119 prefs: - PREF_IND type: TYPE_IMG + zh: '![Create ML中的训练界面](../images/00200.jpeg)' - en: Figure 12-12\. 
Training interface in Create ML id: totrans-120 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图12-12。Create ML中的训练界面 - en: '[Figure 12-12](part0014.html#training_interface_in_create_ml) shows the UI after you select the train and test data directories. Notice that the validation data was automatically selected by Create ML. Additionally, notice the augmentation @@ -848,23 +864,28 @@ prefs: - PREF_OL type: TYPE_NORMAL + zh: 在选择训练和测试数据目录后,[图12-12](part0014.html#training_interface_in_create_ml)显示了UI。请注意,验证数据是由Create + ML自动选择的。此外,请注意可用的增强选项。在这一点上,我们可以点击播放按钮(右向三角形;参见[图12-13](part0014.html#create_ml_screen_that_opens_after_loadin))开始训练过程。 - en: '![Create ML screen that opens after loading train and test data](../images/00164.jpeg)' id: totrans-122 prefs: - PREF_IND type: TYPE_IMG + zh: '![加载训练和测试数据后打开的Create ML屏幕](../images/00164.jpeg)' - en: Figure 12-13\. Create ML screen that opens after loading train and test data id: totrans-123 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图12-13。加载训练和测试数据后打开的Create ML屏幕 - en: Note id: totrans-124 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 注意 - en: As you experiment, you will quickly notice that each augmentation that we add will make the training slower. To set a quick baseline performance metric, we should avoid using augmentations in the first run. Subsequently, we can experiment @@ -874,6 +895,7 @@ prefs: - PREF_IND type: TYPE_NORMAL + zh: 当您进行实验时,您会很快注意到每个添加的增强都会使训练变慢。为了设定一个快速的基准性能指标,我们应该避免在第一次运行中使用增强。随后,我们可以尝试添加更多增强来评估它们对模型质量的影响。 - en: When the training completes, we can see how the model performed on the training data, (auto-selected) validation data, and the test data, as depicted in [Figure 12-14](part0014.html#the_create_ml_screen_after_training_comp). At the bottom of the screen, we can also observe how long the training process @@ -883,17 +905,21 @@ prefs: - PREF_OL type: TYPE_NORMAL + zh: 当训练完成时,我们可以看到模型在训练数据、(自动选择的)验证数据和测试数据上的表现,如[图12-14](part0014.html#the_create_ml_screen_after_training_comp)所示。在屏幕底部,我们还可以看到训练过程花费的时间以及最终模型的大小。在不到两分钟内达到97%的测试准确率。而且输出只有17 + KB。相当不错。 - en: '![The Create ML screen after training completes](../images/00113.jpeg)' id: totrans-127 prefs: - PREF_IND type: TYPE_IMG + zh: '![训练完成后的Create ML屏幕](../images/00113.jpeg)' - en: Figure 12-14\. The Create ML screen after training completes id: totrans-128 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图12-14。训练完成后的Create ML屏幕 - en: We’re so close now—we just need to export the final model. Drag the Output button (highlighted in [Figure 12-14](part0014.html#the_create_ml_screen_after_training_comp)) to the desktop to create the *.mlmodel* file. diff --git a/totrans/prac-dl-cld_13.yaml b/totrans/prac-dl-cld_13.yaml index 85bc681..feec264 100644 --- a/totrans/prac-dl-cld_13.yaml +++ b/totrans/prac-dl-cld_13.yaml @@ -1,5 +1,6 @@ - en: 'Chapter 13\. Shazam for Food: Developing Android Apps with TensorFlow Lite and ML Kit' + id: totrans-0 prefs: - PREF_H1 type: TYPE_NORMAL @@ -11,12 +12,15 @@ its own good and was acquired by Periscope. The original vision of his investor, Erlich Bachman, remains unfulfilled. In this chapter, our mission is to fulfill this dream. + id: totrans-1 prefs: [] type: TYPE_NORMAL - en: '![Not Hotdog app listing on the Apple App Store](../images/00282.jpeg)' + id: totrans-2 prefs: [] type: TYPE_IMG - en: Figure 13-1\. Not Hotdog app listing on the Apple App Store + id: totrans-3 prefs: - PREF_H6 type: TYPE_NORMAL @@ -25,37 +29,47 @@ it could scan a few ingredients, and recommend a recipe based on them. 
Or, it could even look at a product in the market, and check whether it contains any blacklisted ingredients such as specific allergens. + id: totrans-4 prefs: [] type: TYPE_NORMAL - en: 'This is an interesting problem to solve for several reasons because it represents several challenges:' + id: totrans-5 prefs: [] type: TYPE_NORMAL - en: Data collection challenge + id: totrans-6 prefs: [] type: TYPE_NORMAL - en: There are more than a hundred cuisines around the world, each with hundreds if not thousands of dishes. + id: totrans-7 prefs: [] type: TYPE_NORMAL - en: Accuracy challenge + id: totrans-8 prefs: [] type: TYPE_NORMAL - en: It should be right most of the time. + id: totrans-9 prefs: [] type: TYPE_NORMAL - en: Performance challenge + id: totrans-10 prefs: [] type: TYPE_NORMAL - en: It should run near instantly. + id: totrans-11 prefs: [] type: TYPE_NORMAL - en: Platform challenge + id: totrans-12 prefs: [] type: TYPE_NORMAL - en: An iPhone app alone would be insufficient. A lot of users in developing countries use less powerful smartphones, particularly Android devices. Cross-platform development is a must. + id: totrans-13 prefs: [] type: TYPE_NORMAL - en: Making a food classifier app for one cuisine is tricky enough. Imagine having @@ -63,14 +77,17 @@ or a small team will quickly run into scaling issues trying to tackle this problem. In this chapter, we use this example as a motivation to explore the different parts of the mobile AI development life cycle that we explored in [Chapter 11](part0013.html#CCNA3-13fa565533764549a6f0ab7f11eed62b). + id: totrans-14 prefs: [] type: TYPE_NORMAL - en: The material we explore here does not need to be limited to smartphones, either. We can apply our learnings beyond mobile to edge devices such as Google Coral and Raspberry Pi, which we discuss later in the book. + id: totrans-15 prefs: [] type: TYPE_NORMAL - en: The Life Cycle of a Food Classifier App + id: totrans-16 prefs: - PREF_H1 type: TYPE_NORMAL @@ -78,54 +95,66 @@ It sounds like a daunting task, but we can break it down into manageable steps. As in life, we first need to crawl, then walk, and then run. The following is one potential approach to consider:' + id: totrans-17 prefs: [] type: TYPE_NORMAL - en: Collect a small initial set images for a single cuisine (e.g., Italian). + id: totrans-18 prefs: - PREF_OL type: TYPE_NORMAL - en: Label these images with their corresponding dish identifiers (e.g., `margherita_pizza`). + id: totrans-19 prefs: - PREF_OL type: TYPE_NORMAL - en: Train a classifier model. + id: totrans-20 prefs: - PREF_OL type: TYPE_NORMAL - en: Convert the model to a mobile framework-compatible format (e.g., *.tflite*). + id: totrans-21 prefs: - PREF_OL type: TYPE_NORMAL - en: Build a mobile app by integrating the model with a great UX. + id: totrans-22 prefs: - PREF_OL type: TYPE_NORMAL - en: Recruit alpha users and share the app with them. + id: totrans-23 prefs: - PREF_OL type: TYPE_NORMAL - en: Collect detailed usage metrics along with feedback from active users, including camera frames (which tend to reflect real-world usage) and corresponding proxy labels (indicating whether the classification was right or wrong). + id: totrans-24 prefs: - PREF_OL type: TYPE_NORMAL - en: Improve the model using the newly collected images as additional training data. This process needs to be iterative. + id: totrans-25 prefs: - PREF_OL type: TYPE_NORMAL - en: When the quality of the model meets the minimum quality bar, ship the app/feature to more/all users. 
Continue to monitor and improve the quality of the model for that cuisine. + id: totrans-26 prefs: - PREF_OL type: TYPE_NORMAL - en: Repeat these steps for each cuisine. + id: totrans-27 prefs: - PREF_OL type: TYPE_NORMAL - en: Tip + id: totrans-28 prefs: - PREF_H6 type: TYPE_NORMAL @@ -137,6 +166,7 @@ the worst case, if none of the options were correct, allow the user to manually add a new label. That photo, along with the label (in all three scenarios), can be incorporated as training data. + id: totrans-29 prefs: [] type: TYPE_NORMAL - en: We don’t need a whole lot of data to get underway. Although each of the aforementioned @@ -144,9 +174,11 @@ cool about this approach is that the more the app is used, the better it becomes, automatically. It’s as if it has a life of its own. We explore this self-evolving approach for a model toward the end of the chapter. + id: totrans-30 prefs: [] type: TYPE_NORMAL - en: Tip + id: totrans-31 prefs: - PREF_H6 type: TYPE_NORMAL @@ -160,6 +192,7 @@ and image frames from day-to-day use. We would recommend that your users know exactly what information you would be collecting about them and allow them to opt out or delete. Don’t be creepy! + id: totrans-32 prefs: [] type: TYPE_NORMAL - en: In this chapter, we explore the different parts of the aforementioned life cycle @@ -168,40 +201,51 @@ from this chapter, but also from the previous chapters, and combine them to see how we could effectively use them in building a production-quality, real-world application. + id: totrans-33 prefs: [] type: TYPE_NORMAL - en: Our journey begins with understanding the following tools from the Google ecosystem. + id: totrans-34 prefs: [] type: TYPE_NORMAL - en: TensorFlow Lite + id: totrans-35 prefs: [] type: TYPE_NORMAL - en: Model conversion and mobile inference engine. + id: totrans-36 prefs: [] type: TYPE_NORMAL - en: ML Kit + id: totrans-37 prefs: [] type: TYPE_NORMAL - en: High-level software development kit (SDK) with several built-in APIs, along with the ability to run custom TensorFlow Lite models as well as integration with Firebase on Google Cloud. + id: totrans-38 prefs: [] type: TYPE_NORMAL - en: Firebase + id: totrans-39 prefs: [] type: TYPE_NORMAL - en: A cloud-based framework that provides the necessary infrastructure for production-quality mobile applications, including analytics, crash reporting, A/B testing, push notifications, and more. + id: totrans-40 prefs: [] type: TYPE_NORMAL - en: TensorFlow Model Optimization Toolkit + id: totrans-41 prefs: [] type: TYPE_NORMAL - en: A set of tools for optimizing the size and performance of models. + id: totrans-42 prefs: [] type: TYPE_NORMAL - en: An Overview of TensorFlow Lite + id: totrans-43 prefs: - PREF_H1 type: TYPE_NORMAL @@ -211,13 +255,16 @@ this, the options within the TensorFlow ecosystem were porting the entire TensorFlow library itself to iOS (which was heavy and slow) and, later on, its slightly stripped-down version called TensorFlow Mobile (an improvement, but still fairly bulky). + id: totrans-44 prefs: [] type: TYPE_NORMAL - en: 'TensorFlow Lite is optimized from the ground up for mobile, with the following salient features:' + id: totrans-45 prefs: [] type: TYPE_NORMAL - en: Small + id: totrans-46 prefs: [] type: TYPE_NORMAL - en: TensorFlow Lite comes packaged with a much lighter interpreter. Even with all @@ -227,9 +274,11 @@ 1.5 MB. 
Additionally, TensorFlow Lite uses s*elective registration—*it packages only the operations that it knows will be used by the model, minimizing unnecessary overheads. + id: totrans-47 prefs: [] type: TYPE_NORMAL - en: Fast + id: totrans-48 prefs: [] type: TYPE_NORMAL - en: TensorFlow Lite provides a significant speedup because it is able to take advantage @@ -237,21 +286,28 @@ the Android ecosystem, it uses the Android Neural Networks API for acceleration. Analogously on iPhones, it uses the Metal API. Google claims two to seven times speedup over a range of tasks when using GPUs (relative to CPUs). + id: totrans-49 prefs: [] type: TYPE_NORMAL + zh: TensorFlow Lite提供了显著的加速,因为它能够利用设备上的硬件加速,如GPU和NPU(如果可用)。在Android生态系统中,它使用Android神经网络API进行加速。类似地,在iPhone上,它使用Metal + API。谷歌声称在使用GPU时(相对于CPU),在一系列任务中可以实现两到七倍的加速。 - en: TensorFlow uses Protocol Buffers (Protobufs) for deserialization/serialization. Protobufs are a powerful tool for representing data due to their flexibility and extensibility. However, that comes at a performance cost that can be felt on low-power devices such as mobiles. + id: totrans-50 prefs: [] type: TYPE_NORMAL + zh: TensorFlow使用Protocol Buffers(Protobufs)进行反序列化/序列化。Protobufs是一种表示数据的强大工具,由于其灵活性和可扩展性,但这会在低功耗设备(如移动设备)上产生性能成本。 - en: '*FlatBuffers* turned out to be the answer to this problem. Originally built for video game development, for which low overhead and high performance are a must, they proved to be a good solution for mobiles as well in significantly reducing the code footprint, memory usage, and CPU cycles spent for serialization and deserialization of models. This also improved start-up time by a fair amount.' + id: totrans-51 prefs: [] type: TYPE_NORMAL + zh: '*FlatBuffers*证明是解决这个问题的答案。最初为视频游戏开发构建,低开销和高性能是必须的,它们也被证明是移动设备的一个很好的解决方案,显著减少了代码占用空间、内存使用和用于模型序列化和反序列化的CPU周期。这也大大提高了启动时间。' - en: Within a network, there are some layers that have fixed computations at inference time; for example, the batch normalization layers, which can be precomputed because they rely on values obtained during training, such as mean and standard deviation. @@ -259,55 +315,82 @@ the previous layer’s computation ahead of time (i.e., during model conversion), thereby reducing the inference time and making the entire model much faster. This is known as *prefused activation*, which TensorFlow Lite supports. + id: totrans-52 prefs: [] type: TYPE_NORMAL + zh: 在网络中,有一些层在推断时具有固定的计算;例如,批量归一化层,可以预先计算,因为它们依赖于训练期间获得的值,如均值和标准差。因此,批量归一化层的计算可以与前一层的计算提前融合(即,在模型转换期间),从而减少推断时间,使整个模型更快。这被称为*预融合激活*,TensorFlow + Lite支持。 - en: The interpreter uses static memory and a static execution plan. This helps in decreasing the model load time. + id: totrans-53 prefs: [] type: TYPE_NORMAL + zh: 解释器使用静态内存和静态执行计划。这有助于减少模型加载时间。 - en: Fewer dependencies + id: totrans-54 prefs: [] type: TYPE_NORMAL + zh: 较少的依赖项 - en: The TensorFlow Lite codebase is mostly standard C/C++ with a minimal number of dependencies. It makes it easier to package and deploy and additionally reduces the size of the deployed package. + id: totrans-55 prefs: [] type: TYPE_NORMAL + zh: TensorFlow Lite的代码库主要是标准的C/C++,依赖项很少。这使得打包和部署更容易,同时还减小了部署包的大小。 - en: Supports custom operators + id: totrans-56 prefs: [] type: TYPE_NORMAL + zh: 支持自定义操作符 - en: TensorFlow Lite contains quantized and floating-point core operators, many of which have been tuned for mobile platforms, and can be used to create and run custom models. If TensorFlow Lite does not support an operation in our model, we can also write custom operators to get our model running. 
+ id: totrans-57 prefs: [] type: TYPE_NORMAL + zh: TensorFlow Lite包含量化和浮点核心操作符,其中许多已经针对移动平台进行了调整,可以用于创建和运行自定义模型。如果TensorFlow Lite不支持我们模型中的某个操作,我们也可以编写自定义操作符来使我们的模型运行起来。 - en: Before we build our initial Android app, it would be useful to examine TensorFlow Lite’s architecture. + id: totrans-58 prefs: [] type: TYPE_NORMAL + zh: 在构建我们的初始Android应用程序之前,检查TensorFlow Lite的架构会很有用。 - en: TensorFlow Lite Architecture + id: totrans-59 prefs: - PREF_H2 type: TYPE_NORMAL + zh: TensorFlow Lite架构 - en: '[Figure 13-2](part0015.html#high-level_architecture_of_the_tensorflo) provides a high-level view of the TensorFlow Lite architecture.' + id: totrans-60 prefs: [] type: TYPE_NORMAL + zh: '[图13-2](part0015.html#high-level_architecture_of_the_tensorflo)提供了TensorFlow + Lite架构的高级视图。' - en: '![High-level architecture of the TensorFlow Lite ecosystem](../images/00242.jpeg)' + id: totrans-61 prefs: [] type: TYPE_IMG + zh: '![TensorFlow Lite生态系统的高级架构](../images/00242.jpeg)' - en: Figure 13-2\. High-level architecture of the TensorFlow Lite ecosystem + id: totrans-62 prefs: - PREF_H6 type: TYPE_NORMAL + zh: 图13-2. TensorFlow Lite生态系统的高级架构 - en: As app developers, we will be working in the topmost layer while interacting with the TensorFlow Lite API (or optionally with ML Kit, which in turn uses TensorFlow Lite). The TensorFlow Lite API abstracts away all of the complexities involved in using a lower-level API such as the Android’s Neural Network API. Recall that this is similar to how Core ML works within the Apple ecosystem. + id: totrans-63 prefs: [] type: TYPE_NORMAL + zh: 作为应用开发者,我们将在顶层层中与TensorFlow Lite API(或者选择与ML Kit交互,后者又使用TensorFlow Lite)进行交互。TensorFlow + Lite API将所有使用较低级API(如Android的神经网络API)时涉及的复杂性抽象化。请记住,这类似于Core ML在Apple生态系统中的工作方式。 - en: Looking at the other extreme, computations can be run on various types of hardware modules. The most common among them is the CPU, simply because of its ubiquity and flexibility. Modern smartphones are increasingly equipped with specialized @@ -315,14 +398,18 @@ computations like on the iPhone X). Additionally, Digital Signal Processors (DSP) specialize in singular tasks such as facial authentication, fingerprint authentication, and wake word detection (like “Hey Siri”). + id: totrans-64 prefs: [] type: TYPE_NORMAL + zh: 从另一个极端来看,计算可以在各种类型的硬件模块上运行。其中最常见的是CPU,仅仅因为其普遍性和灵活性。现代智能手机越来越配备了专门的模块,包括GPU和新的NPU(专门用于神经网络计算,如iPhone + X上的)。此外,数字信号处理器(DSP)专门用于单一任务,如面部认证、指纹认证和唤醒词检测(如“嘿Siri”)。 - en: In the world of Internet of Things (IoT), microcontrollers (MCUs) reign supreme. With no OS, no processor, and very little memory (KBs), these are cheap to produce in mass quantities and easy to incorporate into various applications. With TensorFlow Lite for Microcontrollers, developers can run AI on these bare-metal devices without needing internet connectivity. The pared-down version (roughly 20 KB) of the TensorFlow Lite Interpreter for MCUs is called the TensorFlow Lite Micro Interpreter. + id: totrans-65 prefs: [] type: TYPE_NORMAL - en: So how does TensorFlow Lite interact with hardware? By using delegates that @@ -332,6 +419,7 @@ of graph execution that would have otherwise been run on the CPU and instead run on the much more efficient GPUs and NPUs. On Android, the GPU delegate accelerates performance using OpenGL, whereas on iOS, the Metal API is used. + id: totrans-66 prefs: [] type: TYPE_NORMAL - en: Given that TensorFlow Lite by itself is platform agnostic, it needs to call @@ -341,12 +429,15 @@ above). 
The Neural Network API is designed to provide a base layer of functionality for higher-level machine learning frameworks. The equivalent of the Neural Network API within the Apple world is Metal Performance Shaders. + id: totrans-67 prefs: [] type: TYPE_NORMAL - en: With the information we have looked at so far, let’s get hands on. + id: totrans-68 prefs: [] type: TYPE_NORMAL - en: Model Conversion to TensorFlow Lite + id: totrans-69 prefs: - PREF_H1 type: TYPE_NORMAL @@ -354,53 +445,67 @@ on ImageNet or custom trained in Keras). Before we can plug that model into an Android app, we need to convert it to the TensorFlow Lite format (a .*tflite* file). + id: totrans-70 prefs: [] type: TYPE_NORMAL - en: 'Let’s take a look at how to convert the model using the TensorFlow Lite Converter tool, the `tflite_convert` command that comes bundled with our TensorFlow installation:' + id: totrans-71 prefs: [] type: TYPE_NORMAL - en: '[PRE0]' + id: totrans-72 prefs: [] type: TYPE_PRE + zh: '[PRE0]' - en: The output of this command is the new *my_model.tflite* file, which we can then plug into the Android app in the following section. Later, we look at how to make that model more performant by using the `tflite_convert` tool again. Additionally, the TensorFlow Lite team has created many pretrained models that are available in TensorFlow Lite format, saving us this conversation step. + id: totrans-73 prefs: [] type: TYPE_NORMAL - en: Building a Real-Time Object Recognition App + id: totrans-74 prefs: - PREF_H1 type: TYPE_NORMAL - en: 'Running the sample app from the TensorFlow repository is an easy way to play with the TensorFlow Lite API. Note that we would need an Android phone or tablet to run the app. Following are steps to build and deploy the app:' + id: totrans-75 prefs: [] type: TYPE_NORMAL - en: 'Clone the TensorFlow repository:' + id: totrans-76 prefs: - PREF_OL type: TYPE_NORMAL - en: '[PRE1]' + id: totrans-77 prefs: - PREF_IND type: TYPE_PRE + zh: '[PRE1]' - en: Download and install Android Studio from [*https://developer.android.com/studio*](https://developer.android.com/studio). + id: totrans-78 prefs: - PREF_OL type: TYPE_NORMAL - en: Open Android Studio and then select “Open an existing Android Studio project” ([Figure 13-3](part0015.html#start_screen_of_android_studio)). + id: totrans-79 prefs: - PREF_OL type: TYPE_NORMAL - en: '![Start screen of Android Studio](../images/00020.jpeg)' + id: totrans-80 prefs: - PREF_IND type: TYPE_IMG - en: Figure 13-3\. Start screen of Android Studio + id: totrans-81 prefs: - PREF_IND - PREF_H6 @@ -408,15 +513,18 @@ - en: Go to the location of the cloned TensorFlow repository and then navigate further to *tensorflow/tensorflow/contrib/lite/java/demo/* ([Figure 13-4](part0015.html#android_studio_quotation_markopen_existi)). Select Open. + id: totrans-82 prefs: - PREF_OL type: TYPE_NORMAL - en: '![Android Studio “Open Existing Project” screen in the TensorFlow repository](../images/00308.jpeg)' + id: totrans-83 prefs: - PREF_IND type: TYPE_IMG - en: Figure 13-4\. Android Studio “Open Existing Project” screen in the TensorFlow repository + id: totrans-84 prefs: - PREF_IND - PREF_H6 @@ -424,189 +532,260 @@ - en: On the Android device, enable Developer Options. (Note that we used a Pixel device here, which uses the stock Android OS. For other manufacturers, the instructions might be a little different.) + id: totrans-85 prefs: - PREF_OL type: TYPE_NORMAL - en: Go to Settings. 
+ id: totrans-86 prefs: - PREF_IND - PREF_OL type: TYPE_NORMAL - en: Scroll down to the About Phone or About Tablet option ([Figure 13-5](part0015.html#system_information_screen_on_an_android)) and select it. + id: totrans-87 prefs: - PREF_IND - PREF_OL type: TYPE_NORMAL - en: '![System information screen on an Android phone; select the About Phone option here](../images/00253.jpeg)' + id: totrans-88 prefs: - PREF_IND - PREF_IND type: TYPE_IMG + zh: '![Android手机上的系统信息屏幕;在此处选择“关于手机”选项](../images/00253.jpeg)' - en: Figure 13-5\. System information screen on an Android phone; select the About Phone option here + id: totrans-89 prefs: - PREF_IND - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图13-5。Android手机上的系统信息屏幕;在此处选择“关于手机”选项 - en: Look for the Build Number row and tap it seven times. (Yeah, you read that right—seven!) + id: totrans-90 prefs: - PREF_IND - PREF_OL type: TYPE_NORMAL + zh: 查找“构建号”行并点击七次。(是的,你没看错——七次!) - en: You should see a message ([Figure 13-6](part0015.html#the_about_phone_screen_on_an_android_dev)) confirming that developer mode is enabled. + id: totrans-91 prefs: - PREF_IND - PREF_OL type: TYPE_NORMAL + zh: 您应该看到一条消息([图13-6](part0015.html#the_about_phone_screen_on_an_android_dev)),确认开发者模式已启用。 - en: '![The About Phone screen on an Android device](../images/00050.jpeg)' + id: totrans-92 prefs: - PREF_IND - PREF_IND type: TYPE_IMG + zh: '![Android设备上的“关于手机”屏幕](../images/00050.jpeg)' - en: Figure 13-6\. The About Phone screen on an Android device + id: totrans-93 prefs: - PREF_IND - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图13-6。Android设备上的“关于手机”屏幕 - en: If you are using a phone, tap the back button to go back to the previous menu. + id: totrans-94 prefs: - PREF_IND - PREF_OL type: TYPE_NORMAL + zh: 如果您正在使用手机,请点击返回按钮返回到上一个菜单。 - en: You should see a “Developer options” button, directly above the “About phone” or “About tablet” option ([Figure 13-7](part0015.html#the_system_information_screen_showing_qu)). Tab this button to reveal the “Developer options” menu ([Figure 13-8](part0015.html#quotation_markdeveloper_optionsquotation)). + id: totrans-95 prefs: - PREF_IND - PREF_OL type: TYPE_NORMAL + zh: 您应该看到一个“开发者选项”按钮,直接位于“关于手机”或“关于平板电脑”选项的上方([图13-7](part0015.html#the_system_information_screen_showing_qu))。点击此按钮以显示“开发者选项”菜单([图13-8](part0015.html#quotation_markdeveloper_optionsquotation))。 - en: '![The System information screen showing “Developer options” enabled](../images/00241.jpeg)' + id: totrans-96 prefs: - PREF_IND - PREF_IND type: TYPE_IMG + zh: '![显示“开发者选项”已启用的系统信息屏幕](../images/00241.jpeg)' - en: Figure 13-7\. The System information screen showing “Developer options” enabled + id: totrans-97 prefs: - PREF_IND - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图13-7。显示“开发者选项”已启用的系统信息屏幕 - en: '![“Developer options” screen on an Android device with USB debugging enabled](../images/00150.jpeg)' + id: totrans-98 prefs: - PREF_IND - PREF_IND type: TYPE_IMG + zh: '![启用USB调试的Android设备上的“开发者选项”屏幕](../images/00150.jpeg)' - en: Figure 13-8\. “Developer options” screen on an Android device with USB debugging enabled + id: totrans-99 prefs: - PREF_IND - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图13-8。启用USB调试的Android设备上的“开发者选项”屏幕 - en: Plug the Android device into the computer via a USB cable. + id: totrans-100 prefs: - PREF_OL type: TYPE_NORMAL + zh: 通过USB电缆将Android设备连接到计算机。 - en: The Android device might show a message asking to allow USB debugging. Enable “Always allow this computer,” and then select OK ([Figure 13-9](part0015.html#allow_usb_debugging_on_the_displayed_ale)). 
+ id: totrans-101 prefs: - PREF_OL type: TYPE_NORMAL + zh: Android设备可能会显示一条消息,要求允许USB调试。启用“始终允许此计算机”,然后选择确定([图13-9](part0015.html#allow_usb_debugging_on_the_displayed_ale))。 - en: '![Allow USB debugging on the displayed alert](../images/00103.jpeg)' + id: totrans-102 prefs: - PREF_IND type: TYPE_IMG + zh: '![在显示的警报上允许USB调试](../images/00103.jpeg)' - en: Figure 13-9\. Allow USB debugging on the displayed alert + id: totrans-103 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图13-9。在显示的警报上允许USB调试 - en: In Android Studio, on the Debug toolbar bar ([Figure 13-10](part0015.html#debug_toolbar_in_android_studio)), click the Run App button (the right-facing triangle). + id: totrans-104 prefs: - PREF_OL type: TYPE_NORMAL + zh: 在Android Studio中,在调试工具栏上([图13-10](part0015.html#debug_toolbar_in_android_studio)),点击“运行应用”按钮(右向三角形)。 - en: '![Debug toolbar in Android Studio](../images/00066.jpeg)' + id: totrans-105 prefs: - PREF_IND type: TYPE_IMG + zh: '![Android Studio中的调试工具栏](../images/00066.jpeg)' - en: Figure 13-10\. Debug toolbar in Android Studio + id: totrans-106 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图13-10。Android Studio中的调试工具栏 - en: A window opens displaying all the available devices and emulators ([Figure 13-11](part0015.html#select_the_phone_from_the_deployment_tar)). Choose your device, and then select OK. + id: totrans-107 prefs: - PREF_OL type: TYPE_NORMAL + zh: 一个窗口打开,显示所有可用的设备和模拟器([图13-11](part0015.html#select_the_phone_from_the_deployment_tar))。选择您的设备,然后选择确定。 - en: '![Select the phone from the deployment target selection screen](../images/00023.jpeg)' + id: totrans-108 prefs: - PREF_IND type: TYPE_IMG + zh: '![从部署目标选择屏幕中选择手机](../images/00023.jpeg)' - en: Figure 13-11\. Select the phone from the deployment target selection screen + id: totrans-109 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图13-11。从部署目标选择屏幕中选择手机 - en: The app should install and begin running on our phone. + id: totrans-110 prefs: - PREF_OL type: TYPE_NORMAL + zh: 应用程序应该安装并开始在我们的手机上运行。 - en: The app will request permission for your camera; go ahead and grant it permission. + id: totrans-111 prefs: - PREF_OL type: TYPE_NORMAL + zh: 应用程序将请求您的相机权限;请继续授予权限。 - en: A live view of the camera should appear, along with real-time predictions of object classification, plus the number of seconds it took to make the prediction, as shown in [Figure 13-12](part0015.html#the_app_up-and-running_appcomma_showing). + id: totrans-112 prefs: - PREF_OL type: TYPE_NORMAL + zh: 相机的实时视图应该出现,以及实时对象分类预测,以及进行预测所需的秒数,如[图13-12](part0015.html#the_app_up-and-running_appcomma_showing)所示。 - en: '![The app up-and-running app, showing real-time predictions](../images/00161.jpeg)' + id: totrans-113 prefs: - PREF_IND type: TYPE_IMG + zh: '![应用程序正在运行,显示实时预测](../images/00161.jpeg)' - en: Figure 13-12\. The app up-and-running app, showing real-time predictions + id: totrans-114 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图13-12。应用程序正在运行,显示实时预测 - en: And there you have it! We have a basic app running on the phone that takes video frames and classifies them. It’s simple and it works reasonably well. 
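- en: 'Before plugging a custom trained model into an app like this, it can save debugging
    time to sanity-check the converted *.tflite* file on a desktop first. The following
    is a minimal sketch (an illustration, not code from this chapter) that runs a single
    inference with the TensorFlow Lite Python interpreter; the file name *my_model.tflite*
    and the 224x224 floating-point input are assumptions:'
  prefs: []
  type: TYPE_NORMAL
- en: |-
    import numpy as np
    import tensorflow as tf

    # Load the converted flatbuffer and allocate its input/output tensors.
    interpreter = tf.lite.Interpreter(model_path="my_model.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Feed a dummy image shaped like the model's expected input
    # (assumed here to be one 224x224 RGB image with values in [-1, 1]).
    input_shape = input_details[0]["shape"]
    dummy_image = np.random.uniform(-1, 1, size=input_shape).astype(np.float32)

    interpreter.set_tensor(input_details[0]["index"], dummy_image)
    interpreter.invoke()

    predictions = interpreter.get_tensor(output_details[0]["index"])
    print("Top class index:", np.argmax(predictions[0]))
  prefs: []
  type: TYPE_PRE
- en: If the prediction looks sensible for a known test image, the same file can be
    bundled into the Android app or hosted on Firebase without further changes.
  prefs: []
  type: TYPE_NORMAL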
+ id: totrans-115 prefs: [] type: TYPE_NORMAL + zh: 就是这样!我们在手机上运行了一个基本的应用程序,它可以拍摄视频帧并对其进行分类。它简单而且运行得相当不错。 - en: 'Beyond object classification, the TensorFlow Lite repository also has sample apps (iOS and Android) for many other AI problems, including the following:' + id: totrans-116 prefs: [] type: TYPE_NORMAL + zh: 除了对象分类之外,TensorFlow Lite存储库还有许多其他AI问题的示例应用程序(iOS和Android),包括以下内容: - en: Object detection + id: totrans-117 prefs: - PREF_UL type: TYPE_NORMAL + zh: 对象检测 - en: Pose estimation + id: totrans-118 prefs: - PREF_UL type: TYPE_NORMAL + zh: 姿势估计 - en: Gesture recognition + id: totrans-119 prefs: - PREF_UL type: TYPE_NORMAL + zh: 手势识别 - en: Speech recognition + id: totrans-120 prefs: - PREF_UL type: TYPE_NORMAL + zh: 语音识别 - en: The great thing about having these sample apps is that with basic instructions, someone without a mobile development background can get them running on a phone. Even better, if we have a custom trained model, we can plug it into the app and see it run for our custom task. + id: totrans-121 prefs: [] type: TYPE_NORMAL + zh: 拥有这些示例应用程序的好处是,只要有基本的说明,没有移动开发背景的人就可以在手机上运行它们。更好的是,如果我们有一个自定义训练的模型,我们可以将其插入应用程序并看到它为我们的自定义任务运行。 - en: This is great for starting out. However, things are a lot more complicated in the real world. Developers of serious real-world applications with thousands or even millions of users need to think beyond just inference—like updating and distributing @@ -615,9 +794,11 @@ Doing all of this in-house can be expensive, time consuming, and frankly unnecessary. Naturally, platforms that provide these features would be enticing. This is where ML Kit and Firebase come in. + id: totrans-122 prefs: [] type: TYPE_NORMAL - en: ML Kit + Firebase + id: totrans-123 prefs: - PREF_H1 type: TYPE_NORMAL @@ -626,20 +807,25 @@ ML tasks. By default, ML Kit comes with a generic feature set in vision and language intelligence. [Table 4-1](part0006.html#top_1percent_accuracy_and_feature_length) lists some of the common ML tasks that we can do in just a few lines. + id: totrans-124 prefs: [] type: TYPE_NORMAL - en: Table 13-1\. ML Kit built-in features + id: totrans-125 prefs: [] type: TYPE_NORMAL - en: '| **Vision** | **Language** |' + id: totrans-126 prefs: [] type: TYPE_TB - en: '| --- | --- |' + id: totrans-127 prefs: [] type: TYPE_TB - en: '| Object classificationObject detection and trackingPopular landmark detectionText recognitionFace detectionBarcode detection | Language identificationOn-device translationSmart replies |' + id: totrans-128 prefs: [] type: TYPE_TB - en: ML Kit also gives us the ability to use custom trained TensorFlow Lite models @@ -653,43 +839,53 @@ announce the labels in French. It is entirely possible to build these relatively quickly using ML Kit. Although many of these features are available in Core ML, too, ML Kit has the added advantage of being cross-platform. + id: totrans-129 prefs: [] type: TYPE_NORMAL - en: 'ML Kit, though, is only one piece of the puzzle. It integrates into Google’s Firebase, the mobile and web application development platform that is part of Google Cloud. 
Firebase offers an array of features that are necessary infrastructure for production-quality apps, such as the following:' + id: totrans-130 prefs: [] type: TYPE_NORMAL - en: Push notifications + id: totrans-131 prefs: - PREF_UL type: TYPE_NORMAL - en: Authentication + id: totrans-132 prefs: - PREF_UL type: TYPE_NORMAL - en: Crash reporting + id: totrans-133 prefs: - PREF_UL type: TYPE_NORMAL - en: Logging + id: totrans-134 prefs: - PREF_UL type: TYPE_NORMAL - en: Performance monitoring + id: totrans-135 prefs: - PREF_UL type: TYPE_NORMAL - en: Hosted device testing + id: totrans-136 prefs: - PREF_UL type: TYPE_NORMAL - en: A/B testing + id: totrans-137 prefs: - PREF_UL type: TYPE_NORMAL - en: Model management + id: totrans-138 prefs: - PREF_UL type: TYPE_NORMAL @@ -699,9 +895,11 @@ reference the model on ML Kit inside the app, and we’re good to go. The A/B testing feature gives us the ability to show different users different versions of the same model and measure the performance across the different models. + id: totrans-139 prefs: [] type: TYPE_NORMAL - en: Note + id: totrans-140 prefs: - PREF_H6 type: TYPE_NORMAL @@ -712,95 +910,137 @@ larger taxonomy (like thousands of object classes instead of hundreds). In fact, some functionality such as the landmark recognition feature works only on the cloud. + id: totrans-141 prefs: [] type: TYPE_NORMAL - en: The cloud processing option is particularly useful when we need a little bit of extra accuracy and/or the user’s phone has low processing power that prevents it from running the on-device model well. + id: totrans-142 prefs: [] type: TYPE_NORMAL - en: Object Classification in ML Kit + id: totrans-143 prefs: - PREF_H2 type: TYPE_NORMAL - en: 'For our previous task of object classification in real time, if we use ML Kit instead of vanilla TensorFlow Lite, we can simplify our code to just the following lines (in Kotlin):' + id: totrans-144 prefs: [] type: TYPE_NORMAL - en: '[PRE2]' + id: totrans-145 prefs: [] type: TYPE_PRE + zh: '[PRE2]' - en: Custom Models in ML Kit + id: totrans-146 prefs: - PREF_H2 type: TYPE_NORMAL + zh: ML Kit中的自定义模型 - en: 'In addition to the prebuilt models provided by ML Kit, we can also run our own custom models. These models must be in the TensorFlow Lite format. 
Following is a simple piece of code to load a custom model that’s bundled into the app:' + id: totrans-147 prefs: [] type: TYPE_NORMAL + zh: 除了ML Kit提供的预构建模型外,我们还可以运行自己的自定义模型。这些模型必须是TensorFlow Lite格式。以下是一个简单的代码片段,用于加载打包到应用中的自定义模型: - en: '[PRE3]' + id: totrans-148 prefs: [] type: TYPE_PRE + zh: '[PRE3]' - en: 'Next, we specify the model’s input and output configuration (for a model that takes in an RGB image of size 224x224 and gives predictions for 1,000 class names):' + id: totrans-149 prefs: [] type: TYPE_NORMAL + zh: 接下来,我们指定模型的输入和输出配置(对于一个接收尺寸为224x224的RGB图像并为1,000个类别名称提供预测的模型): - en: '[PRE4]' + id: totrans-150 prefs: [] type: TYPE_PRE + zh: '[PRE4]' - en: 'Next, we create an array of a single image and normalize each pixel to the range [–1,1]:' + id: totrans-151 prefs: [] type: TYPE_NORMAL + zh: 接下来,我们创建一个单个图像数组,并将每个像素归一化到范围[-1,1]: - en: '[PRE5]' + id: totrans-152 prefs: [] type: TYPE_PRE + zh: '[PRE5]' - en: 'Now, we set up an interpreter based on our custom model:' + id: totrans-153 prefs: [] type: TYPE_NORMAL + zh: 现在,我们基于我们的自定义模型设置一个解释器: - en: '[PRE6]' + id: totrans-154 prefs: [] type: TYPE_PRE + zh: '[PRE6]' - en: 'Next, we run our input batch on the interpreter:' + id: totrans-155 prefs: [] type: TYPE_NORMAL + zh: 接下来,我们在解释器上运行我们的输入批处理: - en: '[PRE7]' + id: totrans-156 prefs: [] type: TYPE_PRE + zh: '[PRE7]' - en: 'Yup, it’s really that simple! Here, we’ve seen how we can bundle custom models along with the app. Sometimes, we might want the app to dynamically download the model from the cloud for reasons such as the following:' + id: totrans-157 prefs: [] type: TYPE_NORMAL + zh: 是的,就是这么简单!在这里,我们看到了如何将自定义模型与应用捆绑在一起。有时,我们可能希望应用动态从云端下载模型,原因如下: - en: We want to keep the default app size small on the Play Store so as to not prevent users with data usage constraints from downloading our app. + id: totrans-158 prefs: - PREF_UL type: TYPE_NORMAL + zh: 我们希望在Play商店上保持默认应用大小较小,以免阻止有数据使用限制的用户下载我们的应用。 - en: We want to experiment with a different variety of models and pick the best one based on the available metrics. + id: totrans-159 prefs: - PREF_UL type: TYPE_NORMAL + zh: 我们希望尝试不同种类的模型,并根据可用的指标选择最佳模型。 - en: We want the user to have the latest and greatest model, without having to go through the whole app release process. + id: totrans-160 prefs: - PREF_UL type: TYPE_NORMAL + zh: 我们希望用户拥有最新和最好的模型,而无需经历整个应用发布流程。 - en: The feature that needs the model might be optional, and we want to conserve space on the user’s device. + id: totrans-161 prefs: - PREF_UL type: TYPE_NORMAL + zh: 需要模型的功能可能是可选的,我们希望节省用户设备上的空间。 - en: This brings us to hosted models. + id: totrans-162 prefs: [] type: TYPE_NORMAL + zh: 这带我们来到了托管模型。 - en: Hosted Models + id: totrans-163 prefs: - PREF_H2 type: TYPE_NORMAL + zh: 托管模型 - en: ML Kit, along with Firebase, gives us the ability to upload and store our model on Google Cloud and download it from the app when needed. After the model is downloaded, it functions exactly like it would have if we had bundled the model into the app. @@ -808,249 +1048,354 @@ having to do an entire release of the app. Also, it lets us do experiments with our models to see which ones perform best in the real world. For hosted models, there are two aspects that we need to look at. 
+ id: totrans-164 prefs: [] type: TYPE_NORMAL + zh: ML Kit与Firebase一起,使我们能够在Google Cloud上上传和存储我们的模型,并在需要时从应用中下载。模型下载后,其功能与将模型捆绑到应用中完全相同。此外,它还为我们提供了推送模型更新的能力,而无需对应用进行整体发布。此外,它还让我们可以在真实世界中对我们的模型进行实验,以查看哪些模型在实际中表现最佳。对于托管模型,我们需要看两个方面。 - en: Accessing a hosted model + id: totrans-165 prefs: - PREF_H3 type: TYPE_NORMAL + zh: 访问托管模型 - en: The following lines inform Firebase that we’d like to use the model named `my_remote_custom_model:` + id: totrans-166 prefs: [] type: TYPE_NORMAL + zh: 以下行通知Firebase我们想使用名为`my_remote_custom_model`的模型: - en: '[PRE8]' + id: totrans-167 prefs: [] type: TYPE_PRE + zh: '[PRE8]' - en: Notice that we set `enableModelUpdates` to enable us to push updates to the model from the cloud to the device. We can also optionally configure the conditions under which the model would be downloaded for the first time versus every subsequent time—whether the device is idle, whether it’s currently charging, and whether the download is restricted to WiFi networks only. + id: totrans-168 prefs: [] type: TYPE_NORMAL + zh: 请注意,我们将`enableModelUpdates`设置为使我们能够从云端向设备推送模型更新。我们还可以选择配置模型首次下载的条件与每次下载的条件——设备是否空闲,当前是否正在充电,下载是否仅限于WiFi网络等。 - en: 'Next, we set up an interpreter much like we did with our local model:' + id: totrans-169 prefs: [] type: TYPE_NORMAL + zh: 接下来,我们设置一个解释器,就像我们在本地模型中所做的那样: - en: '[PRE9]' + id: totrans-170 prefs: [] type: TYPE_PRE + zh: '[PRE9]' - en: After this point, the code to perform the prediction would look exactly the same as that for local models. + id: totrans-171 prefs: [] type: TYPE_NORMAL + zh: 在此之后,执行预测的代码看起来与本地模型的代码完全相同。 - en: Next, we discuss the other aspect to hosted models—uploading the model. + id: totrans-172 prefs: [] type: TYPE_NORMAL + zh: 接下来,我们讨论托管模型的另一个方面——上传模型。 - en: Uploading a hosted model + id: totrans-173 prefs: - PREF_H3 type: TYPE_NORMAL + zh: 上传托管模型 - en: As of this writing, Firebase supports only models hosted on GCP. In this section, we walk through the simple process of creating, uploading, and storing a hosted model. This subsection presumes that we already have an existing GCP account. + id: totrans-174 prefs: [] type: TYPE_NORMAL + zh: 截至撰写本文时,Firebase仅支持托管在GCP上的模型。在本节中,我们将介绍创建、上传和存储托管模型的简单过程。本小节假设我们已经拥有现有的GCP帐户。 - en: 'The following lists the steps that we need to take to get a model hosted on the cloud:' + id: totrans-175 prefs: [] type: TYPE_NORMAL + zh: 以下列出了我们需要采取的步骤来将模型托管在云端: - en: Go to [*https://console.firebase.google.com*](https://console.firebase.google.com). Select an existing project or add a new one ([Figure 13-13](part0015.html#homepage_of_google_cloud_firebase)). + id: totrans-176 prefs: - PREF_OL type: TYPE_NORMAL + zh: 前往[*https://console.firebase.google.com*](https://console.firebase.google.com)。选择一个现有项目或添加一个新项目([图13-13](part0015.html#homepage_of_google_cloud_firebase))。 - en: '![Home page of Google Cloud Firebase](../images/00279.jpeg)' + id: totrans-177 prefs: - PREF_IND type: TYPE_IMG + zh: '![Google Cloud Firebase的主页](../images/00279.jpeg)' - en: Figure 13-13\. Home page of Google Cloud Firebase + id: totrans-178 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图13-13。Google Cloud Firebase的主页 - en: On the Project Overview screen, create an Android app ([Figure 13-14](part0015.html#the_project_overview_screen_on_google_cl)). 
+ id: totrans-179 prefs: - PREF_OL type: TYPE_NORMAL + zh: 在项目概述屏幕上,创建一个Android应用([图13-14](part0015.html#the_project_overview_screen_on_google_cl))。 - en: '![The Project Overview screen on Google Cloud Firebase](../images/00229.jpeg)' + id: totrans-180 prefs: - PREF_IND type: TYPE_IMG + zh: '![Google Cloud Firebase上的项目概述屏幕](../images/00229.jpeg)' - en: Figure 13-14\. The Project Overview screen on Google Cloud Firebase + id: totrans-181 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图13-14。Google Cloud Firebase上的项目概述屏幕 - en: Use the app ID from the project in Android Studio ([Figure 13-15](part0015.html#app_creation_screen_on_firebase)). + id: totrans-182 prefs: - PREF_OL type: TYPE_NORMAL + zh: 在Android Studio中使用项目的应用ID([图13-15](part0015.html#app_creation_screen_on_firebase))。 - en: '![App creation screen on Firebase](../images/00260.jpeg)' + id: totrans-183 prefs: - PREF_IND type: TYPE_IMG + zh: '![Firebase上的应用创建屏幕](../images/00260.jpeg)' - en: Figure 13-15\. App creation screen on Firebase + id: totrans-184 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图13-15。Firebase上的应用创建屏幕 - en: After clicking “Register app,” download the configuration file. This configuration file gives the necessary credentials to the app for it to access our cloud account. Add the configuration file and the Firebase SDK to the Android app as shown on the app creation page. + id: totrans-185 prefs: - PREF_OL type: TYPE_NORMAL + zh: 点击“注册应用程序”后,下载配置文件。这个配置文件为应用程序提供了访问我们云账户所需的凭据。按照应用程序创建页面上显示的方式将配置文件和Firebase + SDK添加到Android应用程序中。 - en: In the ML Kit section, select Get Started, and then select “Add custom model” ([Figure 13-16](part0015.html#the_ml_kit_custom_models_tab)). + id: totrans-186 prefs: - PREF_OL type: TYPE_NORMAL + zh: 在ML Kit部分,选择开始,然后选择“添加自定义模型”([图13-16](part0015.html#the_ml_kit_custom_models_tab))。 - en: '![The ML Kit custom models tab](../images/00158.jpeg)' + id: totrans-187 prefs: - PREF_IND type: TYPE_IMG + zh: '![ML Kit自定义模型选项卡](../images/00158.jpeg)' - en: Figure 13-16\. The ML Kit custom models tab + id: totrans-188 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图13-16。ML Kit自定义模型选项卡 - en: In the name field, enter `my_remote_custom_model` to match the name in the code. + id: totrans-189 prefs: - PREF_OL type: TYPE_NORMAL + zh: 在名称字段中,输入`my_remote_custom_model`以匹配代码中的名称。 - en: Upload the model file from your computer ([Figure 13-17](part0015.html#uploading_a_tensorflow_lite_model_file_t)). + id: totrans-190 prefs: - PREF_OL type: TYPE_NORMAL + zh: 从计算机上传模型文件([图13-17](part0015.html#uploading_a_tensorflow_lite_model_file_t))。 - en: '![Uploading a TensorFlow Lite model file to Firebase](../images/00106.jpeg)' + id: totrans-191 prefs: - PREF_IND type: TYPE_IMG + zh: '![将TensorFlow Lite模型文件上传到Firebase](../images/00106.jpeg)' - en: Figure 13-17\. Uploading a TensorFlow Lite model file to Firebase + id: totrans-192 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图13-17。将TensorFlow Lite模型文件上传到Firebase - en: Tap the “Publish” button after the file upload completes. + id: totrans-193 prefs: - PREF_OL type: TYPE_NORMAL + zh: 文件上传完成后,点击“发布”按钮。 - en: That’s it! Our model is now ready to be accessed and used from the app dynamically. Next, we examine how we can do A/B testing between models using Firebase. 
+ id: totrans-194 prefs: [] type: TYPE_NORMAL + zh: 就是这样!我们的模型现在已经准备好从应用程序动态访问和使用。接下来,我们将探讨如何使用Firebase在模型之间进行A/B测试。 - en: A/B Testing Hosted Models + id: totrans-195 prefs: - PREF_H2 type: TYPE_NORMAL + zh: A/B测试托管模型 - en: Let’s take the scenario in which we had a version 1 model named `my_model_v1` to start off with and deployed it to our users. After some usage by them, we obtained more data that we were able to train on. The result of this training was `my_model_v2` ([Figure 13-18](part0015.html#currently_uploaded_custom_models_to_fire)). We want to assess whether this new version would give us better results. This is where A/B testing comes in. + id: totrans-196 prefs: [] type: TYPE_NORMAL + zh: 让我们假设我们有一个名为`my_model_v1`的版本1模型,最初部署给我们的用户。在用户使用一段时间后,我们获得了更多数据可以进行训练。这次训练的结果是`my_model_v2`([图13-18](part0015.html#currently_uploaded_custom_models_to_fire))。我们想评估这个新版本是否会给我们带来更好的结果。这就是A/B测试的用武之地。 - en: '![Currently uploaded custom models to Firebase](../images/00070.jpeg)' + id: totrans-197 prefs: [] type: TYPE_IMG + zh: '![当前上传的自定义模型到Firebase](../images/00070.jpeg)' - en: Figure 13-18\. Currently uploaded custom models to Firebase + id: totrans-198 prefs: - PREF_H6 type: TYPE_NORMAL + zh: 图13-18。当前上传的自定义模型到Firebase - en: 'Widely used by industry, A/B testing is a statistical hypothesis testing technique, which answers the question “Is B better than A?” Here A and B could be anything of the same kind: content on a website, design elements on a phone app, or even deep learning models. A/B testing is a really useful feature for us when actively developing a model and discovering how our users respond to different iterations of the model.' + id: totrans-199 prefs: [] type: TYPE_NORMAL + zh: A/B测试被行业广泛使用,是一种统计假设检验技术,回答了“B是否比A更好?”的问题。这里的A和B可以是同类的任何东西:网站上的内容、手机应用程序上的设计元素,甚至是深度学习模型。在积极开发模型并发现用户对模型不同迭代的反应时,A/B测试是一个非常有用的功能。 - en: 'Users have been using `my_model_v1` for some time now, and we’d like to see whether the v2 iteration has our users going gaga. We’d like to start slow; maybe just 10% of our users should get v2\. For that, we can set up an A/B testing experiment as follows:' + id: totrans-200 prefs: [] type: TYPE_NORMAL + zh: 用户已经使用`my_model_v1`一段时间了,我们想看看v2版本是否让我们的用户疯狂。我们想慢慢开始;也许只有10%的用户应该得到v2。为此,我们可以设置一个A/B测试实验如下: - en: In Firebase, click the A/B Testing section, and then select “Create experiment” ([Figure 13-19](part0015.html#asolidusb_testing_screen_in_firebase_whe)). + id: totrans-201 prefs: - PREF_OL type: TYPE_NORMAL + zh: 在Firebase中,点击A/B测试部分,然后选择“创建实验”([图13-19](part0015.html#asolidusb_testing_screen_in_firebase_whe))。 - en: '![A/B testing screen in Firebase where we can create an experiment](../images/00029.jpeg)' + id: totrans-202 prefs: - PREF_IND type: TYPE_IMG + zh: '![在Firebase中进行A/B测试的屏幕,我们可以创建一个实验](../images/00029.jpeg)' - en: Figure 13-19\. A/B testing screen in Firebase where we can create an experiment + id: totrans-203 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图13-19。在Firebase中进行A/B测试的屏幕,我们可以创建一个实验 - en: Select the Remote Config option. + id: totrans-204 prefs: - PREF_OL type: TYPE_NORMAL + zh: 选择远程配置选项。 - en: In the Basics section, in the “Experiment name” box, enter the experiment name and an optional description ([Figure 13-20](part0015.html#the_basics_section_of_the_screen_to_crea)), and then click Next. 
+ id: totrans-205 prefs: - PREF_OL type: TYPE_NORMAL + zh: 在基础部分中,在“实验名称”框中输入实验名称和可选描述([图13-20](part0015.html#the_basics_section_of_the_screen_to_crea)),然后点击下一步。 - en: '![The Basics section of the screen to create a remote configuration experiment](../images/00315.jpeg)' + id: totrans-206 prefs: - PREF_IND type: TYPE_IMG + zh: '![创建远程配置实验屏幕的基础部分](../images/00315.jpeg)' - en: Figure 13-20\. The Basics section of the screen to create a remote configuration experiment + id: totrans-207 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图13-20。创建远程配置实验屏幕的基础部分 - en: In the Targeting section that opens, from the “Target users” drop-down menu, select our app and enter the percentage of target users ([Figure 13-21](part0015.html#the_targeting_section_of_the_remote_conf)). + id: totrans-208 prefs: - PREF_OL type: TYPE_NORMAL + zh: 在打开的定位部分中,从“目标用户”下拉菜单中选择我们的应用程序,并输入目标用户的百分比([图13-21](part0015.html#the_targeting_section_of_the_remote_conf))。 - en: '![The Targeting section of the Remote Config screen](../images/00274.jpeg)' + id: totrans-209 prefs: - PREF_IND type: TYPE_IMG + zh: '![远程配置屏幕的定位部分](../images/00274.jpeg)' - en: Figure 13-21\. The Targeting section of the Remote Config screen + id: totrans-210 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL + zh: 图13-21。远程配置屏幕的定位部分 - en: Select a goal metric that makes sense. We discuss this in a little more detail in the next section. + id: totrans-211 prefs: - PREF_OL type: TYPE_NORMAL + zh: 选择一个有意义的目标指标。我们将在下一节中更详细地讨论这个问题。 - en: In the variants section ([Figure 13-22](part0015.html#the_variants_section_of_the_remote_confi)), create a new parameter called `model_name` that reflects the name of the model a particular user would use. The control group gets the default model, which is `my_model_v1`. We also create an additional variant with the name `my_model_v2,` which goes to 10% of the users. + id: totrans-212 prefs: - PREF_OL type: TYPE_NORMAL + zh: 在变体部分([图13-22](part0015.html#the_variants_section_of_the_remote_confi)),创建一个名为`model_name`的新参数,反映了特定用户将使用的模型的名称。对照组使用默认模型,即`my_model_v1`。我们还创建了一个名为`my_model_v2`的额外变体,分配给10%的用户。 - en: '![The Variants section of the Remote Config screen](../images/00234.jpeg)' + id: totrans-213 prefs: - PREF_IND type: TYPE_IMG + zh: '![远程配置屏幕的变体部分](../images/00234.jpeg)' - en: Figure 13-22\. The Variants section of the Remote Config screen + id: totrans-214 prefs: - PREF_IND - PREF_H6 type: TYPE_NORMAL - en: Select Review and then select “Start experiment.” Over time, we can increase the distribution of users using the variant. + id: totrans-215 prefs: - PREF_OL type: TYPE_NORMAL - en: Ta-da! Now we have our experiment up and running. + id: totrans-216 prefs: [] type: TYPE_NORMAL - en: Measuring an experiment + id: totrans-217 prefs: - PREF_H3 type: TYPE_NORMAL @@ -1059,12 +1404,15 @@ days to a few weeks. The success of the experiments can be determined by any number of criteria. Google provides a few metrics out of the box that we can use, as shown in [Figure 13-23](part0015.html#analytics_available_when_setting_up_an_a). + id: totrans-218 prefs: [] type: TYPE_NORMAL - en: '![Analytics available when setting up an A/B testing experiment](../images/00198.jpeg)' + id: totrans-219 prefs: [] type: TYPE_IMG - en: Figure 13-23\. Analytics available when setting up an A/B testing experiment + id: totrans-220 prefs: - PREF_H6 type: TYPE_NORMAL @@ -1076,9 +1424,11 @@ conclude the opposite if there were no increase/decrease in revenue per user. For successful experiments, we want to slowly roll them out to all users. 
At that point, it ceases to be an experiment and it “graduates” to become a core offering. + id: totrans-221 prefs: [] type: TYPE_NORMAL - en: Using the Experiment in Code + id: totrans-222 prefs: - PREF_H2 type: TYPE_NORMAL @@ -1088,17 +1438,22 @@ The model name that we get from the remote configuration object will depend on whether the user is included in the experiment. The following lines of code accomplish that:' + id: totrans-223 prefs: [] type: TYPE_NORMAL - en: '[PRE10]' + id: totrans-224 prefs: [] type: TYPE_PRE + zh: '[PRE10]' - en: The rest of the code to perform the prediction remains exactly the same as in the previous sections. Our app is now ready to use the correct model as dictated by our experiment. + id: totrans-225 prefs: [] type: TYPE_NORMAL - en: TensorFlow Lite on iOS + id: totrans-226 prefs: - PREF_H1 type: TYPE_NORMAL @@ -1113,18 +1468,22 @@ models, model A/B testing, and cloud fallback for processing. All of this without having to do much extra work. A developer writing a deep learning app for both iOS and Android might consider using ML Kit as a way to “build once, use everywhere.” + id: totrans-227 prefs: [] type: TYPE_NORMAL - en: Performance Optimizations + id: totrans-228 prefs: - PREF_H1 type: TYPE_NORMAL - en: In [Chapter 6](part0008.html#7K4G3-13fa565533764549a6f0ab7f11eed62b), we explored quantization and pruning, mostly from a theoretical standpoint. Let’s see them up close from TensorFlow Lite’s perspective and the tools to achieve them. + id: totrans-229 prefs: [] type: TYPE_NORMAL - en: Quantizing with TensorFlow Lite Converter + id: totrans-230 prefs: - PREF_H2 type: TYPE_NORMAL @@ -1135,16 +1494,21 @@ input and output (which can be inspected using Netron, as shown in [Chapter 11](part0013.html#CCNA3-13fa565533764549a6f0ab7f11eed62b)). Going from 32-bit to 8-bit integer representation means a four times smaller model, with relatively little loss in accuracy. + id: totrans-231 prefs: [] type: TYPE_NORMAL - en: '[PRE11]' + id: totrans-232 prefs: [] type: TYPE_PRE + zh: '[PRE11]' - en: When it’s finished, this command should give us the `quantized-model.tflite` model. + id: totrans-233 prefs: [] type: TYPE_NORMAL - en: TensorFlow Model Optimization Toolkit + id: totrans-234 prefs: - PREF_H2 type: TYPE_NORMAL @@ -1154,30 +1518,36 @@ in accuracy. Could we do better? *Quantization-aware training*, as the name suggests, accounts for the effects of quantization during training time and attempts to compensate and minimize the losses that would have happened in post-training quantization. + id: totrans-235 prefs: [] type: TYPE_NORMAL - en: 'Although both forms of quantization offer 75% reduction in the size of the model, experiments have shown the following:' + id: totrans-236 prefs: [] type: TYPE_NORMAL - en: In the case of MobileNetV2, compared to an eight-point loss in accuracy with post-training quantization, quantization-aware training yielded only a one-point loss. + id: totrans-237 prefs: - PREF_UL type: TYPE_NORMAL - en: For InceptionV3, quantization-aware training yielded a whopping 52% reduction in latency, compared to 25% reduction with post-training quantization. + id: totrans-238 prefs: - PREF_UL type: TYPE_NORMAL - en: Note + id: totrans-239 prefs: - PREF_H6 type: TYPE_NORMAL - en: It is worth noting that these accuracy metrics are on the 1,000 class ImageNet test set. Most problems have less complexity with a smaller number of classes. Post-training quantization should result in a smaller loss on such simpler problems. 
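- en: 'To make the comparison in the note above more concrete, here is a rough sketch
    of quantization-aware training using the TensorFlow Model Optimization Toolkit
    with the TensorFlow 2.x Keras API (the command shown earlier used the tflite_convert
    CLI instead). The tiny stand-in model and the commented-out fit() call are assumptions;
    substitute your own model and data:'
  prefs: []
  type: TYPE_NORMAL
- en: |-
    import tensorflow as tf
    import tensorflow_model_optimization as tfmot

    # A small stand-in Keras classifier (assumption; use your own model here).
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(224, 224, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])

    # Insert fake-quantization nodes so that training "sees" the effects of
    # 8-bit quantization and learns to compensate for them.
    q_aware_model = tfmot.quantization.keras.quantize_model(model)
    q_aware_model.compile(optimizer="adam",
                          loss="categorical_crossentropy",
                          metrics=["accuracy"])
    # q_aware_model.fit(train_images, train_labels, epochs=5)  # fine-tune as usual

    # Convert the fine-tuned model to a quantized TensorFlow Lite flatbuffer.
    converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    with open("quantized-model.tflite", "wb") as f:
        f.write(converter.convert())
  prefs: []
  type: TYPE_PRE
- en: Post-training quantization is even simpler; skip the quantize_model wrapping
    and the fine-tuning step, and just set converter.optimizations on a normally trained
    model.
  prefs: []
  type: TYPE_NORMAL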
+ id: totrans-240 prefs: [] type: TYPE_NORMAL - en: Quantization-aware training can be implemented with the TensorFlow Model Optimization @@ -1185,43 +1555,56 @@ compression, including pruning. Additionally, the TensorFlow Lite model repository already offers these prequantized models using this technique. [Table 13-2](part0015.html#effects_of_different_quantizatio-id00001) lists the effects of various quantization strategies. + id: totrans-241 prefs: [] type: TYPE_NORMAL - en: 'Table 13-2\. Effects of different quantization strategies (8-bit) on models (source: TensorFlow Lite Model optimization documentation)' + id: totrans-242 prefs: [] type: TYPE_NORMAL - en: '| Model | MobileNet | MobileNetV2 | InceptionV3 |' + id: totrans-243 prefs: [] type: TYPE_TB - en: '| --- | --- | --- | --- |' + id: totrans-244 prefs: [] type: TYPE_TB - en: '| **Top-1 accuracy** | **Original** | 0.709 | 0.719 | 0.78 |' + id: totrans-245 prefs: [] type: TYPE_TB - en: '| **Post-training quantized** | 0.657 | 0.637 | 0.772 |' + id: totrans-246 prefs: [] type: TYPE_TB - en: '| **Quantization-aware training** | 0.7 | 0.709 | 0.775 |' + id: totrans-247 prefs: [] type: TYPE_TB - en: '| **Latency (ms)** | **Original** | 124 | 89 | 1130 |' + id: totrans-248 prefs: [] type: TYPE_TB - en: '| **Post-training quantized** | 112 | 98 | 845 |' + id: totrans-249 prefs: [] type: TYPE_TB - en: '| **Quantization-aware training** | 64 | 54 | 543 |' + id: totrans-250 prefs: [] type: TYPE_TB - en: '| **Size (MB)** | **Original** | 16.9 | 14 | 95.7 |' + id: totrans-251 prefs: [] type: TYPE_TB - en: '| **Optimized** | 4.3 | 3.6 | 23.9 |' + id: totrans-252 prefs: [] type: TYPE_TB - en: Fritz + id: totrans-253 prefs: - PREF_H1 type: TYPE_NORMAL @@ -1238,28 +1621,35 @@ zone). Fritz, a Boston-based startup, is attempting to bring down these barriers and make the full cycle from model development to deployment even more straightforward for both data scientists and mobile developers. + id: totrans-254 prefs: [] type: TYPE_NORMAL - en: 'Fritz offers an end-to-end solution for mobile AI development that includes the following noteworthy features:' + id: totrans-255 prefs: [] type: TYPE_NORMAL - en: Ability to deploy models directly to user devices from Keras after training completes using callbacks. + id: totrans-256 prefs: - PREF_UL type: TYPE_NORMAL - en: 'Ability to benchmark a model directly from the computer without having to deploy it on a phone. The following code demonstrates this:' + id: totrans-257 prefs: - PREF_UL type: TYPE_NORMAL - en: '[PRE12]' + id: totrans-258 prefs: - PREF_IND type: TYPE_PRE + zh: '[PRE12]' - en: Ability to encrypt models so that our intellectual property can be protected from theft from a device by nefarious actors. + id: totrans-259 prefs: - PREF_UL type: TYPE_NORMAL @@ -1268,20 +1658,25 @@ of these algorithms, which run at high frame rates. For example, image segmentation, style transfer, object detection, and pose estimation. [Figure 13-24](part0015.html#performance_of_fritz_sdkapostrophes_obje) shows benchmarks of object detection run on various iOS devices. + id: totrans-260 prefs: - PREF_UL type: TYPE_NORMAL - en: '[PRE13]' + id: totrans-261 prefs: - PREF_IND type: TYPE_PRE + zh: '[PRE13]' - en: '![Performance of Fritz SDK’s object detection functionality on different mobile devices, relative to the iPhone X](../images/00163.jpeg)' + id: totrans-262 prefs: - PREF_IND type: TYPE_IMG - en: Figure 13-24\. 
Performance of Fritz SDK’s object detection functionality on different mobile devices, relative to the iPhone X + id: totrans-263 prefs: - PREF_IND - PREF_H6 @@ -1290,10 +1685,12 @@ their Jupyter notebooks. It’s worth noting that this can be a difficult problem (even for professional data scientists) that is simplified significantly because the developer has only to ensure that the data is in the right format. + id: totrans-264 prefs: - PREF_UL type: TYPE_NORMAL - en: Ability to manage all model versions from the command line. + id: totrans-265 prefs: - PREF_UL type: TYPE_NORMAL @@ -1301,15 +1698,18 @@ app called Heartbeat (also available on iOS/Android app stores). Assuming that we have a model ready without much mobile know how, we can clone the app, swap the existing model with our own, and get to see it run on the phone. + id: totrans-266 prefs: - PREF_UL type: TYPE_NORMAL - en: A vibrant community of contributors blogging about the latest in mobile AI on *heartbeat.fritz.ai*. + id: totrans-267 prefs: - PREF_UL type: TYPE_NORMAL - en: A Holistic Look at the Mobile AI App Development Cycle + id: totrans-268 prefs: - PREF_H1 type: TYPE_NORMAL @@ -1318,37 +1718,46 @@ Here, we tie everything together by exploring the kind of questions that come up throughout this life cycle. [Figure 13-25](part0015.html#mobile_ai_app_development_life_cycle) provides a broad overview of the development cycle. + id: totrans-269 prefs: [] type: TYPE_NORMAL - en: '![Mobile AI app development life cycle](../images/00162.jpeg)' + id: totrans-270 prefs: [] type: TYPE_IMG - en: Figure 13-25\. Mobile AI app development life cycle + id: totrans-271 prefs: - PREF_H6 type: TYPE_NORMAL - en: How Do I Collect Initial Data? + id: totrans-272 prefs: - PREF_H2 type: TYPE_NORMAL - en: 'We can apply a few different strategies to accomplish this:' + id: totrans-273 prefs: [] type: TYPE_NORMAL - en: Find the object of interest and manually take photos with different angles, lighting, environments, framing, and so on. + id: totrans-274 prefs: - PREF_UL type: TYPE_NORMAL - en: Scrape from the internet using browser extensions like Fatkun ([Chapter 12](part0014.html#DB7S3-13fa565533764549a6f0ab7f11eed62b)). + id: totrans-275 prefs: - PREF_UL type: TYPE_NORMAL - en: Find existing datasets ([Google Dataset Search](https://oreil.ly/NawZ6)). For example, Food-101 for dishes. + id: totrans-276 prefs: - PREF_UL type: TYPE_NORMAL - en: Synthesize your own dataset. + id: totrans-277 prefs: - PREF_UL type: TYPE_NORMAL @@ -1356,6 +1765,7 @@ a photograph of it. Replace the background with random images to synthesize a large dataset, while zooming, cropping, and rotating to create potentially hundreds of images. + id: totrans-278 prefs: - PREF_IND - PREF_OL @@ -1365,6 +1775,7 @@ to build a robust, diverse dataset. It’s important to note that, in points (a) and (b), there needs to be sufficient diversity in the foreground; otherwise, the network might overlearn that one example instead of understanding the object. + id: totrans-279 prefs: - PREF_IND - PREF_OL @@ -1376,11 +1787,13 @@ that work in this space. We use photo-realistic simulators to train models for the self-driving chapters ([Chapter 16](part0019.html#I3QM3-13fa565533764549a6f0ab7f11eed62b) and [Chapter 17](part0020.html#J2B83-13fa565533764549a6f0ab7f11eed62b)). + id: totrans-280 prefs: - PREF_IND - PREF_OL type: TYPE_NORMAL - en: How Do I Label My Data? 
+ id: totrans-281 prefs: - PREF_H2 type: TYPE_NORMAL @@ -1390,54 +1803,67 @@ yourself is not feasible, socially responsible labeling services such as Digital Data Divide, iMerit, and Samasource, which provide income opportunities to disadvantaged populations, might be a good option. + id: totrans-282 prefs: [] type: TYPE_NORMAL - en: How Do I Train My Model? + id: totrans-283 prefs: - PREF_H2 type: TYPE_NORMAL - en: 'The following are the two broad approaches to train a model:' + id: totrans-284 prefs: [] type: TYPE_NORMAL - en: 'With code: Use Keras and TensorFlow ([Chapter 3](part0005.html#4OIQ3-13fa565533764549a6f0ab7f11eed62b)).' + id: totrans-285 prefs: - PREF_UL type: TYPE_NORMAL - en: 'Without code: Use custom classifier services such as Google’s Auto ML, Microsoft’s CustomVision.ai, and Clarifai (benchmarked in [Chapter 8](part0010.html#9H5K3-13fa565533764549a6f0ab7f11eed62b)), or Apple’s ecosystem-only Create ML ([Chapter 12](part0014.html#DB7S3-13fa565533764549a6f0ab7f11eed62b)).' + id: totrans-286 prefs: - PREF_UL type: TYPE_NORMAL - en: How Do I Convert the Model to a Mobile-Friendly Format? + id: totrans-287 prefs: - PREF_H2 type: TYPE_NORMAL - en: 'The following are a few different ways to convert a model to a mobile-compatible format:' + id: totrans-288 prefs: [] type: TYPE_NORMAL - en: Use Core ML Tools (Apple only). + id: totrans-289 prefs: - PREF_UL type: TYPE_NORMAL - en: Use TensorFlow Lite Converter for iOS and Android. + id: totrans-290 prefs: - PREF_UL type: TYPE_NORMAL - en: Alternatively, use Fritz for the end-to-end pipeline. + id: totrans-291 prefs: - PREF_UL type: TYPE_NORMAL - en: How Do I Make my Model Performant? + id: totrans-292 prefs: - PREF_H2 type: TYPE_NORMAL - en: 'Here are some techniques to make a model performant:' + id: totrans-293 prefs: [] type: TYPE_NORMAL - en: Start with an efficient model such as the MobileNet family, or even better, EfficientNet. + id: totrans-294 prefs: - PREF_UL type: TYPE_NORMAL @@ -1445,15 +1871,18 @@ and inference times, while keeping the model accuracy relatively intact ([Chapter 6](part0008.html#7K4G3-13fa565533764549a6f0ab7f11eed62b), [Chapter 11](part0013.html#CCNA3-13fa565533764549a6f0ab7f11eed62b), and [Chapter 13](part0015.html#E9OE3-13fa565533764549a6f0ab7f11eed62b)). Expect up to a 75% reduction in size with little loss in accuracy. + id: totrans-295 prefs: - PREF_UL type: TYPE_NORMAL - en: Use Core ML Tools for the Apple ecosystem or TensorFlow Model Optimization Kit for both iOS and Android. + id: totrans-296 prefs: - PREF_UL type: TYPE_NORMAL - en: How Do I Build a Great UX for My Users? + id: totrans-297 prefs: - PREF_H2 type: TYPE_NORMAL @@ -1462,6 +1891,7 @@ performance, and resource usage (memory, CPU, battery, etc.). And, of course, an intelligent feedback mechanism that enables effortless data collection and feedback helps. Gamifying this experience would take it to an entirely new level. + id: totrans-298 prefs: [] type: TYPE_NORMAL - en: In the case of the food classifier app, after the user takes a picture, our @@ -1475,50 +1905,62 @@ In the absolute worst case, in which none of the predictions were correct, the user should have a way to manually label the data. Providing an autosuggest feature during manual entry will help keep clean labels in the dataset. + id: totrans-299 prefs: [] type: TYPE_NORMAL - en: An even better experience could be one in which the user never needs to click a picture. Rather, the predictions are available in real time. 
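- en: Pulling together the training, conversion, and efficiency answers above, the
    following is a minimal sketch that fine-tunes a MobileNetV2 feature extractor
    with Keras and exports a TensorFlow Lite model. The data directory, image size,
    and epoch count are illustrative assumptions rather than recommendations.
  prefs: []
  type: TYPE_NORMAL
- en: |-
    import tensorflow as tf

    # Assumed layout: data/train/<label_name>/*.jpg (hypothetical paths)
    train_ds = tf.keras.preprocessing.image_dataset_from_directory(
        "data/train", image_size=(224, 224), batch_size=32)
    num_classes = len(train_ds.class_names)

    # Scale pixels the way MobileNetV2 expects, then reuse its ImageNet features
    preprocess = tf.keras.applications.mobilenet_v2.preprocess_input
    train_ds = train_ds.map(lambda images, labels: (preprocess(images), labels))

    base = tf.keras.applications.MobileNetV2(
        include_top=False, pooling="avg", input_shape=(224, 224, 3))
    base.trainable = False  # train only the classification head

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, epochs=5)

    # Convert to a mobile-friendly format for TensorFlow Lite
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    with open("classifier.tflite", "wb") as f:
        f.write(converter.convert())
  prefs: []
  type: TYPE_PRE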
+ id: totrans-300 prefs: [] type: TYPE_NORMAL - en: How Do I Make the Model Available to My Users? + id: totrans-301 prefs: - PREF_H2 type: TYPE_NORMAL - en: 'The following are some ways to deploy models to users:' + id: totrans-302 prefs: [] type: TYPE_NORMAL - en: Bundle the model into the app binary and release it on the app stores. + id: totrans-303 prefs: - PREF_UL type: TYPE_NORMAL - en: Alternatively, host the model in the cloud and have the app download the model, as necessary. + id: totrans-304 prefs: - PREF_UL type: TYPE_NORMAL - en: Use a model management service like Fritz or Firebase (integrated with ML Kit). + id: totrans-305 prefs: - PREF_UL type: TYPE_NORMAL - en: How Do I Measure the Success of My Model? + id: totrans-306 prefs: - PREF_H2 type: TYPE_NORMAL - en: 'The very first step is to determine the success criteria. Consider the following examples:' + id: totrans-307 prefs: [] type: TYPE_NORMAL - en: “My model should run inferences in under 200 ms at the 90th percentile.” + id: totrans-308 prefs: - PREF_UL type: TYPE_NORMAL - en: “Users who use this model open the app every day.” + id: totrans-309 prefs: - PREF_UL type: TYPE_NORMAL - en: In the food classifier app, a success metric could be something along the lines of “80% of users selected the first prediction in the list of predictions.” + id: totrans-310 prefs: - PREF_UL type: TYPE_NORMAL @@ -1526,63 +1968,77 @@ time. It’s very essential to be data driven. When you have a new model version, run A/B tests on a subset of users and evaluate the success criteria against that version to determine whether it’s an improvement over the previous version. + id: totrans-311 prefs: [] type: TYPE_NORMAL - en: How Do I Improve My Model? + id: totrans-312 prefs: - PREF_H2 type: TYPE_NORMAL - en: 'Here are some ways to improve the quality of our models:' + id: totrans-313 prefs: [] type: TYPE_NORMAL - en: 'Collect feedback on individual predictions from users: what was correct, and, more important, what was incorrect. Feed these images along with the corresponding labels as input for the next model training cycle. [Figure 13-26](part0015.html#the_feedback_cycle_of_an_incorrect_predi) illustrates this.' + id: totrans-314 prefs: - PREF_UL type: TYPE_NORMAL - en: For users who have explicitly opted in, collect frames automatically whenever the prediction confidence is low. Manually label those frames and feed them into the next training cycle. + id: totrans-315 prefs: - PREF_UL type: TYPE_NORMAL - en: '![The feedback cycle of an incorrect prediction, generating more training data, leading to an improved model](../images/00073.jpeg)' + id: totrans-316 prefs: [] type: TYPE_IMG - en: Figure 13-26\. The feedback cycle of an incorrect prediction, generating more training data, leading to an improved model + id: totrans-317 prefs: - PREF_H6 type: TYPE_NORMAL - en: How Do I Update the Model on My Users’ Phones? + id: totrans-318 prefs: - PREF_H2 type: TYPE_NORMAL - en: 'Here are some ways to update the model on your users’ phones:' + id: totrans-319 prefs: [] type: TYPE_NORMAL - en: Bundle the new model into the next app release. This is slow and inflexible. + id: totrans-320 prefs: - PREF_UL type: TYPE_NORMAL - en: Host the new model on the cloud and force the apps out in the world to download the new model. Prefer to do this when the user is on WiFi. + id: totrans-321 prefs: - PREF_UL type: TYPE_NORMAL - en: Use a model management system such as Firebase (in conjunction with ML Kit) or Fritz to automate a lot of the grunt work involved. 
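- en: Returning to the success criteria above, the following toy sketch shows how
    such metrics might be computed from logged app events and compared across two
    model variants in an A/B test. The event log, field layout, and thresholds are
    purely illustrative.
  prefs: []
  type: TYPE_NORMAL
- en: |-
    import numpy as np

    # Hypothetical log entries: (model_variant, latency_ms, rank_the_user_picked)
    events = [
        ("A", 180, 1), ("A", 150, 2), ("A", 210, 1), ("A", 120, 1),
        ("B", 140, 1), ("B", 130, 1), ("B", 190, 3), ("B", 110, 1),
    ]

    for variant in ("A", "B"):
        latencies = np.array([e[1] for e in events if e[0] == variant])
        picks = np.array([e[2] for e in events if e[0] == variant])
        p90 = np.percentile(latencies, 90)   # target: under 200 ms
        top1_rate = np.mean(picks == 1)      # target: 80% pick the top result
        print(f"Variant {variant}: p90 latency {p90:.0f} ms, "
              f"top prediction picked {top1_rate:.0%} of the time")
  prefs: []
  type: TYPE_PRE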
+ id: totrans-322 prefs: - PREF_UL type: TYPE_NORMAL - en: With all of these questions answered, let’s appreciate the beauty of how this app improves on its own. + id: totrans-323 prefs: [] type: TYPE_NORMAL - en: The Self-Evolving Model + id: totrans-324 prefs: - PREF_H1 type: TYPE_NORMAL @@ -1590,17 +2046,21 @@ done. Just plug the model into an app and we’re ready to go. For custom-trained models that rely especially on scarce training data, we can get the user involved in building a self-improving, ever-evolving model. + id: totrans-325 prefs: [] type: TYPE_NORMAL - en: At the most fundamental level, each time a user uses the app, they’re providing the necessary feedback (image + label) to improve the model further, as illustrated in [Figure 13-27](part0015.html#the_self-evolving_model_cycle). + id: totrans-326 prefs: [] type: TYPE_NORMAL - en: '![The self-evolving model cycle](../images/00033.jpeg)' + id: totrans-327 prefs: [] type: TYPE_IMG - en: Figure 13-27\. The self-evolving model cycle + id: totrans-328 prefs: - PREF_H6 type: TYPE_NORMAL @@ -1608,17 +2068,21 @@ to step into the real world, a model needs to go through phases of development before it reaches its final users. The following is a popular model of phasing releases in software, which is also a useful guideline for releasing AI models. + id: totrans-329 prefs: [] type: TYPE_NORMAL - en: 1\. Dev + id: totrans-330 prefs: [] type: TYPE_NORMAL - en: This is the initial phase of development in which the developers of the app are the only users of it. Data accumulation is slow, the experience is very buggy, and model predictions can be quite unreliable. + id: totrans-331 prefs: [] type: TYPE_NORMAL - en: 2\. Alpha + id: totrans-332 prefs: [] type: TYPE_NORMAL - en: After the app is ready to be tested by a few users beyond the developers, it @@ -1627,9 +2091,11 @@ scale to the pipeline. The experience is not nearly as buggy, and the model predictions are a little more reliable here. In some organizations, this phase is also known as *dogfooding* (i.e., internal application testing by employees). + id: totrans-333 prefs: [] type: TYPE_NORMAL - en: 3\. Beta + id: totrans-334 prefs: [] type: TYPE_NORMAL - en: An app in beta has significantly more users than in alpha. User feedback is @@ -1639,9 +2105,11 @@ are a lot more reliable, thanks to the alpha users. Beta apps tend to be hosted on Apple’s TestFlight for iOS and Google Play Console for Android. Third-party services such as HockeyApp and TestFairy are also popular for hosting beta programs. + id: totrans-335 prefs: [] type: TYPE_NORMAL - en: 4\. Prod + id: totrans-336 prefs: [] type: TYPE_NORMAL - en: An app in production is stable and is widely available. Although the data is @@ -1650,27 +2118,33 @@ better at edge cases it might not have been seen before in the first three phases. When the model is mature enough, small version improvements can be alpha/beta tested or A/B tested on subsets of the production audience. + id: totrans-337 prefs: [] type: TYPE_NORMAL - en: Although many data scientists assume the data to be available before starting development, people in the mobile AI world might need to bootstrap on a small dataset and improve their system incrementally. + id: totrans-338 prefs: [] type: TYPE_NORMAL - en: With this self-evolving system for the Shazam for Food app, the most difficult choice that the developers must make should be the restaurants they visit to collect their seed data. 
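- en: To make the feedback loop in Figure 13-27 concrete, the sketch below runs inference
    with the TensorFlow Lite interpreter and sets aside low-confidence frames for
    later manual labeling. The model path, the to_label directory, and the 0.6 confidence
    threshold are assumptions for illustration only.
  prefs: []
  type: TYPE_NORMAL
- en: |-
    import numpy as np
    import tensorflow as tf

    # Assumes classifier.tflite exists and a to_label/ directory has been created
    interpreter = tf.lite.Interpreter(model_path="classifier.tflite")
    interpreter.allocate_tensors()
    input_detail = interpreter.get_input_details()[0]
    output_detail = interpreter.get_output_details()[0]

    def classify_and_maybe_collect(frame, frame_id, threshold=0.6):
        # frame: float32 array already resized to the model's input shape
        interpreter.set_tensor(input_detail["index"], frame[np.newaxis, ...])
        interpreter.invoke()
        probs = interpreter.get_tensor(output_detail["index"])[0]
        if probs.max() < threshold:
            # Low confidence: queue this frame for the next training cycle
            np.save(f"to_label/frame_{frame_id}.npy", frame)
        return int(probs.argmax()), float(probs.max())
  prefs: []
  type: TYPE_PRE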
+ id: totrans-339 prefs: [] type: TYPE_NORMAL - en: Case Studies + id: totrans-340 prefs: - PREF_H1 type: TYPE_NORMAL - en: Let’s look at a few interesting examples that show how what we have learned so far is applied in the industry. + id: totrans-341 prefs: [] type: TYPE_NORMAL - en: Lose It! + id: totrans-342 prefs: - PREF_H2 type: TYPE_NORMAL @@ -1681,6 +2155,7 @@ they eat. Users can point their phone’s camera at barcodes, nutritional labels, and the food that they’re just about to eat to track calories and macros they’re consuming for every single meal. + id: totrans-343 prefs: [] type: TYPE_NORMAL - en: The company first implemented its food scanning system with a cloud-based algorithm. @@ -1689,18 +2164,22 @@ Lite to optimize them for mobile and used ML Kit to seamlessly deploy to its millions of users. With a constant feedback loop, model updates, and A/B testing, the app keeps improving essentially on its own. + id: totrans-344 prefs: [] type: TYPE_NORMAL - en: '![Snap It feature from Lose It! showing multiple suggestions for a scanned food item](../images/00321.jpeg)' + id: totrans-345 prefs: [] type: TYPE_IMG - en: Figure 13-28\. Snap It feature from Lose It! showing multiple suggestions for a scanned food item + id: totrans-346 prefs: - PREF_H6 type: TYPE_NORMAL - en: Portrait Mode on Pixel 3 Phones + id: totrans-347 prefs: - PREF_H2 type: TYPE_NORMAL @@ -1710,6 +2189,7 @@ name.) The closer the camera is located to the subject, the more the background appears blurred. With the help of professional lenses with a low *f*-stop number (which often run into the thousands of dollars), one can produce spectacular blurs. + id: totrans-348 prefs: [] type: TYPE_NORMAL - en: But if you don’t have that kind of money, AI can help. Google’s Pixel 3 offers @@ -1717,11 +2197,13 @@ estimate the depth of each pixel in a scene, and determine which pixels belong to the foreground and the background. The background pixels are then blurred to varying intensities to create the bokeh effect, as demonstrated in [Figure 13-29](part0015.html#portrait_effect_on_pixel_3_which_achieve). + id: totrans-349 prefs: [] type: TYPE_NORMAL - en: Depth estimation is a compute-intensive task so running it fast is essential. Tensorflow Lite with a GPU backend comes to the rescue, with a roughly 3.5 times speedup over a CPU backend. + id: totrans-350 prefs: [] type: TYPE_NORMAL - en: Google accomplished this by training a neural network that specializes in depth @@ -1730,18 +2212,22 @@ to each camera. It then used the *parallax effect* experienced by each pixel to precisely determine the depth of each pixel. This depth map was then fed into the CNN along with the images. + id: totrans-351 prefs: [] type: TYPE_NORMAL - en: '![Portrait effect on Pixel 3, which achieves separation between foreground and background using blurring](../images/00003.jpeg)' + id: totrans-352 prefs: [] type: TYPE_IMG - en: Figure 13-29\. Portrait effect on Pixel 3, which achieves separation between foreground and background using blurring + id: totrans-353 prefs: - PREF_H6 type: TYPE_NORMAL - en: Speaker Recognition by Alibaba + id: totrans-354 prefs: - PREF_H2 type: TYPE_NORMAL @@ -1750,6 +2236,7 @@ be terrifying if this worked. Obviously, most mobile operating systems today ask us to unlock the phone first before proceeding. This creates a not-so-smooth experience for the owner of the phone. + id: totrans-355 prefs: [] type: TYPE_NORMAL - en: 'Alibaba Machine Intelligence Labs solved this problem using the following approach. 
@@ -1757,6 +2244,7 @@ with mel-frequency cepstrum algorithm), training a CNN (using transfer learning), and finally deploying it to devices using Tensorflow Lite. That’s right: CNNs can do more than just computer vision!' + id: totrans-356 prefs: [] type: TYPE_NORMAL - en: By recognizing speakers, they are able to personalize the content on devices @@ -1766,9 +2254,11 @@ processing faster with Tensorflow Lite, engineers kept the `USE_NEON` flag to accelerate instruction sets for ARM-based CPUs. The team reported a speed increase of four times with this optimization. + id: totrans-357 prefs: [] type: TYPE_NORMAL - en: Face Contours in ML Kit + id: totrans-358 prefs: - PREF_H2 type: TYPE_NORMAL @@ -1778,6 +2268,7 @@ in the input image. In total, 133 points map to the various facial contours, including 16 points representing each eye, while 36 points map to the oval shape around the face, as demonstrated in [Figure 13-30](part0015.html#onethreethree_face_contour_points_identi). + id: totrans-359 prefs: [] type: TYPE_NORMAL - en: Internally, ML Kit is running a deep learning model using TensorFlow Lite. In @@ -1785,16 +2276,20 @@ in speed of four times on Pixel 3 and Samsung Galaxy S9, and a six-times speedup on iPhone 7, compared to the previous CPU backend. The effect of this is that we can precisely place a hat or sunglasses on a face in real time. + id: totrans-360 prefs: [] type: TYPE_NORMAL - en: '![133 Face contour points identified by ML Kit (image source)](../images/00119.jpeg)' + id: totrans-361 prefs: [] type: TYPE_IMG - en: Figure 13-30\. 133 Face contour points identified by ML Kit ([image source](https://oreil.ly/8PMGa)) + id: totrans-362 prefs: - PREF_H6 type: TYPE_NORMAL - en: Real-Time Video Segmentation in YouTube Stories + id: totrans-363 prefs: - PREF_H2 type: TYPE_NORMAL @@ -1805,6 +2300,7 @@ needs but might not have the budget. And now they have a solution within the YouTube Stories app—a real-time video segmentation option implemented over TensorFlow Lite. + id: totrans-364 prefs: [] type: TYPE_NORMAL - en: The key requirements here are first, to run semantic segmentation fast (more @@ -1816,16 +2312,19 @@ Because we traditionally work on three channels (RGB), the trick is to add a fourth channel, which essentially is the output of the previous frame, as illustrated in [Figure 13-31](part0015.html#an_input_image_left_parenthesisleftright). + id: totrans-365 prefs: [] type: TYPE_NORMAL - en: '![An input image (left) is broken down into its three components layers (R, G, B). The output mask of the previous frame is then appended with these components (image source)](../images/00202.jpeg)' + id: totrans-366 prefs: [] type: TYPE_IMG - en: Figure 13-31\. An input image (left) is broken down into its three components layers (R, G, B). The output mask of the previous frame is then appended with these components ([image source](https://oreil.ly/rHNH5)) + id: totrans-367 prefs: - PREF_H6 type: TYPE_NORMAL @@ -1836,9 +2335,11 @@ observed when choosing the GPU backend over the CPU backend (much more acceleration compared to the usual two to seven times for other semantic segmentation tasks for images only). + id: totrans-368 prefs: [] type: TYPE_NORMAL - en: Summary + id: totrans-369 prefs: - PREF_H1 type: TYPE_NORMAL @@ -1849,9 +2350,11 @@ convert our TensorFlow models to TensorFlow Lite format so that they could be used within an Android device. We finally discussed a couple of examples of how TensorFlow Lite is being used to solve real-world problems. 
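- en: Before moving on, one detail from the video segmentation case study is worth
    making concrete. The sketch below shows the four-channel input described above,
    with the mask predicted for the previous frame appended to the RGB channels; it
    is an illustrative reconstruction, not the implementation used in YouTube Stories.
  prefs: []
  type: TYPE_NORMAL
- en: |-
    import numpy as np

    def build_segmentation_input(rgb_frame, previous_mask):
        # rgb_frame: (H, W, 3) float32; previous_mask: (H, W) float32 in [0, 1].
        # For the very first frame, a zero mask can be used.
        return np.concatenate([rgb_frame, previous_mask[..., np.newaxis]], axis=-1)

    frame = np.random.rand(256, 256, 3).astype(np.float32)  # stand-in camera frame
    prev_mask = np.zeros((256, 256), dtype=np.float32)       # no prior prediction yet
    model_input = build_segmentation_input(frame, prev_mask)
    print(model_input.shape)  # (256, 256, 4)
  prefs: []
  type: TYPE_PRE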
+ id: totrans-370 prefs: [] type: TYPE_NORMAL - en: In the next chapter, we look at how we can use real-time deep learning to develop an interactive, real-world application. + id: totrans-371 prefs: [] type: TYPE_NORMAL