From c646eac7b8905e572a8ebfa1fa216e5b38b7a064 Mon Sep 17 00:00:00 2001
From: Cilla-luodan <85477033+Cilla-luodan@users.noreply.github.com>
Date: Wed, 24 Jan 2024 16:49:24 +0800
Subject: [PATCH 1/3] New translations

---
 en-US/dita/RTC-NG/release/release_notes.dita | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/en-US/dita/RTC-NG/release/release_notes.dita b/en-US/dita/RTC-NG/release/release_notes.dita
index cc683ecec07..72a951979d7 100644
--- a/en-US/dita/RTC-NG/release/release_notes.dita
+++ b/en-US/dita/RTC-NG/release/release_notes.dita
@@ -221,7 +221,7 @@

-  • Query Device Score
-
-    This release adds the method to query the device's score level to ensure that the user-set parameters do not exceed the device's capabilities. For example, in HD or UHD video scenarios, you can first call this method to query the device's score. If the returned score is low (for example, below 60), you need to lower the video resolution to avoid affecting the video experience. The minimum device score required for different business scenarios varies. For specific score recommendations, please .
+  • Query Device Score
+
+    This release adds the method to query the device's score level to ensure that the user-set parameters do not exceed the device's capabilities. For example, in HD or UHD video scenarios, you can first call this method to query the device's score. If the returned score is low (for example, below 60), you need to lower the video resolution to avoid affecting the video experience. The minimum device score required for different business scenarios varies. For specific score recommendations, please contact .

  • Select different audio tracks for local playback and streaming

    This release introduces the method that allows you to select different audio tracks for local playback and streaming to remote users. For example, in scenarios like online karaoke, the host can choose to play the original sound locally and publish the accompaniment in the channel. Before using this function, you need to open the media file through the method and enable this function by setting the enableMultiAudioTrack parameter in .

From 4080f953cdd53d0b645175b193da3f62dc7e9e97 Mon Sep 17 00:00:00 2001
From: Cilla-luodan <85477033+Cilla-luodan@users.noreply.github.com>
Date: Wed, 31 Jan 2024 16:56:48 +0800
Subject: [PATCH 2/3] New translations

From 68a2924e06418cc2e52b51fa3e6269797a83e977 Mon Sep 17 00:00:00 2001
From: Cilla-luodan <85477033+Cilla-luodan@users.noreply.github.com>
Date: Thu, 11 Apr 2024 14:20:59 +0800
Subject: [PATCH 3/3] New translations

---
 en-US/dita/RTC-NG/release/release_notes.dita | 527 +++++--------------
 1 file changed, 142 insertions(+), 385 deletions(-)

diff --git a/en-US/dita/RTC-NG/release/release_notes.dita b/en-US/dita/RTC-NG/release/release_notes.dita
index 72a951979d7..0da2c48feb8 100644
--- a/en-US/dita/RTC-NG/release/release_notes.dita
+++ b/en-US/dita/RTC-NG/release/release_notes.dita
@@ -1,246 +1,75 @@
- v4.3.0
+ v4.3.1

    v4.3.0 was released on xx xx, 2024.

- <topic id="compatibility">
+ <topic id="compatibility" props="android">
  <title>Compatibility changes

    This release optimizes the implementation of some features, which involves renaming or deleting some APIs. To ensure your project continues to run properly, update the code in your app after upgrading to this release.

    1. Raw video data callback behaviour change (Android)

      As of this release, the callback processing related to raw video data changes from the previous fixed single thread to a random thread, meaning that callback processing can occur on different threads. Due to limitations in the Android system, OpenGL must be tightly bound to a specific thread. Therefore, Agora suggests that you make one of the following modifications to your code:

      • (Recommended) Use the TextureBufferHelper class to create a dedicated OpenGL thread for video pre-processing or post-processing (for example, image enhancement, stickers, etc.), as sketched below.
      • Use the eglMakeCurrent method to associate the necessary OpenGL context for each video frame with the current thread.
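
      The following is a minimal Java sketch of the dedicated-thread approach: all OpenGL work for pre- or post-processing is funneled onto one long-lived HandlerThread, so the raw video data callbacks (which may now arrive on different threads) never touch the GL context directly. The class is illustrative only; creating and binding the EGL context is left as a comment, and the SDK's TextureBufferHelper wraps a similar pattern.

        import android.os.Handler;
        import android.os.HandlerThread;

        // Illustrative helper, not an SDK class: serializes GL work onto one thread.
        public final class GlWorkQueue {
            private final HandlerThread glThread = new HandlerThread("video-gl");
            private final Handler glHandler;

            public GlWorkQueue() {
                glThread.start();
                glHandler = new Handler(glThread.getLooper());
                // Assumption: create and bind your EGL context here
                // (eglCreateContext + eglMakeCurrent) so it stays tied to glThread.
            }

            // Call this from onCaptureVideoFrame or other raw video data callbacks.
            public void post(Runnable glTask) {
                glHandler.post(glTask);
            }

            public void release() {
                glThread.quitSafely();
            }
        }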

    2. Renaming parameters in callbacks

      In order to make the parameters in some callbacks and the naming of enumerations in enumeration classes easier to understand, the following modifications have been made in this release. Please modify the parameter settings in the callbacks after upgrading to this release.

        Callback | Original parameter name | Existing parameter name
        ---------|-------------------------|------------------------
                 | error                   | reason
                 | error                   | reason
                 | error                   | reason
                 | errorCode               | reason
                 | status                  | state
                 | errorCode               | reason
                 | errorCode               | reason
                 | errorCode               | reason
                 | errorCode               | reason
                 | error                   | reason
                 | ec                      | reason
                 | errCode                 | reason

        Original enumeration class | Current enumeration class

      For specific renaming of enumerations, please refer to .

    3. Channel media relay

      To improve interface usability, this release removes some methods and callbacks for channel media relay. Use the alternative options listed in the table below:

        Deleted methods and callbacks | Alternative methods and callbacks

    4. Custom video source

      Since this release, pushExternalVideoFrameEx[1/2] and pushExternalVideoFrameEx[2/2] are renamed to and , and are migrated from to .


      Since this release, pushExternalVideoFrame[1/2] is migrated from to .

    5. Audio route

      Since this release, RouteBluetooth is renamed to , representing a Bluetooth device using the HFP protocol. The (10) is added to represent a Bluetooth device using the A2DP protocol.

      Since this release, ROUTE_BLUETOOTH in is renamed to , representing a Bluetooth device using the HFP protocol. (10) is added to represent a Bluetooth device using the A2DP protocol.

      Starting with this release, AgoraAudioOutputRoutingBluetooth in is renamed to , representing a Bluetooth device using the HFP protocol. (10) is added to represent the audio route to a Bluetooth device using the A2DP protocol.

      Starting with this release, routeBluetooth in is renamed to , representing a Bluetooth device using the HFP protocol. (10) is added to represent a Bluetooth device using the A2DP protocol.

      Starting with this release, RouteBluetooth in is renamed to , representing a Bluetooth device using the HFP protocol. (10) is added to represent a Bluetooth device using the A2DP protocol.

    6. The state of the remote video

      To make the name of the enumeration easier to understand, this release changes the name of the enumeration from REMOTE_VIDEO_STATE_PLAYING to , while the meaning of the enumeration remains unchanged.

    7. Reasons for local video state changes

      This release makes the following modifications to the enumerations in the class:

      • The value of (formerly ) has been changed from 23 to 28. (Windows)
      • The value of (formerly ) has been changed from 24 to 29. (Windows)
      • The enumeration has been changed to .

    8. Audio loopback capturing (Windows, macOS)

      • Before v4.3.0, if you call the method to disable the audio module, audio loopback capturing will not be disabled.
      • As of v4.3.0, if you call the method to disable the audio module, audio loopback capturing will be disabled as well. If you need to enable audio loopback capturing, you need to enable the audio module by calling the method and then call .

    9. To ensure consistency in parameter naming, this release changes channelName in to channelId, and optionalUid to uid. To ensure the normal operation of your project, you need to update the code in your app after upgrading to this release.

    New features
    1. Custom mixed video layout on receiving end (Android, iOS)

      To facilitate customized layout of mixed video stream at the receiver end, this release introduces the callback. When the receiver receives the channel's mixed video stream sent by the video mixing server, this callback is triggered, reporting the layout information of the mixed video stream and the layout information of each sub-video stream in the mixed stream. The receiver can set a separate view for rendering the sub-video stream (distinguished by subviewUid) in the mixed video stream when calling the method, achieving a custom video layout effect.

      When the layout of the sub-video streams in the mixed video stream changes, this callback is also triggered to report the latest layout information in real time.

      Through this feature, the receiver end can flexibly adjust the local view layout. When applied in a multi-person video scenario, the receiving end only needs to receive and decode a mixed video stream, which can effectively reduce the CPU usage and network bandwidth when decoding multiple video streams on the receiving end.

      For best practices regarding this feature in multi-host scenarios, see .
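
      A hedged Java sketch of the receiving-end flow described above; subviewUid is taken from this note and is assumed to be a field of VideoCanvas, and the surrounding layout callback is hypothetical, so verify the exact names against the API reference.

        // Inside your mixed-stream layout callback, once you know the uid of a
        // sub-video stream contained in the mixed stream (variables are assumptions):
        SurfaceView subView = new SurfaceView(context);      // one view per sub-stream
        VideoCanvas canvas =
                new VideoCanvas(subView, VideoCanvas.RENDER_MODE_HIDDEN, mixedStreamUid);
        canvas.subviewUid = subStreamUid;                    // assumption: identifies the sub-stream to render
        engine.setupRemoteVideo(canvas);                     // engine is an initialized RtcEngine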

    2. Local preview with multiple views

      This release supports local preview with simultaneous display of multiple frames, where the videos shown in the frames are positioned at different observation positions along the video link. Examples of usage are as follows:

        1. Call to set the first view: Set the position parameter to (introduced in this release) in . This corresponds to the position after local video capture and before preprocessing. The video observed here does not have preprocessing effects.
        2. Call to set the second view: Set the position parameter to in . The video observed here has the effect of video preprocessing.
        3. Observe the local preview effect: The first view is the original video of a real person; the second view is the virtual portrait after video preprocessing effects (including image enhancement, virtual background, and local preview of watermarks).

        1. Create the first view: Set the position parameter to (introduced in this release) in . This corresponds to the position after local video capture and before preprocessing. The video observed here does not have preprocessing effects.
        2. Create the second view: Set the position parameter to in . The video observed here has the effect of video preprocessing.
        3. Observe the local preview effect: The first view is the original video of a real person; the second view is the virtual portrait after video preprocessing effects (including image enhancement, virtual background, and local preview of watermarks).
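
      A minimal Java sketch of the two-view preview under the assumptions above; the position field and the VIDEO_MODULE_POSITION_POST_CAPTURER_ORIGIN value follow this description and should be checked against the API reference.

        // First view: raw camera image, observed before any preprocessing.
        VideoCanvas originCanvas = new VideoCanvas(rawPreviewView, VideoCanvas.RENDER_MODE_HIDDEN, 0);
        originCanvas.position = VideoModulePosition.VIDEO_MODULE_POSITION_POST_CAPTURER_ORIGIN;
        engine.setupLocalVideo(originCanvas);

        // Second view: observed after preprocessing (image enhancement, virtual background, watermark).
        VideoCanvas processedCanvas = new VideoCanvas(processedPreviewView, VideoCanvas.RENDER_MODE_HIDDEN, 0);
        processedCanvas.position = VideoModulePosition.VIDEO_MODULE_POSITION_POST_CAPTURER;
        engine.setupLocalVideo(processedCanvas);

        engine.startPreview();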

    3. Query Device Score

      This release adds the method to query the device's score level to ensure that the user-set parameters do not exceed the device's capabilities. For example, in HD or UHD video scenarios, you can first call this method to query the device's score. If the returned score is low (for example, below 60), you need to lower the video resolution to avoid affecting the video experience. The minimum device score required for different business scenarios varies. For specific score recommendations, please contact .
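
      A short Java sketch of the intended flow; queryDeviceScore as an RtcEngine method is an assumption based on this description, while the encoder-configuration calls are standard.

        int score = engine.queryDeviceScore();   // assumed method name; see the API reference
        if (score < 60) {
            // Lower-end device: fall back to a more conservative encoder configuration.
            engine.setVideoEncoderConfiguration(new VideoEncoderConfiguration(
                    VideoEncoderConfiguration.VD_1280x720,
                    VideoEncoderConfiguration.FRAME_RATE.FRAME_RATE_FPS_15,
                    VideoEncoderConfiguration.STANDARD_BITRATE,
                    VideoEncoderConfiguration.ORIENTATION_MODE.ORIENTATION_MODE_ADAPTIVE));
        }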

    4. Select different audio tracks for local playback and streaming

      This release introduces the method that allows you to select different audio tracks for local playback and streaming to remote users. For example, in scenarios like online karaoke, the host can choose to play the original sound locally and publish the accompaniment in the channel. Before using this function, you need to open the media file through the method and enable this function by setting the enableMultiAudioTrack parameter in .
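
      A hedged Java sketch of the karaoke case: the original track is played locally while the accompaniment is published. The method and member names (openWithMediaSource, enableMultiAudioTrack, selectMultiAudioTrack) follow this description and are assumptions to verify against the media player API reference.

        MediaPlayerSource source = new MediaPlayerSource();
        source.setUrl("/path/to/song.mp4");
        source.setEnableMultiAudioTrack(true);     // enableMultiAudioTrack, as described above (assumed setter)
        mediaPlayer.openWithMediaSource(source);   // mediaPlayer is a created IMediaPlayer instance

        // Track 0 (original vocal) for local playout, track 1 (accompaniment) for publishing.
        mediaPlayer.selectMultiAudioTrack(0, 1);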

    5. Audio playback device test

      This release introduces the method to allow you to test whether your local audio playback device works properly. You can specify the audio file to be played through the testAudioFilePath parameter and check whether your audio device works properly. After the test is completed, you need to call the newly added method to stop the test.

    6. Device test for audio capturing and playback

      This release introduces the following methods to test whether the audio capturing or playback devices work properly before joining a channel:

      • : Tests whether the local audio capturing device, such as the microphone, is working properly. After calling this method, the SDK triggers a callback at the time interval set in this method, which reports uid = 0 and the volume information of the capturing device. After the test is completed, you need to call the newly added method to stop the test.
      • : Tests whether the local audio playback device is working properly. You can specify the audio file to be played through the testAudioFilePath parameter and check whether your audio device works properly. After the test is completed, you need to call the newly added method to stop the test.

      • Voice-driven avatar

        The SDK adds a speech-driven extension that converts voice information into facial information with matching mouth shapes, which can be used to drive a virtual human's face so that its mouth movements match the speech. You can obtain the facial information processed by the speech-driven extension through the method and the callback added in this release. The facial information is Blend Shape (BS) data that conforms to the ARKit standard, and you can further process the BS data with a third-party 3D rendering engine.

        The speech-driven extension is a trimmable dynamic library. For details such as the increase in app size after integration, see .

          • The Agora SDK extension MetaKit helps you simplify the implementation of speech-driven avatars, so you do not need to build your own framework for capture, encoding, and transmission. For an introduction to MetaKit and integration guidance, see .
          • The speech-driven avatar feature is in beta. To use it, please .

      • Privacy manifest file (iOS)

        To meet Apple's security and compliance requirements for app publication, the SDK adds the privacy manifest file PrivacyInfo.xcprivacy as of this release, which describes the API calls in the SDK that need to access or use user data, as well as the types of data collected by the SDK.

        If you need to publish an app that integrates an SDK version earlier than v4.3.1 to the Apple App Store, you need to manually add the PrivacyInfo.xcprivacy file to your Xcode project. For details, see .

      • Portrait center stage (iOS, macOS)

        To improve the presentation effect of hosts in scenarios such as online meetings, live shows, and online education, this release adds the method to enable the portrait center stage feature. This feature keeps the host at the center of the frame whether or not they move, for a better presentation effect.

        Before enabling portrait center stage, it is recommended that you call to check whether the current device's performance supports this feature. For the list of supported devices, see the API documentation.

      • Camera stabilization (iOS)

        To improve video stability in scenarios such as shooting while moving, shooting in low-light environments, and handheld shooting, this release adds the camera stabilization feature. You can call to enable this feature and set an appropriate stabilization mode to obtain more stable and clearer video images.
      • Improved camera capture effect (Android, iOS)

        To let users capture a wider field of view and more complete picture content, this release adds support for wide-angle and ultra-wide-angle cameras. You can first call to query the focal length capabilities supported by the device, and then call with cameraFocalLengthType set to a supported focal length type, such as wide-angle or ultra-wide-angle.
      • Multi-camera capture (Android)

        This release adds the following features for camera capture on Android:

        1. Support for capturing and publishing video streams from the third and fourth cameras:
          • The (11) and (12) enumerations are added in , which are used to specify the third and fourth cameras as the video source, respectively. This change allows you to specify up to four camera-captured video streams when calling to start camera capture.
          • The (11) and (12) enumerations in add support for Android, which are used to specify the third and fourth cameras as the video source, respectively. This change allows you to specify up to four camera-captured video streams when calling to start camera capture.
          • The publishThirdCameraTrack and publishFourthCameraTrack members are added in . When calling to join a channel, set these two members to to publish the video streams captured by the third and fourth cameras in the channel.
          • The publishThirdCameraTrack and publishFourthCameraTrack members in add support for Android. When calling to join a channel, set these two members to to publish the video streams captured by the third and fourth cameras in the channel.

        2. Support for specifying a camera by camera ID:
          • The cameraId member is added in . For devices with multiple cameras, cameraDirection cannot identify or access all available cameras. In this case, you can obtain the camera ID through the native Android system APIs and then call with cameraId to specify the camera you need.
          • Adds , which supports switching cameras by cameraId and allows the app to dynamically adjust camera usage at runtime based on the cameras actually available.

          The cameraId member is added in . For devices with multiple cameras, cameraDirection cannot identify or access all available cameras. In this case, you can obtain the camera ID through the system APIs and then call with cameraId to specify the camera.
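
        A hedged Java sketch of capturing and publishing a third camera stream; VIDEO_SOURCE_CAMERA_THIRD, cameraId, and publishThirdCameraTrack follow the description above, and the exact names and signatures should be verified against the API reference.

          CameraCapturerConfiguration config =
                  new CameraCapturerConfiguration(CameraCapturerConfiguration.CAMERA_DIRECTION.CAMERA_FRONT);
          config.cameraId = "2";   // camera ID obtained via the Android camera APIs (assumption)

          engine.startCameraCapture(Constants.VideoSourceType.VIDEO_SOURCE_CAMERA_THIRD, config);

          ChannelMediaOptions options = new ChannelMediaOptions();
          options.clientRoleType = Constants.CLIENT_ROLE_BROADCASTER;
          options.publishThirdCameraTrack = true;  // publish the third camera's video track (new member)
          engine.joinChannel(token, channelName, 0, options);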

      • Data stream encryption

        This release adds the datastreamEncryptionEnabled member in for enabling data stream encryption. You can set it when calling to enable encryption. If a failure occurs during data stream encryption or decryption, you can learn about it through the newly added enumerations.
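
        Assumed usage of the new datastreamEncryptionEnabled member together with the existing enableEncryption call (a sketch; verify the member names against the API reference):

          EncryptionConfig config = new EncryptionConfig();
          config.encryptionMode = EncryptionConfig.EncryptionMode.AES_128_GCM2;
          config.encryptionKey = encryptionKey;                  // key distributed by your own server
          System.arraycopy(encryptionSalt, 0, config.encryptionKdfSalt, 0, config.encryptionKdfSalt.length);
          config.datastreamEncryptionEnabled = true;             // new member: also encrypt data streams
          int result = engine.enableEncryption(true, config);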

      • Local video rendering

        This release adds the following members in to support more local rendering capabilities:

        • surfaceTexture: Set a native Android SurfaceTexture object as the container that provides video images, and then implement the OpenGL texture drawing yourself using methods outside the SDK.
        • enableAlphaMask: Use this member to set whether the receiving end enables alpha mask rendering. Alpha mask rendering can create images with transparent effects and extract portraits from videos.
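
        A hedged Java sketch using the new members; surfaceTexture and enableAlphaMask are taken from this section and assumed to be public VideoCanvas fields, so check the API reference for the exact usage.

          SurfaceTexture texture = new SurfaceTexture(0);        // your own texture container
          VideoCanvas canvas = new VideoCanvas(null, VideoCanvas.RENDER_MODE_HIDDEN, 0);
          canvas.surfaceTexture = texture;                       // draw the texture yourself with OpenGL
          canvas.enableAlphaMask = true;                         // honor the alpha mask when rendering
          engine.setupLocalVideo(canvas);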

      • Others

        The following APIs have passed test verification in this release and can be applied to the entire RTC 4.x SDK series.

          • : Sets the fallback option for the subscribed video stream in weak network conditions.
          • : Occurs when the subscribed video stream falls back to an audio-only stream due to weak network conditions or switches back to the video stream after the network conditions improve.
          • (Windows): Sets the volume of the audio playback device.
          • (Windows): Sets the volume of the audio capturing device.
          • and : Set media player options for providing technical previews or special customization features.
          • : Sets whether to enable the local playback of an external audio source.

      • Adaptive configuration of the low-quality video stream

        This release adds adaptive configuration for the low-quality video stream. When you call to enable dual-stream mode on the sending end and configure the low-quality video stream, the SDK behaves as follows by default:

          • The default encoding resolution of the low-quality video stream is set to 50% of the original video encoding resolution.
          • The bitrate of the low-quality video stream is automatically matched to the video resolution and frame rate and does not need to be specified.
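
          A minimal sketch: enabling dual-stream mode on the sender without an explicit low-stream configuration, so the defaults described above apply (enableDualStreamMode is a standard RtcEngine call; the default values come from this note).

            engine.enableDualStreamMode(true);   // low stream: 50% resolution, auto-matched bitrate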


        • Others

            • Adds the method, which is used to enable media stream or data stream encryption in multi-channel scenarios.
            • Adds the method, which is used to set the playback speed of a music file.
            • Adds the method, which is used to get the call ID in multi-channel scenarios.

@@ -248,49 +77,40 @@

    Improvements
    1. SDK task processing scheduling optimization

      This release optimizes the scheduling mechanism for internal tasks within the SDK, with improvements in the following aspects:

        • There is a noticeable increase in the speed of video rendering and audio playback for both remote and local first frames.
        • The time taken for API calls and responses is reduced by more than 50%.
        • The SDK's parallel processing capability significantly improves, delivering higher video quality (720p, 24 fps) even on lower-end devices. Additionally, image processing remains more stable in scenarios involving high resolutions and frame rates.
        • The stability of the SDK is further enhanced, leading to a noticeable decrease in the crash rate across various specific scenarios.

    2. Screen sharing optimization for game scenarios (Windows)

      This release specifically optimizes screen sharing for game scenarios, improving performance in ultra-high-definition (4K, 60 fps) game scenarios and enhancing the stability of window capture, bringing players a clearer, smoother, and more stable gaming experience.

    3. In-ear monitoring volume boost

      This release provides users with more flexible in-ear monitoring audio adjustment options, supporting the ability to set the in-ear monitoring volume to four times the original volume by calling .
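
      A short Java sketch; both calls are standard RtcEngine methods, and the extended volume range (up to four times the original volume) is based on this note — verify the exact limit against the API reference.

        engine.enableInEarMonitoring(true);
        engine.setInEarMonitoringVolume(400);   // assumed upper bound: four times the original volume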

    4. Local video state changed callback optimization

      To help you understand the specific reasons for local video state changes, this release adds the following enumerations to the enumeration class reported in the callback:

        • (14): Video capture is interrupted because the camera is occupied by another app or the app has been switched to the background.
        • (15): An error occurs in the video capture device, possibly caused by a camera device failure.
        • (27): The window for screen capture has been restored from the minimized state.

    5. Camera capture improvement

      This release optimizes the video processing mechanism of camera capture, reducing noise and improving brightness and color, so that the captured images are clearer, brighter, and more true to life.

    6. Dual-stream mode optimization

      This release optimizes the video dual-stream mode. In the low-quality video stream adaptive mode (), when the receiver requests a low-quality video stream by calling or , the sender no longer restricts the identity of the request source to be the broadcaster. This adjustment ensures that the audience's request for a low-quality video stream takes effect, improving the flexibility and adaptability of the low-quality video stream.
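
      On the receiving end, requesting the low-quality stream is unchanged; with this adjustment it now takes effect even when the requester is an audience member (a standard call, shown for context):

        engine.setRemoteVideoStreamType(remoteUid, Constants.VIDEO_STREAM_LOW);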

    7. Get audio device type

      This release adds the deviceTypeName member in , which is used to get the type of the audio device, such as built-in, USB, or HDMI.

    8. Spatial audio effects usability improvement

        • This release optimizes the design of the method, supporting the ability to set the zones parameter to NULL, indicating the clearing of all echo cancellation zones.
        • As of this release, it is no longer necessary to unsubscribe from the audio streams of all remote users within the channel before calling the method.
        • This release introduces the method, designed to pass position vectors for direct rendering in iOS native frameworks such as SceneKit or RealityKit.

    9. Local audio state changed callback optimization

      This release introduces the following enumerations in , enabling users to obtain more details about local audio errors through the callback:

        • : No recording device is available. Remind your users to check whether the microphone is connected to the device properly in the control panel of the device or if the microphone is working properly.
        • : No playback device is available. Remind your users to check whether the speaker is connected to the device properly in the control panel of the device or if the speaker is working properly.
        • : The local audio capture is interrupted by a system call, Siri, or an alarm clock. Remind your users to end the phone call, Siri, or alarm clock if the local audio capture is required.

      • Virtual Background Algorithm Upgrade

        To improve the accuracy and stability of portrait segmentation when the virtual background is enabled against a solid-color background, this release optimizes the green screen segmentation algorithm:

          • Any solid-color background can be recognized, no longer limited to a green screen.
          • Background colors are recognized more accurately, and less background shows through when portraits are segmented.
          • The edges of segmented portraits (especially finger edges) are more stable, significantly reducing flickering around the edges.

      • Optimization of video pre-processing methods

        This release adds overloaded methods with the sourceType parameter for the following five video preprocessing methods, which support specifying the media source type for applying video preprocessing effects by passing in sourceType (for example, applying them on a custom video capture media source):

          •
          •
          •
          •
          •

      • Custom audio source

        To improve the flexibility of custom audio capture, this release deprecates and adds . Compared with the deprecated method, the new method adds the sampleRate, channels, and trackId parameters, which allow you to push external CMSampleBuffer audio data to the channel through a custom audio track and to set the sample rate and number of channels of the external audio source.

      • Reduced CPU usage of in-ear monitoring

        This release adds the (1 << 15) enumeration. For complex audio effect processing scenarios, you can specify this option to reuse the audio filter after audio effect processing on the sending end, thereby reducing CPU usage. Note that this option increases in-ear monitoring latency, which makes it suitable for use cases that are not sensitive to in-ear monitoring latency.

      • Other Improvements

        This release also includes the following improvements:

          • Optimizes algorithms in the video module, reduces bandwidth requirements during video transmission, and enhances the video experience for users while maintaining network stability.
          • Adds codecType in to set the video encoding type.
          • Adds the allowCaptureCurrentApp member in , which is used to set whether to capture audio from the current app during screen sharing. The default value of this member is , which means the audio of the current app is collected by default. In certain scenarios, the shared screen audio captured by the app may cause echo on the remote side due to signal delay and other reasons, so Agora suggests setting this member to to eliminate the remote echo introduced during the screen sharing process.
          • The callback is improved with the inclusion of the enumeration, signaling unexpected errors during the screen sharing process (potentially due to window blocking failure), resulting in performance degradation without impacting the screen sharing process itself. (Windows)
          • This release optimizes the SDK's domain name resolution strategy, improving the stability of calling to resolve domain names in complex network environments.
          • When passing in an image with a transparent background as the virtual background image, the transparent background can be filled with a customized color.
          • This release adds the earMonitorDelay and aecEstimatedDelay members in to report the ear monitor delay and acoustic echo cancellation (AEC) delay, respectively.
          • When using a sound card for recording, capturing audio data in stereo is supported.
          • The callback is added to report the statistics of the media file being cached. This callback is triggered once per second after file caching starts.
          • The callback is added to report the statistics of the media file being played. This callback is triggered once per second after the media file starts playing. You can obtain information such as the audio and video bitrate of the media file through .
          • An error code is added to indicate an internal HTTP error; you can try again later. (Android, iOS)
          • Optimizes the video encoding and decoding strategy for non-screen-sharing scenarios to reduce system performance overhead. (iOS, macOS, Windows)
          • For macOS 14 and later, the behavior of is optimized. As of this release, widget windows are automatically filtered out of the available window resources returned by this method. (macOS)
          • Improves the performance and stability of the local video mixing feature and reduces its CPU usage. (Android)
          • Improves the stability of the raw video frame observer when processing video. (iOS)
          • Improves the media player's handling of WebM-format videos, supporting rendering of the alpha channel.
          • Adds (a chorus audio effect) in , which gives the voice a stronger sense of spatial presence in chorus scenarios.
          • Adds (a chorus audio effect), which gives the voice a stronger sense of spatial presence in chorus scenarios.
          • Adds e2eDelay in to report the delay from when the audio is captured on the sending end to when the audio starts to play on the receiving end.

    @@ -301,10 +121,18 @@

    This release fixed the following issues:

@@ -312,130 +140,59 @@

    API changes

    Added


    Modified