From 82574f4541389539d92b7d18eadcc44d74c15557 Mon Sep 17 00:00:00 2001 From: Suri539 Date: Mon, 25 Nov 2024 16:52:49 +0800 Subject: [PATCH 1/6] 4.5.0 unity release --- .../en-US/framework/release_unity_ng.md | 128 +++++++++++++++++- 1 file changed, 124 insertions(+), 4 deletions(-) diff --git a/markdown/RTC 4.x/release-notes/en-US/framework/release_unity_ng.md b/markdown/RTC 4.x/release-notes/en-US/framework/release_unity_ng.md index fe8fee87ef3..2602ef4d8b6 100644 --- a/markdown/RTC 4.x/release-notes/en-US/framework/release_unity_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/framework/release_unity_ng.md @@ -1,13 +1,133 @@ ## Known issues and limitations -**Android 14 screen sharing issue (Android)** - -On Android 14 devices (such as OnePlus 11), screen sharing may not be available when `targetSdkVersion` is set to 34. For example, half of the shared screen may be black. To avoid this issue, Agora recommends setting `targetSdkVersion` to 34 or below. However, this may cause the screen sharing process to be interrupted when switching between portrait and landscape mode. In this case, a window will pop up on the device asking if you want to start recording the screen. After confirming, you can resume screen sharing. - **AirPods Pro Bluetooth connection issue (iOS)** AirPods Pro does not support A2DP protocol in communication audio mode, which may lead to connection failure in that mode. +## v4.5.0 + +This version was released on November x, 2024. + +#### Compatibility changes + +This version includes optimizations to some features, including changes to SDK behavior, API renaming and deletion. To ensure normal operation of the project, update the code in the app after upgrading to this release. + +1. **Member Parameter Type Changes** + + This version has made the following modifications to some API members or parameters: + + | API | Members/Parameters | Change | + | ------------------------------ | ---------------------------------------------------------- | ---------------------------- | + | `StartScreenCaptureByWindowId` | **windowId** | Changed from view_t to long | + | `ScreenCaptureConfiguration` | | Changed from unit to long | + | `ScreenCaptureSourceInfo` | | Changed from view_t to long | + +2. **Changes in strong video denoising implementation** + + This version adjusts the implementation of strong video denoising. + + The `VIDEO_DENOISER_LEVEL` removes `VIDEO_DENOISER_LEVEL_STRENGTH`. + + Instead, after enabling video denoising by calling `SetVideoDenoiserOptions`, you can call the `SetBeautyEffectOptions` method to enable the beauty skin smoothing feature. Using both together will achieve better video denoising effects. For strong denoising, it is recommended to set the skin smoothing parameters as detailed in `SetVideoDenoiserOptions`. + + Additionally, due to this adjustment, to achieve the best low-light enhancement effect with a focus on image quality, you need to enable video denoising first and use specific settings as detailed in `SetLowlightEnhanceOptions`. + +3. **Changes in camera plug and unplug status (macOS, Windows)** + + In previous versions, when the camera was unplugged and replugged, the `OnVideoDeviceStateChanged` callback would report the device status as `MEDIA_DEVICE_STATE_ACTIVE`(1) (device in use). Starting from this version, after the camera is replugged, the device status will change to `MEDIA_DEVICE_STATE_IDLE`(0) (device ready). + +4. 
**Changes in video encoding preferences** + + To enhance the user’s video interaction experience, this version optimizes the default preferences for video encoding: + + - In the `COMPRESSION_PREFERENCE` enumeration class, a new `PREFER_COMPRESSION_AUTO` (-1) enumeration is added, replacing the original `PREFER_QUALITY` (1) as the default value. In this mode, the SDK will automatically choose between `PREFER_LOW_LATENCY` or `PREFER_QUALITY` based on your video scene settings to achieve the best user experience. + - In the `DEGRADATION_PREFERENCE` enumeration class, a new `MAINTAIN_AUTO` (-1) enumeration is added, replacing the original `MAINTAIN_QUALITY` (1) as the default value. In this mode, the SDK will automatically choose between `MAINTAIN_FRAMERATE`, `MAINTAIN_BALANCED`, or `MAINTAIN_RESOLUTION` based on your video scene settings to achieve the optimal overall quality experience (QoE). + +#### New features + +1. **Live show scenario** + + This version adds the `APPLICATION_SCENARIO_LIVESHOW`(3) (Live Show) enumeration to the `VIDEO_APPLICATION_SCENARIO_TYPE`. You can call `SetVideoScenario` to set the video business scenario to show room. To meet the high requirements for first frame rendering time and image quality in this scenario, the SDK has optimized strategies to significantly improve the first frame rendering experience and image quality, while enhancing the image quality in weak network environments and on low-end devices. + +2. **Maximum frame rate for video rendering** + + This version adds the `SetLocalRenderTargetFps` and `SetRemoteRenderTargetFps` methods, which support setting the maximum frame rate for video rendering locally and remotely. The actual frame rate for video rendering by the SDK will be as close to this value as possible. + + In scenarios where the frame rate requirement for video rendering is not high (e.g., screen sharing, online education) or when the remote end uses mid-to-low-end devices, you can use this set of methods to limit the video rendering frame rate, thereby reducing CPU consumption and improving system performance. + +3. **Filter effects** + + This version introduces the `SetFilterEffectOptions` method. You can pass a cube map file (.cube) in the `config` parameter to achieve custom filter effects such as whitening, vivid, cool, black and white, etc. Additionally, the SDK provides a built-in `built_in_whiten_filter.cube` file for quickly achieving a whitening filter effect. + +4. **Local audio mixing** + + This version introduces the local audio mixing feature. You can call the `StartLocalAudioMixer` method to mix the audio streams from the local microphone, media player, sound card, and remote audio streams into a single audio stream, which can then be published to the channel. When you no longer need audio mixing, you can call the `StopLocalAudioMixer` method to stop local audio mixing. During the mixing process, you can call the `UpdateLocalAudioMixerConfiguration` method to update the configuration of the audio streams being mixed. + + Example use cases for this feature include: + + - By utilizing the local video mixing feature, the associated audio streams of the mixed video streams can be simultaneously captured and published. + - In live streaming scenarios, users can receive audio streams within the channel, mix multiple audio streams locally, and then forward the mixed audio stream to other channels. 
+ - In educational scenarios, teachers can mix the audio from interactions with students locally and then forward the mixed audio stream to other channels. + +5. **External MediaProjection (Android)** + + This version introduces the `SetExternalMediaProjection` method, which allows you to set an external `MediaProjection` and replace the `MediaProjection` applied by the SDK. + + If you have the capability to apply for `MediaProjection` on your own, you can use this feature to achieve more flexible screen capture. + +6. **EGL context (Android)** + + This version introduces the `SetExternalRemoteEglContext` method, which is used to set the EGL context for rendering remote video streams. When using Texture format video data for remote video self-rendering, you can use this method to replace the SDK's default remote EGL context, achieving unified EGL context management. + +7. **Color space settings** + + This version adds the **colorSpace** parameter to `VideoFrame` and `ExternalVideoFrame`. You can use this parameter to set the color space properties of the video frame. By default, the color space uses Full Range and BT.709 standard configuration. You can flexibly adjust according to your own capture or rendering needs, further enhancing the customization capabilities of video processing. + +8. **Others** + + - `OnLocalVideoStateChanged` callback adds the `LOCAL_VIDEO_STREAM_REASON_DEVICE_DISCONNECTED` enumeration, indicating that the currently used video capture device has been disconnected (e.g., unplugged). (Windows) + - `MEDIA_DEVICE_STATE_TYPE` adds the `MEDIA_DEVICE_STATE_PLUGGED_IN` enumeration, indicating that the device has been plugged in. (Windows) + +#### Improvements + +1. **Virtual background algorithm optimization** + + This version upgrades the virtual background algorithm, making the segmentation between the portrait and the background more accurate. There is no background exposure, the body contour of the portrait is complete, and the detail recognition of fingers is significantly improved. Additionally, the edges between the portrait and the background are more stable, reducing edge jumping and flickering in continuous video frames. + +2. **Snapshot at specified video observation points** + + This version introduces the `TakeSnapshot [2/2]` and `TakeSnapshotEx [2/2]` methods. You can use the `config` parameter when calling these methods to take snapshots at specified video observation points, such as before encoding, after encoding, or before rendering, to achieve more flexible snapshot effects. + +3. **Custom audio capture improvements** + + This version adds the `enableAudioProcessing` member parameter to `AudioTrackConfig`, which is used to control whether to enable 3A audio processing for custom audio capture tracks of the `AUDIO_TRACK_DIRECT` type. The default value of this parameter is `false`, meaning that audio processing is not enabled. Users can enable it as needed, enhancing the flexibility of custom audio processing. + +4. **Other Improvements** + + - In scenarios where Alpha transparency effects are achieved by stitching video frames and Alpha data, the rendering performance on the receiving end has been improved, effectively reducing stuttering and latency. (Android, iOS) + - Optimizes the logic for calling `QueryDeviceScore` to obtain device score levels, improving the accuracy of the score results. + - Supports using virtual cameras in YV12 format as video capture devices. 
(Windows) + - When calling `SwitchSrc` to switch between live streams or on-demand streams of different resolutions, smooth and seamless switching can be achieved. An automatic retry mechanism has been added in case of switching failures. The SDK will automatically retry 3 times after a failure. If it still fails, the `OnPlayerEvent` callback will report the `PLAYER_EVENT_SWITCH_ERROR` event, indicating an error occurred during media resource switching. + - When calling `SetPlaybackSpeed` to set the playback speed of an audio file, the minimum supported speed is 0.3x. + +#### Bug fixes + +This version fixes the following issues: + +- When calling `StartScreenCaptureByWindowId` to share the screen, the window capture area specified by **regionRect** was inaccurate, resulting in incorrect width and height of the screen sharing window seen by the receiving end. (Windows) +- When the video source type of the sender is in JPEG format, the frame rate on the receiving end occasionally falls below expectations. (Android, iOS) +- During audio and video interaction, after being interrupted by a system call, the user volume reported by the `OnAudioVolumeIndication` callback was incorrect. (Android) +- When the receiving end subscribes to the video small stream by default and does not automatically subscribe to any video stream when joining the channel, calling `MuteRemoteVideoStream``(uid, false)` after joining the channel to resume receiving the video stream results in receiving the video large stream, which is not as expected. (Android) +- Occasional errors of not finding system files during audio and video interaction on Windows 7 systems. (Windows) +- When calling `FollowSystemRecordingDevice` or `FollowSystemPlaybackDevice` to set the audio capture or playback device used by the SDK to not follow the system default audio playback device, the local audio state callback `OnLocalAudioStateChanged` is not triggered when the audio device is removed, which is not as expected. (Windows) +- Occasional instances where the receiving end cannot hear the sender during audio and video interaction. (iOS) +- During audio and video interaction, if the sender's device system version is iOS 17, the receiving end occasionally cannot hear the sender. (iOS) +- In live streaming scenarios, the time taken to reconnect to the live room after the audience end disconnects due to network switching is longer than expected. (iOS) +- No sound when playing online media resources using the media player after the app starts. (iOS) +- Occasional instances of no sound in audio capture after resuming from being interrupted by other system apps during audio and video interaction. (iOS) +- Calling `StartAudioMixing [1/2]` and then immediately calling `PauseAudioMixing` to pause the music file playback does not take effect. +- Occasional crashes during audio and video interaction. (Android) + ## v4.4.0 This version was released on August x, 2024. 
From 9b93e71127f171d3c624501dd4042bcd07ccb955 Mon Sep 17 00:00:00 2001 From: Suri539 Date: Mon, 25 Nov 2024 17:04:01 +0800 Subject: [PATCH 2/6] flutter 450 release --- en-US/dita/RTC-NG/release/release_notes.dita | 2 +- .../en-US/framework/release_flutter_ng.md | 129 ++++++++++++++---- .../en-US/framework/release_unity_ng.md | 2 +- 3 files changed, 108 insertions(+), 25 deletions(-) diff --git a/en-US/dita/RTC-NG/release/release_notes.dita b/en-US/dita/RTC-NG/release/release_notes.dita index 924cdbe483c..7d009d26396 100644 --- a/en-US/dita/RTC-NG/release/release_notes.dita +++ b/en-US/dita/RTC-NG/release/release_notes.dita @@ -178,7 +178,7 @@ - Bug fixes + Issues fixed

This version fixes the following issues:

    diff --git a/markdown/RTC 4.x/release-notes/en-US/framework/release_flutter_ng.md b/markdown/RTC 4.x/release-notes/en-US/framework/release_flutter_ng.md index 3085667681c..3d177abb911 100644 --- a/markdown/RTC 4.x/release-notes/en-US/framework/release_flutter_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/framework/release_flutter_ng.md @@ -1,28 +1,42 @@ ## Known issues and limitations -**Android 14 screen sharing issue (Android)** - -On Android 14 devices (such as OnePlus 11), screen sharing may not be available when `targetSdkVersion` is set to 34. For example, half of the shared screen may be black. To avoid this issue, Agora recommends setting `targetSdkVersion` to 34 or below. However, this may cause the screen sharing process to be interrupted when switching between portrait and landscape mode. In this case, a window will pop up on the device asking if you want to start recording the screen. After confirming, you can resume screen sharing. - **AirPods Pro Bluetooth connection issue (iOS)** AirPods Pro does not support A2DP protocol in communication audio mode, which may lead to connection failure in that mode. +## v6.5.0 +This version was released on November x, 2024. +#### Compatibility changes -## v6.4.0 +This version includes optimizations to some features, including changes to SDK behavior, API renaming and deletion. To ensure normal operation of the project, update the code in the app after upgrading to this release. -This version was released on August x, 2024. +1. **Changes in strong video denoising implementation** -#### Compatibility changes + This version adjusts the implementation of strong video denoising. -This version includes optimizations to some features, including changes to SDK behavior, API renaming and deletion. To ensure normal operation of the project, update the code in the app after upgrading to this release. + The `VideoDenoiserLevel` removes `videoDenoiserLevelStrength`. + + Instead, after enabling video denoising by calling `setVideoDenoiserOptions`, you can call the `setBeautyEffectOptions` method to enable the beauty skin smoothing feature. Using both together will achieve better video denoising effects. For strong denoising, it is recommended to set the skin smoothing parameters as detailed in `setVideoDenoiserOptions`. + + Additionally, due to this adjustment, to achieve the best low-light enhancement effect with a focus on image quality, you need to enable video denoising first and use specific settings as detailed in `setLowlightEnhanceOptions`. + +2. **Changes in camera plug and unplug status (macOS, Windows)** + + In previous versions, when the camera was unplugged and replugged, the `onVideoDeviceStateChanged` callback would report the device status as mediaDeviceStateActive(1) (device in use). Starting from this version, after the camera is replugged, the device status will change to `mediaDeviceStateIdle`(0) (device ready). + +3. **Changes in video encoding preferences** + + To enhance the user’s video interaction experience, this version optimizes the default preferences for video encoding: -1. To distinguish context information in different extension callbacks, this version removes the original extension callbacks and adds corresponding callbacks that contain context information (see the table below). You can identify the extension name, the user ID, and the service provider name through `ExtensionContext` in each callback. 
+ - In the `CompressionPreference` enumeration class, a new `preferCompressionAuto` (-1) enumeration is added, replacing the original `preferQuality` (1) as the default value. In this mode, the SDK will automatically choose between `preferLowLatency` or `preferQuality` based on your video scene settings to achieve the best user experience. + - In the `DegradationPreference` enumeration class, a new `maintainAuto` (-1) enumeration is added, replacing the original `maintainQuality` (1) as the default value. In this mode, the SDK will automatically choose between `maintainFramerate`, `maintainBalanced`, or `maintainResolution` based on your video scene settings to achieve the optimal overall quality experience (QoE). - | Original callback | Current callback | - | ------------------ | ------------------------------- | +4. To distinguish context information in different extension callbacks, this version removes the original extension callbacks and adds corresponding callbacks that contain context information (see the table below). You can identify the extension name, the user ID, and the service provider name through `ExtensionContext` in each callback. + + | Original callback | Current callback | + | -------------------- | ------------------------------- | | `onExtensionEvent` | `onExtensionEventWithContext` | | `onExtensionStarted` | `onExtensionStartedWithContext` | | `onExtensionStopped` | `onExtensionStoppedWithContext` | @@ -30,32 +44,90 @@ This version includes optimizations to some features, including changes to SDK b #### New features -1. **Voice AI tuner** +1. **Live show scenario** + + This version adds the `applicationScenarioLiveshow`(3) (Live Show) enumeration to the `VideoApplicationScenarioType`. You can call `setVideoScenario` to set the video business scenario to show room. To meet the high requirements for first frame rendering time and image quality in this scenario, the SDK has optimized strategies to significantly improve the first frame rendering experience and image quality, while enhancing the image quality in weak network environments and on low-end devices. + +2. **Maximum frame rate for video rendering** + + This version adds the `setLocalRenderTargetFps` and `setRemoteRenderTargetFps` methods, which support setting the maximum frame rate for video rendering locally and remotely. The actual frame rate for video rendering by the SDK will be as close to this value as possible. + + In scenarios where the frame rate requirement for video rendering is not high (e.g., screen sharing, online education) or when the remote end uses mid-to-low-end devices, you can use this set of methods to limit the video rendering frame rate, thereby reducing CPU consumption and improving system performance. + +3. **Filter effects** + + This version introduces the `setFilterEffectOptions` method. You can pass a cube map file (.cube) in the `config` parameter to achieve custom filter effects such as whitening, vivid, cool, black and white, etc. Additionally, the SDK provides a built-in `built_in_whiten_filter.cube` file for quickly achieving a whitening filter effect. + +4. **Local audio mixing** + + This version introduces the local audio mixing feature. You can call the `startLocalAudioMixer` method to mix the audio streams from the local microphone, media player, sound card, and remote audio streams into a single audio stream, which can then be published to the channel. When you no longer need audio mixing, you can call the `stopLocalAudioMixer` method to stop local audio mixing. 
During the mixing process, you can call the `updateLocalAudioMixerConfiguration` method to update the configuration of the audio streams being mixed. + + Example use cases for this feature include: + + - By utilizing the local video mixing feature, the associated audio streams of the mixed video streams can be simultaneously captured and published. + - In live streaming scenarios, users can receive audio streams within the channel, mix multiple audio streams locally, and then forward the mixed audio stream to other channels. + - In educational scenarios, teachers can mix the audio from interactions with students locally and then forward the mixed audio stream to other channels. + +5. **External MediaProjection (Android)** + + This version introduces the `setExternalMediaProjection` method, which allows you to set an external `MediaProjection` and replace the `MediaProjection` applied by the SDK. + + If you have the capability to apply for `MediaProjection` on your own, you can use this feature to achieve more flexible screen capture. + +6. **EGL context (Android)** + + This version introduces the `setExternalRemoteEglContext` method, which is used to set the EGL context for rendering remote video streams. When using Texture format video data for remote video self-rendering, you can use this method to replace the SDK's default remote EGL context, achieving unified EGL context management. + +7. **Color space settings** + + This version adds the **colorSpace** parameter to `VideoFrame` and `ExternalVideoFrame`. You can use this parameter to set the color space properties of the video frame. By default, the color space uses Full Range and BT.709 standard configuration. You can flexibly adjust according to your own capture or rendering needs, further enhancing the customization capabilities of video processing. + +8. **Voice AI tuner** This version introduces the voice AI tuner feature, which can enhance the sound quality and tone, similar to a physical sound card. You can enable the voice AI tuner feature by calling the `enableVoiceAITuner` method and passing in the sound effect types supported in the `VoiceAiTunerType` enum to achieve effects like deep voice, cute voice, husky singing voice, etc. -2. **1v1 video call scenario** +9. **1v1 video call scenario** This version adds `applicationScenario1v1` (1v1 video call) in `VideoApplicationScenarioType`. You can call `setVideoScenario` to set the video application scenario to 1v1 video call, the SDK optimizes performance to achieve low latency and high video quality, enhancing image quality, first frame rendering, latency on mid-to-low-end devices, and smoothness under poor network conditions. +10. **Others** + + - `onLocalVideoStateChanged` callback adds the `localVideoStreamReasonDeviceDisconnected` enumeration, indicating that the currently used video capture device has been disconnected (e.g., unplugged). (Windows) + - `MediaDeviceStateType` adds the `mediaDeviceStatePluggedIn` enumeration, indicating that the device has been plugged in. (Windows) + #### Improvements -1. **Adaptive hardware decoding support (Android, Windows)** +1. **Virtual background algorithm optimization** + + This version upgrades the virtual background algorithm, making the segmentation between the portrait and the background more accurate. There is no background exposure, the body contour of the portrait is complete, and the detail recognition of fingers is significantly improved. 
Additionally, the edges between the portrait and the background are more stable, reducing edge jumping and flickering in continuous video frames. + +2. **Snapshot at specified video observation points** + + This version introduces the `takeSnapshotWithConfig` and `takeSnapshotWithConfigEx` methods. You can use the `config` parameter when calling these methods to take snapshots at specified video observation points, such as before encoding, after encoding, or before rendering, to achieve more flexible snapshot effects. + +3. **Custom audio capture improvements** + + This version adds the `enableAudioProcessing` member parameter to `AudioTrackConfig`, which is used to control whether to enable 3A audio processing for custom audio capture tracks of the `AUDIO_TRACK_DIRECT` type. The default value of this parameter is `false`, meaning that audio processing is not enabled. Users can enable it as needed, enhancing the flexibility of custom audio processing. + +4. **Adaptive hardware decoding support (Android, Windows)** This release introduces adaptive hardware decoding support, enhancing rendering smoothness on low-end devices and effectively reducing system load. -2. **Rendering performance enhancement (Windows)** +5. **Rendering performance enhancement (Windows)** DirectX 11 renderer is now enabled by default on Windows devices, providing high-performance and high-quality graphics rendering capabilities. -3. **Facial region beautification** +6. **Facial region beautification** To avoid losing details in non-facial areas during heavy skin smoothing, this version improves the skin smoothing algorithm. The SDK now recognizes various parts of the face, applying smoothing to facial skin areas excluding the mouth, eyes, and eyebrows. In addition, the SDK supports smoothing up to two faces simultaneously. -4. **Other improvements** - - This version also includes the following improvements: +7. **Other Improvements** + - In scenarios where Alpha transparency effects are achieved by stitching video frames and Alpha data, the rendering performance on the receiving end has been improved, effectively reducing stuttering and latency. (Android, iOS) + - Optimizes the logic for calling `queryDeviceScore` to obtain device score levels, improving the accuracy of the score results. + - Supports using virtual cameras in YV12 format as video capture devices. (Windows) + - When calling `switchSrc` to switch between live streams or on-demand streams of different resolutions, smooth and seamless switching can be achieved. An automatic retry mechanism has been added in case of switching failures. The SDK will automatically retry 3 times after a failure. If it still fails, the `onPlayerEvent` callback will report the `playerEventSwitchError` event, indicating an error occurred during media resource switching. + - When calling `setPlaybackSpeed` to set the playback speed of an audio file, the minimum supported speed is 0.3x. - Optimizes transmission strategy: calling `enableInstantMediaRendering` no longer impacts the security of the transmission link. - The `localVideoStreamReasonScreenCaptureDisplayDisconnected` enumerator is added in `onLocalVideoStateChanged` callback, indicating that the display used for screen capture has been disconnected. (Windows, macOS) - Optimizes the video link for window sharing, reducing CPU usage. 
(macOS) @@ -65,16 +137,27 @@ This version includes optimizations to some features, including changes to SDK b #### Issues fixed -This release fixed the following issues: - +This version fixes the following issues: + +- When calling `startScreenCaptureByWindowId` to share the screen, the window capture area specified by **regionRect** was inaccurate, resulting in incorrect width and height of the screen sharing window seen by the receiving end. (Windows) +- When the video source type of the sender is in JPEG format, the frame rate on the receiving end occasionally falls below expectations. (Android, iOS) +- During audio and video interaction, after being interrupted by a system call, the user volume reported by the `onAudioVolumeIndication` callback was incorrect. (Android) +- When the receiving end subscribes to the video small stream by default and does not automatically subscribe to any video stream when joining the channel, calling `muteRemoteVideoStream(uid, false)` after joining the channel to resume receiving the video stream results in receiving the video large stream, which is not as expected. (Android) +- Occasional errors of not finding system files during audio and video interaction on Windows 7 systems. (Windows) +- When calling `followSystemRecordingDevice` or `followSystemPlaybackDevice` to set the audio capture or playback device used by the SDK to not follow the system default audio playback device, the local audio state callback `onLocalAudioStateChanged` is not triggered when the audio device is removed, which is not as expected. (Windows) +- Occasional instances where the receiving end cannot hear the sender during audio and video interaction. (iOS) +- During audio and video interaction, if the sender's device system version is iOS 17, the receiving end occasionally cannot hear the sender. (iOS) +- In live streaming scenarios, the time taken to reconnect to the live room after the audience end disconnects due to network switching is longer than expected. (iOS) +- No sound when playing online media resources using the media player after the app starts. (iOS) +- Occasional instances of no sound in audio capture after resuming from being interrupted by other system apps during audio and video interaction. (iOS) +- Calling `startAudioMixing`and then immediately calling `pauseAudioMixing` to pause the music file playback does not take effect. +- Occasional crashes during audio and video interaction. (Android) - Occasional app crashes occurred when multiple remote users joined the channel simultaneously during real-time interaction. (iOS) - Remote video occasionally froze or displayed corrupted images when the app returned to the foreground after being in the background for a while. (iOS) - After the sender called `startDirectCdnStreaming` to start direct CDN streaming, frequent switching or toggling of the network occasionally resulted in a black screen on the receiver's end without a streaming failure callback on the sender's end. (iOS) - Audio playback failed when pushing external audio data using `pushAudioFrame` and the sample rate was not set as a recommended value, such as 22050 Hz and 11025 Hz. (Android, iOS) - - ## v6.3.2 This version was released on May x, 20xx. 
diff --git a/markdown/RTC 4.x/release-notes/en-US/framework/release_unity_ng.md b/markdown/RTC 4.x/release-notes/en-US/framework/release_unity_ng.md index 2602ef4d8b6..13edb452335 100644 --- a/markdown/RTC 4.x/release-notes/en-US/framework/release_unity_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/framework/release_unity_ng.md @@ -110,7 +110,7 @@ This version includes optimizations to some features, including changes to SDK b - When calling `SwitchSrc` to switch between live streams or on-demand streams of different resolutions, smooth and seamless switching can be achieved. An automatic retry mechanism has been added in case of switching failures. The SDK will automatically retry 3 times after a failure. If it still fails, the `OnPlayerEvent` callback will report the `PLAYER_EVENT_SWITCH_ERROR` event, indicating an error occurred during media resource switching. - When calling `SetPlaybackSpeed` to set the playback speed of an audio file, the minimum supported speed is 0.3x. -#### Bug fixes +#### Issues fixed This version fixes the following issues: From dd098a149f298e3ef464c9c5bcfb80e7b63ad58c Mon Sep 17 00:00:00 2001 From: chenmeng Date: Wed, 27 Nov 2024 16:05:38 +0800 Subject: [PATCH 3/6] add bp and unreal 450 release notes --- en-US/dita/RTC-NG/release/release_notes.dita | 5 +- .../en-US/framework/release_bluerprint_ng.md | 85 ++++++++++++- .../en-US/framework/release_unreal_ng.md | 117 +++++++++++++++++- 3 files changed, 197 insertions(+), 10 deletions(-) diff --git a/en-US/dita/RTC-NG/release/release_notes.dita b/en-US/dita/RTC-NG/release/release_notes.dita index 7d009d26396..360b3277eba 100644 --- a/en-US/dita/RTC-NG/release/release_notes.dita +++ b/en-US/dita/RTC-NG/release/release_notes.dita @@ -195,8 +195,9 @@
  • In live streaming scenarios, the time taken to reconnect to the live room after the audience end disconnects due to network switching is longer than expected. (iOS)
  • No sound when playing online media resources using the media player after the app starts. (iOS)
  • Occasional instances of no sound in audio capture after resuming from being interrupted by other system apps during audio and video interaction. (iOS)
  • Calling startAudioMixing and then immediately calling pauseAudioMixing to pause the music file playback does not take effect.
  • Occasional crashes during audio and video interaction. (Android)
diff --git a/markdown/RTC 4.x/release-notes/en-US/framework/release_bluerprint_ng.md b/markdown/RTC 4.x/release-notes/en-US/framework/release_bluerprint_ng.md index c80bd08693f..9847741149a 100644 --- a/markdown/RTC 4.x/release-notes/en-US/framework/release_bluerprint_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/framework/release_bluerprint_ng.md @@ -9,6 +9,85 @@ The audio module of the Agora Unreal SDK is incompatible with the AudioMixer mod AudioMixerModuleName= ``` +## v4.5.0 +This version was released on November x, 2024. + +#### Compatibility changes + +This version includes optimizations to some features, including changes to SDK behavior, API renaming and deletion. To ensure normal operation of the project, update the code in the app after upgrading to this release. + +**Changes in strong video denoising implementation** +This version adjusts the implementation of strong video denoising. The `EUABT_VIDEO_DENOISER_LEVEL` removes the `VIDEO_DENOISER_LEVEL_STRENGTH` enumeration. Instead, after enabling video denoising by calling `SetVideoDenoiserOptions`, you can call the `SetBeautyEffectOptions` method to enable the beauty skin smoothing feature. Using both together will achieve better video denoising effects. For strong denoising, it is recommended to set the skin smoothing parameters as detailed in `SetVideoDenoiserOptions`. + +Additionally, due to this adjustment, to achieve the best low-light enhancement effect with a focus on image quality, you need to enable video denoising first and use specific settings as detailed in `SetLowlightEnhanceOptions`. + +**Changes in camera plug and unplug status (macOS, Windows)** + +In previous versions, when the camera was unplugged and replugged, the OnVideoDeviceStateChanged callback would report the device status as `MEDIA_DEVICE_STATE_ACTIVE(1)` (device in use). Starting from this version, after the camera is replugged, the device status will change to `MEDIA_DEVICE_STATE_IDLE(0)` (device ready). + +**Changes in video encoding preferences** + +To enhance the user’s video interaction experience, this version optimizes the default preferences for video encoding: +- In the `ECOMPRESSION_PREFERENCE` enumeration class, a new `PREFER_COMPRESSION_AUTO (-1)` enumeration is added, replacing the original `PREFER_QUALITY (1)` as the default value. In this mode, the SDK will automatically choose between `PREFER_LOW_LATENCY` or `PREFER_QUALITY` based on your video scene settings to achieve the best user experience. +- In the `EUABT_DEGRADATION_PREFERENCE` enumeration class, a new `MAINTAIN_AUTO (-1)` enumeration is added, replacing the original `MAINTAIN_QUALITY (1)` as the default value. In this mode, the SDK will automatically choose between `MAINTAIN_FRAMERATE`, `MAINTAIN_BALANCED`, or `MAINTAIN_RESOLUTION` based on your video scene settings to achieve the optimal overall quality experience (QoE). + +#### New features + +**Live show scenario** + +This version adds the `APPLICATION_SCENARIO_LIVESHOW(3)` (Live Show) enumeration to the `EVIDEO_APPLICATION_SCENARIO_TYPE`. You can call `SetVideoScenario` to set the video business scenario to show room. In this scenario, fast video rendering and high image quality are crucial. The SDK implements several performance optimizations, such as enabling accelerated audio and video frame rendering to minimize first-frame latency (no need to call `EnableInstantMediaRendering`) to achieve better image quality and bandwidth efficiency. 
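Below is a minimal sketch of opting into this scenario once the engine is up. The Blueprint "Set Video Scenario" node wraps the native call shown here; how you obtain the engine pointer, and the exact header path, depend on your project setup, so treat the names outside this release note as assumptions and verify them against your SDK headers.

```cpp
// Sketch only: apply the live show scenario after the engine is initialized.
// The include path is illustrative; adjust it to your plugin layout.
#include "IAgoraRtcEngine.h"

void EnableLiveShowScenario(agora::rtc::IRtcEngine* Engine)
{
    if (Engine == nullptr)
    {
        return; // Engine not created yet.
    }

    // Mark the session as a show room so the SDK applies the first-frame
    // rendering and image-quality optimizations described above.
    Engine->setVideoScenario(
        agora::rtc::VIDEO_APPLICATION_SCENARIO_TYPE::APPLICATION_SCENARIO_LIVESHOW);
}
```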
+ +**Maximum frame rate for video rendering** + +This version adds the `SetLocalRenderTargetFps` and `SetRemoteRenderTargetFps` methods, which support setting the maximum frame rate for video rendering locally and remotely. The actual frame rate for video rendering by the SDK will be as close to this value as possible. + +In scenarios where the frame rate requirement for video rendering is not high (e.g., screen sharing, online education) or when the remote end uses mid-to-low-end devices, you can use this set of methods to limit the video rendering frame rate, thereby reducing CPU consumption and improving system performance. + +**Filter effects** + +This version introduces the `SetFilterEffectOptions` method. You can pass a cube map file (.cube) in the `config` parameter to achieve custom filter effects such as whitening, vivid, cool, black and white, etc. Additionally, the SDK provides a built-in `built_in_whiten_filter.cube` file for quickly achieving a whitening filter effect. + +**Others** + +- The `OnLocalVideoStateChanged` callback adds the `LOCAL_VIDEO_STREAM_REASON_DEVICE_DISCONNECTED` enumeration, indicating that the currently used video capture device has been disconnected (e.g., unplugged). (Windows) +- The `EMEDIA_DEVICE_STATE_TYPE` adds the `MEDIA_DEVICE_STATE_PLUGGED_IN` enumeration, indicating that the device has been plugged in. (Windows) + +#### Improvements + +**Virtual background algorithm optimization** + +This version upgrades the virtual background algorithm, making the segmentation between the portrait and the background more accurate. There is no background exposure, the body contour of the portrait is complete, and the detail recognition of fingers is significantly improved. Additionally, the edges between the portrait and the background are more stable, reducing edge jumping and flickering in continuous video frames. + +**Snapshot at specified video observation points** + +This version introduces the `TakeSnapshotWithConfig` and `TakeSnapshotWithConfigEx` methods. You can use the `config` parameter when calling these methods to take snapshots at specified video observation points, such as before encoding, after encoding, or before rendering, to achieve more flexible snapshot effects. + +**Others** + +- In scenarios where Alpha transparency effects are achieved by stitching video frames and Alpha data, the rendering performance on the receiving end has been improved, effectively reducing stuttering and latency. (Android, iOS) +- Optimizes the logic for calling `QueryDeviceScore` to obtain device score levels, improving the accuracy of the score results. +- Supports using virtual cameras in YV12 format as video capture devices. (Windows) +- When calling `SwitchSrc` to switch between live streams or on-demand streams of different resolutions, smooth and seamless switching can be achieved. An automatic retry mechanism has been added in case of switching failures. The SDK will automatically retry 3 times after a failure. If it still fails, the `OnPlayerEvent` callback will report the `PLAYER_EVENT_SWITCH_ERROR` event, indicating an error occurred during media resource switching. +- When calling `SetPlaybackSpeed` to set the playback speed of an audio file, the minimum supported speed is 0.3x. 
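As a concrete illustration of the playback-speed improvement above, the sketch below slows a media player to the new 0.3x minimum. It assumes you have already created an `IMediaPlayer` instance and that the speed argument is a percentage of the original speed (so 30 corresponds to 0.3x); verify both assumptions against your SDK headers.

```cpp
// Sketch only: play an audio file at the slowest supported speed (0.3x).
#include "IAgoraMediaPlayer.h"

void PlayAtSlowestSpeed(agora::rtc::IMediaPlayer* Player, const char* Url)
{
    if (Player == nullptr || Url == nullptr)
    {
        return;
    }

    // In production, wait for the player to report that the source has been
    // opened before adjusting playback parameters.
    Player->open(Url, /*startPos=*/0);

    // 100 = original speed; 30 is the new minimum introduced in this release
    // (assuming a percentage-based speed value).
    Player->setPlaybackSpeed(30);
}
```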
+ +#### Issues fixed + +This version fixes the following issues: + +- When calling `StartScreenCaptureByWindowId` to share the screen, the window capture area specified by `regionRect` was inaccurate, resulting in incorrect width and height of the screen sharing window seen by the receiving end. (Windows) +- When the video source type of the sender is in JPEG format, the frame rate on the receiving end occasionally falls below expectations. (Android, iOS) +- During audio and video interaction, after being interrupted by a system call, the user volume reported by the `OnAudioVolumeIndication` callback was incorrect. (Android) +- When the receiving end subscribes to the video small stream by default and does not automatically subscribe to any video stream when joining the channel, calling `MuteRemoteVideoStream(uid, false)` after joining the channel to resume receiving the video stream results in receiving the video large stream, which is not as expected. (Android) +- Occasional errors of not finding system files during audio and video interaction on Windows 7 systems. (Windows) +- When calling `FollowSystemRecordingDevice` or `FollowSystemPlaybackDevice` to set the audio capture or playback device used by the SDK to not follow the system default audio playback device, the local audio state callback `OnLocalAudioStateChanged` is not triggered when the audio device is removed, which is not as expected. (Windows) +- Occasional instances where the receiving end cannot hear the sender during audio and video interaction. (iOS) +- During audio and video interaction, if the sender's device system version is iOS 17, the receiving end occasionally cannot hear the sender. (iOS) +- In live streaming scenarios, the time taken to reconnect to the live room after the audience end disconnects due to network switching is longer than expected. (iOS) +- No sound when playing online media resources using the media player after the app starts. (iOS) +- Occasional instances of no sound in audio capture after resuming from being interrupted by other system apps during audio and video interaction. (iOS) +- Calling `StartAudioMixing` and then immediately calling `PauseAudioMixing` to pause the music file playback does not take effect. +- Occasional crashes during audio and video interaction. (Android) + ## v4.4.0 This version releases on August xx, 2024. @@ -78,7 +157,7 @@ The image below demonstrates how to obtain the `EventHandler` through the [`GetE - The `CONTENT_INSPECT_IMAGE_MODERATION` enumeration is added in `ECONTENT_INSPECT_TYPE` which means using video moderation extensions from Agora Extensions Marketplace to take video screenshots and upload them. - An optional parameter `serverConfig` is added in `FContentInspectConfig`, which is for server-side configuration related to video screenshot and upload via extensions from Agora Extensions Marketplace. By configuring this parameter, you can integrate multiple third-party moderation extensions and achieve flexible control over extension switches and other features. For more details, please contact [technical support](mailto:support@agora.io). - + Additionally, this version introduces the `EnableContentInspectEx` method, which supports simultaneous screenshot and upload of multiple video streams. 6. **Wildcard token** @@ -129,9 +208,9 @@ if (Target.Platform == UnrealTargetPlatform.IOS) ``` -## v4.2.1 +## v4.2.1 -This version was released on October xx, 2023. +This version was released on October xx, 2023. 
This is the first release of Video SDK for Unreal Blueprint. diff --git a/markdown/RTC 4.x/release-notes/en-US/framework/release_unreal_ng.md b/markdown/RTC 4.x/release-notes/en-US/framework/release_unreal_ng.md index 19464b000ca..240658f7327 100644 --- a/markdown/RTC 4.x/release-notes/en-US/framework/release_unreal_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/framework/release_unreal_ng.md @@ -9,6 +9,113 @@ The audio module of the Agora Unreal SDK is incompatible with the AudioMixer mod AudioMixerModuleName= ``` +## v4.5.0 + +This version was released on November x, 2024. + +#### Compatibility changes + +This version includes optimizations to some features, including changes to SDK behavior, API renaming and deletion. To ensure normal operation of the project, update the code in the app after upgrading to this release. + +**Changes in strong video denoising implementation** + +This version adjusts the implementation of strong video denoising. The `VIDEO_DENOISER_LEVEL` removes `VIDEO_DENOISER_LEVEL_STRENGTH`. Instead, after enabling video denoising by calling `setVideoDenoiserOptions`, you can call the `setBeautyEffectOptions` method to enable the beauty skin smoothing feature. Using both together will achieve better video denoising effects. For strong denoising, it is recommended to set the skin smoothing parameters as detailed in `setVideoDenoiserOptions`. + +Additionally, due to this adjustment, to achieve the best low-light enhancement effect with a focus on image quality, you need to enable video denoising first and use specific settings as detailed in `setLowlightEnhanceOptions`. + +**Changes in camera plug and unplug status (macOS, Windows)** + +In previous versions, when the camera was unplugged and replugged, the `onVideoDeviceStateChanged` callback would report the device status as `MEDIA_DEVICE_STATE_ACTIVE(1)` (device in use). Starting from this version, after the camera is replugged, the device status will change to `MEDIA_DEVICE_STATE_IDLE(0)` (device ready). + +**Changes in video encoding preferences** + +To enhance the user’s video interaction experience, this version optimizes the default preferences for video encoding: +- In the `COMPRESSION_PREFERENCE` enumeration class, a new `PREFER_COMPRESSION_AUTO (-1)` enumeration is added, replacing the original `PREFER_QUALITY (1)` as the default value. In this mode, the SDK will automatically choose between `PREFER_LOW_LATENCY` or `PREFER_QUALITY` based on your video scene settings to achieve the best user experience. +- In the `DEGRADATION_PREFERENCE` enumeration class, a new `MAINTAIN_AUTO (-1)` enumeration is added, replacing the original `MAINTAIN_QUALITY (1)` as the default value. In this mode, the SDK will automatically choose between `MAINTAIN_FRAMERATE`, `MAINTAIN_BALANCED`, or `MAINTAIN_RESOLUTION` based on your video scene settings to achieve the optimal overall quality experience (QoE). + +#### New features + +**Live show scenario** + +This version adds the `APPLICATION_SCENARIO_LIVESHOW(3)` (Live Show) enumeration to the `VIDEO_APPLICATION_SCENARIO_TYPE`. You can call `setVideoScenario` to set the video business scenario to show room. In this scenario, fast video rendering and high image quality are crucial. The SDK implements several performance optimizations, such as enabling accelerated audio and video frame rendering to minimize first-frame latency (no need to call `enableInstantMediaRendering`) to achieve better image quality and bandwidth efficiency. 
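A short sketch of selecting this scenario through the native engine interface follows; it assumes an initialized `agora::rtc::IRtcEngine` pointer from your own setup code and simply checks the return value, since this scenario removes the need for a separate `enableInstantMediaRendering` call.

```cpp
// Sketch only: select the live show scenario and surface the result code.
#include "IAgoraRtcEngine.h"

int ApplyLiveShowScenario(agora::rtc::IRtcEngine* Engine)
{
    if (Engine == nullptr)
    {
        return -1; // Engine not created yet.
    }

    // APPLICATION_SCENARIO_LIVESHOW(3) enables the accelerated first-frame
    // rendering described above, so enableInstantMediaRendering is not needed.
    return Engine->setVideoScenario(
        agora::rtc::VIDEO_APPLICATION_SCENARIO_TYPE::APPLICATION_SCENARIO_LIVESHOW);
    // 0 indicates success; a negative value is an error code.
}
```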
+ +**Maximum frame rate for video rendering** + +This version adds the `setLocalRenderTargetFps` and `setRemoteRenderTargetFps` methods, which support setting the maximum frame rate for video rendering locally and remotely. The actual frame rate for video rendering by the SDK will be as close to this value as possible. + +In scenarios where the frame rate requirement for video rendering is not high (e.g., screen sharing, online education) or when the remote end uses mid-to-low-end devices, you can use this set of methods to limit the video rendering frame rate, thereby reducing CPU consumption and improving system performance. + +**Filter effects** + +This version introduces the `setFilterEffectOptions` method. You can pass a cube map file (.cube) in the `config` parameter to achieve custom filter effects such as whitening, vivid, cool, black and white, etc. Additionally, the SDK provides a built-in `built_in_whiten_filter.cube` file for quickly achieving a whitening filter effect. + +**Local audio mixing** + +This version introduces the local audio mixing feature. You can call the `startLocalAudioMixer` method to mix the audio streams from the local microphone, media player, sound card, and remote audio streams into a single audio stream, which can then be published to the channel. When you no longer need audio mixing, you can call the `stopLocalAudioMixer` method to stop local audio mixing. During the mixing process, you can call the `updateLocalAudioMixerConfiguration` method to update the configuration of the audio streams being mixed. + +Example use cases for this feature include: +- By utilizing the local video mixing feature, the associated audio streams of the mixed video streams can be simultaneously captured and published. +- In live streaming scenarios, users can receive audio streams within the channel, mix multiple audio streams locally, and then forward the mixed audio stream to other channels. +- In educational scenarios, teachers can mix the audio from interactions with students locally and then forward the mixed audio stream to other channels. + +**External MediaProjection (Android)** + +This version introduces the `setExternalMediaProjection` method, which allows you to set an external MediaProjection and replace the MediaProjection applied by the SDK. + +If you have the capability to apply for MediaProjection on your own, you can use this feature to achieve more flexible screen capture. + +**EGL context (Android)** + +This version introduces the `setExternalRemoteEglContext` method, which is used to set the EGL context for rendering remote video streams. When using Texture format video data for remote video self-rendering, you can use this method to replace the SDK's default remote EGL context, achieving unified EGL context management. + +**Color space settings** + +This version adds the `colorSpace` parameter to `VideoFrame` and `ExternalVideoFrame`. You can use this parameter to set the color space properties of the video frame. By default, the color space uses Full Range and BT.709 standard configuration. You can flexibly adjust according to your own capture or rendering needs, further enhancing the customization capabilities of video processing. + +**Others** + +- The `onLocalVideoStateChanged` callback adds the `LOCAL_VIDEO_STREAM_REASON_DEVICE_DISCONNECTED` enumeration, indicating that the currently used video capture device has been disconnected (e.g., unplugged). 
(Windows) +- The `MEDIA_DEVICE_STATE_TYPE` adds the `MEDIA_DEVICE_STATE_PLUGGED_IN` enumeration, indicating that the device has been plugged in. (Windows) + +#### Improvements + +**Virtual background algorithm optimization** + +This version upgrades the virtual background algorithm, making the segmentation between the portrait and the background more accurate. There is no background exposure, the body contour of the portrait is complete, and the detail recognition of fingers is significantly improved. Additionally, the edges between the portrait and the background are more stable, reducing edge jumping and flickering in continuous video frames. + +**Snapshot at specified video observation points** + +This version introduces the `takeSnapshot [2/2]` and `takeSnapshotEx [2/2]` methods. You can use the `config` parameter when calling these methods to take snapshots at specified video observation points, such as before encoding, after encoding, or before rendering, to achieve more flexible snapshot effects. + +**Custom audio capture improvements** + +This version adds the `enableAudioProcessing` member parameter to `AudioTrackConfig`, which is used to control whether to enable 3A audio processing for custom audio capture tracks of the `AUDIO_TRACK_DIRECT` type. The default value of this parameter is `false`, meaning that audio processing is not enabled. Users can enable it as needed, enhancing the flexibility of custom audio processing. + +**Other Improvements** + +- In scenarios where Alpha transparency effects are achieved by stitching video frames and Alpha data, the rendering performance on the receiving end has been improved, effectively reducing stuttering and latency. (Android, iOS) +- Optimizes the logic for calling `queryDeviceScore` to obtain device score levels, improving the accuracy of the score results. +- Supports using virtual cameras in YV12 format as video capture devices. (Windows) +- When calling `switchSrc` to switch between live streams or on-demand streams of different resolutions, smooth and seamless switching can be achieved. An automatic retry mechanism has been added in case of switching failures. The SDK will automatically retry 3 times after a failure. If it still fails, the `onPlayerEvent` callback will report the `PLAYER_EVENT_SWITCH_ERROR` event, indicating an error occurred during media resource switching. +- When calling `setPlaybackSpeed` to set the playback speed of an audio file, the minimum supported speed is 0.3x. + +#### Issues fixed + +This version fixes the following issues: + +- When calling `startScreenCaptureByWindowId` to share the screen, the window capture area specified by `regionRect` was inaccurate, resulting in incorrect width and height of the screen sharing window seen by the receiving end. (Windows) +- When the video source type of the sender is in JPEG format, the frame rate on the receiving end occasionally falls below expectations. (Android, iOS) +- During audio and video interaction, after being interrupted by a system call, the user volume reported by the `onAudioVolumeIndication` callback was incorrect. (Android) +- When the receiving end subscribes to the video small stream by default and does not automatically subscribe to any video stream when joining the channel, calling `muteRemoteVideoStream(uid, false)` after joining the channel to resume receiving the video stream results in receiving the video large stream, which is not as expected. 
(Android) +- Occasional errors of not finding system files during audio and video interaction on Windows 7 systems. (Windows) +- When calling `followSystemRecordingDevice` or `followSystemPlaybackDevice` to set the audio capture or playback device used by the SDK to not follow the system default audio playback device, the local audio state callback `onLocalAudioStateChanged` is not triggered when the audio device is removed, which is not as expected. (Windows) +- Occasional instances where the receiving end cannot hear the sender during audio and video interaction. (iOS) +- During audio and video interaction, if the sender's device system version is iOS 17, the receiving end occasionally cannot hear the sender. (iOS) +- In live streaming scenarios, the time taken to reconnect to the live room after the audience end disconnects due to network switching is longer than expected. (iOS) +- No sound when playing online media resources using the media player after the app starts. (iOS) +- Occasional instances of no sound in audio capture after resuming from being interrupted by other system apps during audio and video interaction. (iOS) +- Calling `startAudioMixing [1/2]` and then immediately calling `pauseAudioMixing` to pause the music file playback does not take effect. + ## v4.4.0 This version releases on August xx, 2024. @@ -67,7 +174,7 @@ In this release, the creation and destruction of engine instances use the [`Get` - The `CONTENT_INSPECT_IMAGE_MODERATION` enumeration is added in `CONTENT_INSPECT_TYPE` which means using video moderation extensions from Agora Extensions Marketplace to take video screenshots and upload them. - An optional parameter `serverConfig` is added in `ContentInspectConfig`, which is for server-side configuration related to video screenshot and upload via extensions from Agora Extensions Marketplace. By configuring this parameter, you can integrate multiple third-party moderation extensions and achieve flexible control over extension switches and other features. For more details, please contact [technical support](mailto:support@agora.io). - + Additionally, this version introduces the `enableContentInspectEx` method, which supports simultaneous screenshot and upload of multiple video streams. 6. **ID3D11Texture2D rendering (Windows)** @@ -121,13 +228,13 @@ if (Target.Platform == UnrealTargetPlatform.IOS) -## v4.2.1 +## v4.2.1 -This version was released on October xx, 2023. +This version was released on October xx, 2023. **Features** -The SDK is designed by Agora for developers who use Unreal, aiming to provide a simple and user-friendly real-time interactive solution for Unreal Engine users. +The SDK is designed by Agora for developers who use Unreal, aiming to provide a simple and user-friendly real-time interactive solution for Unreal Engine users. The SDK provides a powerful set of C++ APIs, allowing developers to easily build games or apps with real-time interactive features. It can also leverage the powerful graphics engine functionality of Unreal Engine to add various visual effects and interactive performance to the app, providing users with a more immersive experience. @@ -143,4 +250,4 @@ To integrate real-time engagement functionality into your app using Agora SDK fo Agora provides the open source [example project](https://github.com/AgoraIO-Extensions/Agora-Unreal-RTC-SDK/tree/main/Agora-Unreal-SDK-CPP-Example) on GitHub for your reference. -
If you have already integrated the 3.x version of the SDK and wish to upgrade to this version, please refer to [Migration Guide](https://github.com/AgoraIO-Extensions/Agora-Unreal-RTC-SDK/blob/main/Migration%20Guide.md) for upgrade instructions.
\ No newline at end of file +
If you have already integrated the 3.x version of the SDK and wish to upgrade to this version, please refer to [Migration Guide](https://github.com/AgoraIO-Extensions/Agora-Unreal-RTC-SDK/blob/main/Migration%20Guide.md) for upgrade instructions.
From a15093cfd7143e7a15d0b5386d0600cfff2e63de Mon Sep 17 00:00:00 2001 From: Suri539 Date: Wed, 27 Nov 2024 16:35:23 +0800 Subject: [PATCH 4/6] ++ --- .../en-US/framework/release_flutter_ng.md | 18 +++++++++++------- .../en-US/framework/release_unity_ng.md | 10 +++++++--- 2 files changed, 18 insertions(+), 10 deletions(-) diff --git a/markdown/RTC 4.x/release-notes/en-US/framework/release_flutter_ng.md b/markdown/RTC 4.x/release-notes/en-US/framework/release_flutter_ng.md index 3d177abb911..b577b646680 100644 --- a/markdown/RTC 4.x/release-notes/en-US/framework/release_flutter_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/framework/release_flutter_ng.md @@ -14,9 +14,9 @@ This version includes optimizations to some features, including changes to SDK b 1. **Changes in strong video denoising implementation** - This version adjusts the implementation of strong video denoising. + This version adjusts the implementation of strong video denoising. - The `VideoDenoiserLevel` removes `videoDenoiserLevelStrength`. + The `VideoDenoiserLevel` removes `videoDenoiserLevelStrength`. Instead, after enabling video denoising by calling `setVideoDenoiserOptions`, you can call the `setBeautyEffectOptions` method to enable the beauty skin smoothing feature. Using both together will achieve better video denoising effects. For strong denoising, it is recommended to set the skin smoothing parameters as detailed in `setVideoDenoiserOptions`. @@ -33,7 +33,11 @@ This version includes optimizations to some features, including changes to SDK b - In the `CompressionPreference` enumeration class, a new `preferCompressionAuto` (-1) enumeration is added, replacing the original `preferQuality` (1) as the default value. In this mode, the SDK will automatically choose between `preferLowLatency` or `preferQuality` based on your video scene settings to achieve the best user experience. - In the `DegradationPreference` enumeration class, a new `maintainAuto` (-1) enumeration is added, replacing the original `maintainQuality` (1) as the default value. In this mode, the SDK will automatically choose between `maintainFramerate`, `maintainBalanced`, or `maintainResolution` based on your video scene settings to achieve the optimal overall quality experience (QoE). -4. To distinguish context information in different extension callbacks, this version removes the original extension callbacks and adds corresponding callbacks that contain context information (see the table below). You can identify the extension name, the user ID, and the service provider name through `ExtensionContext` in each callback. +4. **16 KB memory page size** + +Starting from Android 15, the system adds support for 16 KB memory page size, as detailed in [Support 16 KB page sizes](https://developer.android.com/guide/practices/page-sizes). To ensure the stability and performance of the app, starting from this version, the SDK supports 16 KB memory page size, ensuring seamless operation on devices with both 4 KB and 16 KB memory page sizes, enhancing compatibility and preventing crashes. + +5. To distinguish context information in different extension callbacks, this version removes the original extension callbacks and adds corresponding callbacks that contain context information (see the table below). You can identify the extension name, the user ID, and the service provider name through `ExtensionContext` in each callback. 
| Original callback | Current callback | | -------------------- | ------------------------------- | @@ -383,9 +387,9 @@ This release has optimized the implementation of some functions, involving renam - The `localVideoStreamErrorEncodeFailure` enumeration has been changed to `localVideoStreamReasonCodecNotSupport`. 5. **Log encryption behavior changes** - - For security and performance reasons, as of this release, the SDK encrypts logs and no longer supports printing plaintext logs via the console. - + + For security and performance reasons, as of this release, the SDK encrypts logs and no longer supports printing plaintext logs via the console. + Refer to the following solutions for different needs: - If you need to know the API call status, please check the API logs and print the SDK callback logs yourself. - For any other special requirements, please contact [technical support](mailto:support@agora.io) and provide the corresponding encrypted logs. @@ -820,7 +824,7 @@ This release optimizes the APIs for camera and screen capture function. As of v6 **2. Video data acquisition** - The `onCaptureVideoFrame` and `onPreEncodeVideoFrame` callbacks are added with a new parameter called `sourceType`, which is used to indicate the specific video source type. -- The following callbacks are deleted. Get the video source type through the `sourceType` parameter in the `onPreEncodeVideoFrame` and `onCaptureVideoFrame` callbacks. +- The following callbacks are deleted. Get the video source type through the `sourceType` parameter in the `onPreEncodeVideoFrame` and `onCaptureVideoFrame` callbacks. - `onSecondaryPreEncodeCameraVideoFrame` (Windows) - `onScreenCaptureVideoFrame` - `onPreEncodeScreenVideoFrame` diff --git a/markdown/RTC 4.x/release-notes/en-US/framework/release_unity_ng.md b/markdown/RTC 4.x/release-notes/en-US/framework/release_unity_ng.md index 13edb452335..3fed8a8953c 100644 --- a/markdown/RTC 4.x/release-notes/en-US/framework/release_unity_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/framework/release_unity_ng.md @@ -24,9 +24,9 @@ This version includes optimizations to some features, including changes to SDK b 2. **Changes in strong video denoising implementation** - This version adjusts the implementation of strong video denoising. + This version adjusts the implementation of strong video denoising. - The `VIDEO_DENOISER_LEVEL` removes `VIDEO_DENOISER_LEVEL_STRENGTH`. + The `VIDEO_DENOISER_LEVEL` removes `VIDEO_DENOISER_LEVEL_STRENGTH`. Instead, after enabling video denoising by calling `SetVideoDenoiserOptions`, you can call the `SetBeautyEffectOptions` method to enable the beauty skin smoothing feature. Using both together will achieve better video denoising effects. For strong denoising, it is recommended to set the skin smoothing parameters as detailed in `SetVideoDenoiserOptions`. @@ -43,6 +43,10 @@ This version includes optimizations to some features, including changes to SDK b - In the `COMPRESSION_PREFERENCE` enumeration class, a new `PREFER_COMPRESSION_AUTO` (-1) enumeration is added, replacing the original `PREFER_QUALITY` (1) as the default value. In this mode, the SDK will automatically choose between `PREFER_LOW_LATENCY` or `PREFER_QUALITY` based on your video scene settings to achieve the best user experience. - In the `DEGRADATION_PREFERENCE` enumeration class, a new `MAINTAIN_AUTO` (-1) enumeration is added, replacing the original `MAINTAIN_QUALITY` (1) as the default value. 
In this mode, the SDK will automatically choose between `MAINTAIN_FRAMERATE`, `MAINTAIN_BALANCED`, or `MAINTAIN_RESOLUTION` based on your video scene settings to achieve the optimal overall quality experience (QoE). +5. **16 KB memory page size** + + Starting from Android 15, the system adds support for 16 KB memory page size, as detailed in [Support 16 KB page sizes](https://developer.android.com/guide/practices/page-sizes). To ensure the stability and performance of the app, starting from this version, the SDK supports 16 KB memory page size, ensuring seamless operation on devices with both 4 KB and 16 KB memory page sizes, enhancing compatibility and preventing crashes. + #### New features 1. **Live show scenario** @@ -188,7 +192,7 @@ This version includes optimizations to some features, including changes to SDK b 4. **Other improvements** - This version also includes the following improvements: + This version also includes the following improvements: - Optimizes transmission strategy: calling `EnableInstantMediaRendering` no longer impacts the security of the transmission link. - The `LOCAL_VIDEO_STREAM_REASON_SCREEN_CAPTURE_DISPLAY_DISCONNECTED` enumerator is added in `OnLocalVideoStateChanged` callback, indicating that the display used for screen capture has been disconnected. (Windows, macOS) From 43881c75268f5f10976d7f29e4cd3795b1a2a7b0 Mon Sep 17 00:00:00 2001 From: chenmeng Date: Wed, 27 Nov 2024 16:39:16 +0800 Subject: [PATCH 5/6] framework release notes --- en-US/dita/RTC-NG/config/keys-rtc-ng-links.ditamap | 7 +++++++ en-US/dita/RTC-NG/release/release_notes.dita | 12 ++++++------ .../en-US/framework/release_bluerprint_ng.md | 6 ++++++ .../en-US/framework/release_unreal_ng.md | 4 ++++ 4 files changed, 23 insertions(+), 6 deletions(-) diff --git a/en-US/dita/RTC-NG/config/keys-rtc-ng-links.ditamap b/en-US/dita/RTC-NG/config/keys-rtc-ng-links.ditamap index e7dd974544d..c9a49400338 100644 --- a/en-US/dita/RTC-NG/config/keys-rtc-ng-links.ditamap +++ b/en-US/dita/RTC-NG/config/keys-rtc-ng-links.ditamap @@ -78,5 +78,12 @@ + + + + Support 16 KB page sizes + + + diff --git a/en-US/dita/RTC-NG/release/release_notes.dita b/en-US/dita/RTC-NG/release/release_notes.dita index 360b3277eba..1698ea69b60 100644 --- a/en-US/dita/RTC-NG/release/release_notes.dita +++ b/en-US/dita/RTC-NG/release/release_notes.dita @@ -86,7 +86,7 @@
  • In the DEGRADATION_PREFERENCE enumeration class, a new MAINTAIN_AUTO (-1) enumeration is added, replacing the original MAINTAIN_QUALITY (1) as the default value. In this mode, the SDK will automatically choose between MAINTAIN_FRAMERATE, MAINTAIN_BALANCED, or MAINTAIN_RESOLUTION based on your video scene settings to achieve the optimal overall quality experience (QoE).
  • -
  • 16 KB memory page size (Android) +
  • 16 KB memory page size (Android)

    Starting from Android 15, the system adds support for 16 KB memory page size, as detailed in Support 16 KB page sizes. To ensure the stability and performance of the app, starting from this version, the SDK supports 16 KB memory page size, ensuring seamless operation on devices with both 4 KB and 16 KB memory page sizes, enhancing compatibility and preventing crashes.

  • @@ -119,7 +119,7 @@
  • Filter effects

    This version introduces the setFilterEffectOptions method. You can pass a cube map file (.cube) in the config parameter to achieve custom filter effects such as whitening, vivid, cool, black and white, etc. Additionally, the SDK provides a built-in built_in_whiten_filter.cube file for quickly achieving a whitening filter effect.

  • -
  • Local audio mixing +
  • Local audio mixing

    This version introduces the local audio mixing feature. You can call the startLocalAudioMixer method to mix the audio streams from the local microphone, media player, sound card, and remote audio streams into a single audio stream, which can then be published to the channel. When you no longer need audio mixing, you can call the stopLocalAudioMixer method to stop local audio mixing. During the mixing process, you can call the updateLocalAudioMixerConfiguration method to update the configuration of the audio streams being mixed.

    Example use cases for this feature include:

      @@ -136,7 +136,7 @@
    • EGL context (Android)

      This version introduces the setExternalRemoteEglContext method, which is used to set the EGL context for rendering remote video streams. When using Texture format video data for remote video self-rendering, you can use this method to replace the SDK's default remote EGL context, achieving unified EGL context management.

    • -
    • Color space settings +
    • Color space settings

      This version adds the colorSpace parameter to VideoFrame and ExternalVideoFrame. You can use this parameter to set the color space properties of the video frame. By default, the color space uses Full Range and BT.709 standard configuration. You can flexibly adjust according to your own capture or rendering needs, further enhancing the customization capabilities of video processing.

    • Others @@ -161,15 +161,15 @@
    • Snapshot at specified video observation points

      This version introduces the takeSnapshot [2/2] and takeSnapshotEx [2/2] methods. You can use the config parameter when calling these methods to take snapshots at specified video observation points, such as before encoding, after encoding, or before rendering, to achieve more flexible snapshot effects.

    • -
    • Custom audio capture improvements +
    • Custom audio capture improvements

      This version adds the enableAudioProcessing member parameter to AudioTrackConfig, which is used to control whether to enable 3A audio processing for custom audio capture tracks of the AUDIO_TRACK_DIRECT type. The default value of this parameter is false, meaning that audio processing is not enabled. Users can enable it as needed, enhancing the flexibility of custom audio processing.

    • Other Improvements
        -
      • In scenarios where Alpha transparency effects are achieved by stitching video frames and Alpha data, the rendering performance on the receiving end has been improved, effectively reducing stuttering and latency. (Android, iOS)
      • +
      • In scenarios where Alpha transparency effects are achieved by stitching video frames and Alpha data, the rendering performance on the receiving end has been improved, effectively reducing stuttering and latency. (Android, iOS)
      • Optimizes the logic for calling queryDeviceScore to obtain device score levels, improving the accuracy of the score results.
      • Supports using virtual cameras in YV12 format as video capture devices. (Windows)
      • -
      • After calling to disable local audio capture within the channel, the mute side button on the phone can be used to mute the background sound effects played by the app. (iOS)
      • +
      • After calling enableLocalAudio to disable local audio capture within the channel, the mute side button on the phone can be used to mute the background sound effects played by the app. (iOS)
      • When calling switchSrc to switch between live streams or on-demand streams of different resolutions, smooth and seamless switching can be achieved. An automatic retry mechanism has been added in case of switching failures. The SDK will automatically retry 3 times after a failure. If it still fails, the onPlayerEvent callback will report the PLAYER_EVENT_SWITCH_ERROR event, indicating an error occurred during media resource switching.
      • When calling setPlaybackSpeed to set the playback speed of an audio file, the minimum supported speed is 0.3x.
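For reference, a minimal sketch of one way an app might act on the device-score improvement above: map the score returned by queryDeviceScore to a rendering frame-rate cap and pass it to the new setRemoteRenderTargetFps method introduced in this release. The 0–100 score range and the exact native C++ signatures are assumptions to be checked against your platform's API reference.

```cpp
// Illustrative sketch only. Assumes queryDeviceScore() returns a 0-100 score
// (higher = more capable) and that setRemoteRenderTargetFps(int) is available
// as described in these notes; verify both against your platform's API docs.
int PickRemoteRenderTargetFps(int deviceScore) {
  if (deviceScore >= 75) return 30;   // capable device: keep the full render rate
  if (deviceScore >= 50) return 24;   // mid-range device: modest cap
  return 15;                          // low-end device: trade frame rate for CPU headroom
}

// Usage (rtcEngine obtained elsewhere):
//   int score = rtcEngine->queryDeviceScore();
//   rtcEngine->setRemoteRenderTargetFps(PickRemoteRenderTargetFps(score));
```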
      diff --git a/markdown/RTC 4.x/release-notes/en-US/framework/release_bluerprint_ng.md b/markdown/RTC 4.x/release-notes/en-US/framework/release_bluerprint_ng.md index 9847741149a..f6ffaa32717 100644 --- a/markdown/RTC 4.x/release-notes/en-US/framework/release_bluerprint_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/framework/release_bluerprint_ng.md @@ -31,6 +31,12 @@ To enhance the user’s video interaction experience, this version optimizes the - In the `ECOMPRESSION_PREFERENCE` enumeration class, a new `PREFER_COMPRESSION_AUTO (-1)` enumeration is added, replacing the original `PREFER_QUALITY (1)` as the default value. In this mode, the SDK will automatically choose between `PREFER_LOW_LATENCY` or `PREFER_QUALITY` based on your video scene settings to achieve the best user experience. - In the `EUABT_DEGRADATION_PREFERENCE` enumeration class, a new `MAINTAIN_AUTO (-1)` enumeration is added, replacing the original `MAINTAIN_QUALITY (1)` as the default value. In this mode, the SDK will automatically choose between `MAINTAIN_FRAMERATE`, `MAINTAIN_BALANCED`, or `MAINTAIN_RESOLUTION` based on your video scene settings to achieve the optimal overall quality experience (QoE). + +**16 KB memory page size (Android)** + +Starting from Android 15, the system adds support for 16 KB memory page size, as detailed in [Support 16 KB page sizes](https://developer.android.com/guide/practices/page-sizes). To ensure the stability and performance of the app, starting from this version, the SDK supports 16 KB memory page size, ensuring seamless operation on devices with both 4 KB and 16 KB memory page sizes, enhancing compatibility and preventing crashes. + + #### New features **Live show scenario** diff --git a/markdown/RTC 4.x/release-notes/en-US/framework/release_unreal_ng.md b/markdown/RTC 4.x/release-notes/en-US/framework/release_unreal_ng.md index 240658f7327..3ccda305fbe 100644 --- a/markdown/RTC 4.x/release-notes/en-US/framework/release_unreal_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/framework/release_unreal_ng.md @@ -27,6 +27,10 @@ Additionally, due to this adjustment, to achieve the best low-light enhancement In previous versions, when the camera was unplugged and replugged, the `onVideoDeviceStateChanged` callback would report the device status as `MEDIA_DEVICE_STATE_ACTIVE(1)` (device in use). Starting from this version, after the camera is replugged, the device status will change to `MEDIA_DEVICE_STATE_IDLE(0)` (device ready). +**16 KB memory page size (Android)** + +Starting from Android 15, the system adds support for 16 KB memory page size, as detailed in [Support 16 KB page sizes](https://developer.android.com/guide/practices/page-sizes). To ensure the stability and performance of the app, starting from this version, the SDK supports 16 KB memory page size, ensuring seamless operation on devices with both 4 KB and 16 KB memory page sizes, enhancing compatibility and preventing crashes. 
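For apps that track capture devices, the camera plug and unplug change described above may require a small adjustment in state handling. The following is an illustrative native C++ style sketch (the Unreal plugin exposes an equivalent onVideoDeviceStateChanged callback); the event-handler base class, header name, and exact signature are assumptions to be verified against your SDK package.

```cpp
#include "IAgoraRtcEngine.h"  // assumed header; adjust to your integration

// Only the relevant callback is overridden; the base class is assumed to
// provide empty default implementations for the other events.
class MyDeviceStateHandler : public agora::rtc::IRtcEngineEventHandler {
public:
  void onVideoDeviceStateChanged(const char* deviceId, int deviceType,
                                 int deviceState) override {
    // Before 4.5.0 a re-plugged camera reported 1 (MEDIA_DEVICE_STATE_ACTIVE,
    // "device in use"); from 4.5.0 it reports 0 (MEDIA_DEVICE_STATE_IDLE,
    // "device ready"), so treat 0 as "available again".
    if (deviceState == 0 /* MEDIA_DEVICE_STATE_IDLE */) {
      // For example, re-select the device through the video device manager
      // if it was the camera in use before it was unplugged.
    }
    (void)deviceId;
    (void)deviceType;
  }
};
```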
+ **Changes in video encoding preferences** To enhance the user’s video interaction experience, this version optimizes the default preferences for video encoding: From 424f528a931f68be60d5ea5750578b14b637b82b Mon Sep 17 00:00:00 2001 From: Nero-Hu Date: Wed, 27 Nov 2024 18:21:55 +0800 Subject: [PATCH 6/6] RN/Electron: update en release notes --- .../en-US/framework/release_electron_ng.md | 95 ++++++++++++++- .../en-US/framework/release_rn_ng.md | 108 ++++++++++++++++++ 2 files changed, 202 insertions(+), 1 deletion(-) diff --git a/markdown/RTC 4.x/release-notes/en-US/framework/release_electron_ng.md b/markdown/RTC 4.x/release-notes/en-US/framework/release_electron_ng.md index 6223d4c2282..43685b145da 100644 --- a/markdown/RTC 4.x/release-notes/en-US/framework/release_electron_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/framework/release_electron_ng.md @@ -1,3 +1,96 @@ +## v4.5.0 + +This version was released on November x, 2024. + +#### Compatibility changes + +This version includes optimizations to some features, including changes to SDK behavior, API renaming and deletion. To ensure normal operation of the project, update the code in the app after upgrading to this release. + +1. **Changes in strong video denoising implementation** + + This version adjusts the implementation of strong video denoising. + + The `VideoDenoiserLevel` removes `VideoDenoiserLevelStrength`. + + Instead, after enabling video denoising by calling `setVideoDenoiserOptions`, you can call the `setBeautyEffectOptions` method to enable the beauty skin smoothing feature. Using both together will achieve better video denoising effects. For strong denoising, it is recommended to set the skin smoothing parameters as detailed in `setVideoDenoiserOptions`. + + Additionally, due to this adjustment, to achieve the best low-light enhancement effect with a focus on image quality, you need to enable video denoising first and use specific settings as detailed in `setLowlightEnhanceOptions`. + +2. **Changes in camera plug and unplug status** + + In previous versions, when the camera was unplugged and replugged, the `onVideoDeviceStateChanged` callback would report the device status as MediaDeviceStateActive(1) (device in use). Starting from this version, after the camera is replugged, the device status will change to `MediaDeviceStateIdle`(0) (device ready). + +3. **Changes in video encoding preferences** + + To enhance the user’s video interaction experience, this version optimizes the default preferences for video encoding: + + - In the `CompressionPreference` enumeration class, a new `PreferCompressionAuto` (-1) enumeration is added, replacing the original `PreferQuality` (1) as the default value. In this mode, the SDK will automatically choose between `PreferLowLatency` or `PreferQuality` based on your video scene settings to achieve the best user experience. + - In the `DegradationPreference` enumeration class, a new `MaintainAuto` (-1) enumeration is added, replacing the original `MaintainQuality` (1) as the default value. In this mode, the SDK will automatically choose between `MaintainFramerate`, `MaintainBalanced`, or `MaintainResolution` based on your video scene settings to achieve the optimal overall quality experience (QoE). + +#### New features + +1. **Live show scenario** + + This version adds the `ApplicationScenarioLiveshow`(3) (Live Show) enumeration to the `VideoApplicationScenarioType`. You can call `setVideoScenario` to set the video business scenario to show room. 
In this scenario, fast video rendering and high image quality are crucial. The SDK implements several performance optimizations, such as enabling accelerated audio and video frame rendering to minimize first-frame latency (no need to call `enableInstantMediaRendering`) to achieve better image quality and bandwidth efficiency. + +2. **Maximum frame rate for video rendering** + + This version adds the `setLocalRenderTargetFps` and `setRemoteRenderTargetFps` methods, which support setting the maximum frame rate for video rendering locally and remotely. The actual frame rate for video rendering by the SDK will be as close to this value as possible. + + In scenarios where the frame rate requirement for video rendering is not high (e.g., screen sharing, online education) or when the remote end uses mid-to-low-end devices, you can use this set of methods to limit the video rendering frame rate, thereby reducing CPU consumption and improving system performance. + +3. **Filter effects** + + This version introduces the `setFilterEffectOptions` method. You can pass a cube map file (.cube) in the `config` parameter to achieve custom filter effects such as whitening, vivid, cool, black and white, etc. Additionally, the SDK provides a built-in `built_in_whiten_filter.cube` file for quickly achieving a whitening filter effect. + +4. **Local audio mixing** + + This version introduces the local audio mixing feature. You can call the `startLocalAudioMixer` method to mix the audio streams from the local microphone, media player, sound card, and remote audio streams into a single audio stream, which can then be published to the channel. When you no longer need audio mixing, you can call the `stopLocalAudioMixer` method to stop local audio mixing. During the mixing process, you can call the `updateLocalAudioMixerConfiguration` method to update the configuration of the audio streams being mixed. + + Example use cases for this feature include: + + - By utilizing the local video mixing feature, the associated audio streams of the mixed video streams can be simultaneously captured and published. + - In live streaming scenarios, users can receive audio streams within the channel, mix multiple audio streams locally, and then forward the mixed audio stream to other channels. + - In educational scenarios, teachers can mix the audio from interactions with students locally and then forward the mixed audio stream to other channels. + +5. **Color space settings** + + This version adds the `colorSpace` parameter to `VideoFrame` and `ExternalVideoFrame`. You can use this parameter to set the color space properties of the video frame. By default, the color space uses Full Range and BT.709 standard configuration. You can flexibly adjust according to your own capture or rendering needs, further enhancing the customization capabilities of video processing. + +6. **Others** + + - `onLocalVideoStateChanged` callback adds the `LocalVideoStreamReasonDeviceDisconnected` enumeration, indicating that the currently used video capture device has been disconnected (e.g., unplugged). (Windows) + - `MediaDeviceStateType` adds the `MediaDeviceStatePluggedIn` enumeration, indicating that the device has been plugged in. (Windows) + +#### Improvements + +1. **Virtual background algorithm optimization** + + This version upgrades the virtual background algorithm, making the segmentation between the portrait and the background more accurate. 
There is no background exposure, the body contour of the portrait is complete, and the detail recognition of fingers is significantly improved. Additionally, the edges between the portrait and the background are more stable, reducing edge jumping and flickering in continuous video frames. + +2. **Snapshot at specified video observation points** + + This version introduces the `takeSnapshotWithConfig` and `takeSnapshotWithConfigEx` methods. You can use the `config` parameter when calling these methods to take snapshots at specified video observation points, such as before encoding, after encoding, or before rendering, to achieve more flexible snapshot effects. + +3. **Custom audio capture improvements** + + This version adds the `enableAudioProcessing` member parameter to `AudioTrackConfig`, which is used to control whether to enable 3A audio processing for custom audio capture tracks of the `AUDIO_TRACK_DIRECT` type. The default value of this parameter is `false`, meaning that audio processing is not enabled. Users can enable it as needed, enhancing the flexibility of custom audio processing. + +4. **Other Improvements** + + - Optimizes the logic for calling `queryDeviceScore` to obtain device score levels, improving the accuracy of the score results. + - Supports using virtual cameras in YV12 format as video capture devices. (Windows) + - When calling `switchSrc` to switch between live streams or on-demand streams of different resolutions, smooth and seamless switching can be achieved. An automatic retry mechanism has been added in case of switching failures. The SDK will automatically retry 3 times after a failure. If it still fails, the `onPlayerEvent` callback will report the `PlayerEventSwitchError` event, indicating an error occurred during media resource switching. + - When calling `setPlaybackSpeed` to set the playback speed of an audio file, the minimum supported speed is 0.3x. + +#### Issues fixed + +This version fixes the following issues: + +- When calling `startScreenCaptureByWindowId` to share the screen, the window capture area specified by `regionRect` was inaccurate, resulting in incorrect width and height of the screen sharing window seen by the receiving end. (Windows) +- Occasional errors of not finding system files during audio and video interaction on Windows 7 systems. (Windows) +- When calling `followSystemRecordingDevice` or `followSystemPlaybackDevice` to set the audio capture or playback device used by the SDK to not follow the system default audio playback device, the local audio state callback `onLocalAudioStateChanged` is not triggered when the audio device is removed, which is not as expected. (Windows) + ## v4.4.0 This version was released on July x, 2024. @@ -51,7 +144,7 @@ This version includes optimizations to some features, including changes to SDK b This version also includes the following improvements: - Optimizes transmission strategy: calling `enableInstantMediaRendering` no longer impacts the security of the transmission link. - - The ``enumerator is added in `onLocalVideoStateChanged` callback, indicating that the display used for screen capture has been disconnected. + - The `LocalVideoStreamReasonDeviceDisconnected` enumerator is added in `onLocalVideoStateChanged` callback, indicating that the display used for screen capture has been disconnected. - Optimizes the video link for window sharing, reducing CPU usage. (macOS) - Improves echo cancellation for screen sharing scenarios. 
- Adds the `channelId` parameter to `Metadata`, which is used to get the channel name from which the metadata is sent. diff --git a/markdown/RTC 4.x/release-notes/en-US/framework/release_rn_ng.md b/markdown/RTC 4.x/release-notes/en-US/framework/release_rn_ng.md index 9180442b02f..6fd439d7181 100644 --- a/markdown/RTC 4.x/release-notes/en-US/framework/release_rn_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/framework/release_rn_ng.md @@ -8,6 +8,114 @@ On Android 14 devices (such as OnePlus 11), screen sharing may not be available AirPods Pro does not support A2DP protocol in communication audio mode, which may lead to connection failure in that mode. +## v4.5.0 + +This version was released on November x, 2024. + +#### Compatibility changes + +This version includes optimizations to some features, including changes to SDK behavior, API renaming and deletion. To ensure normal operation of the project, update the code in the app after upgrading to this release. + +1. **Changes in strong video denoising implementation** + + This version adjusts the implementation of strong video denoising. + + The `VideoDenoiserLevel` removes `VideoDenoiserLevelStrength`. + + Instead, after enabling video denoising by calling `setVideoDenoiserOptions`, you can call the `setBeautyEffectOptions` method to enable the beauty skin smoothing feature. Using both together will achieve better video denoising effects. For strong denoising, it is recommended to set the skin smoothing parameters as detailed in `setVideoDenoiserOptions`. + + Additionally, due to this adjustment, to achieve the best low-light enhancement effect with a focus on image quality, you need to enable video denoising first and use specific settings as detailed in `setLowlightEnhanceOptions`. + +2. **Changes in video encoding preferences** + + To enhance the user’s video interaction experience, this version optimizes the default preferences for video encoding: + + - In the `CompressionPreference` enumeration class, a new `PreferCompressionAuto` (-1) enumeration is added, replacing the original `PreferQuality` (1) as the default value. In this mode, the SDK will automatically choose between `PreferLowLatency` or `PreferQuality` based on your video scene settings to achieve the best user experience. + - In the `DegradationPreference` enumeration class, a new `MaintainAuto` (-1) enumeration is added, replacing the original `MaintainQuality` (1) as the default value. In this mode, the SDK will automatically choose between `MaintainFramerate`, `MaintainBalanced`, or `MaintainResolution` based on your video scene settings to achieve the optimal overall quality experience (QoE). + +3. **16 KB memory page size (Android)** + + Starting from Android 15, the system adds support for 16 KB memory page size, as detailed in [Support 16 KB page sizes](https://developer.android.com/guide/practices/page-sizes). To ensure the stability and performance of the app, starting from this version, the SDK supports 16 KB memory page size, ensuring seamless operation on devices with both 4 KB and 16 KB memory page sizes, enhancing compatibility and preventing crashes. + +#### New features + +1. **Live show scenario** + + This version adds the `ApplicationScenarioLiveshow`(3) (Live Show) enumeration to the `VideoApplicationScenarioType`. You can call `setVideoScenario` to set the video business scenario to show room. In this scenario, fast video rendering and high image quality are crucial. 
The SDK implements several performance optimizations, such as enabling accelerated audio and video frame rendering to minimize first-frame latency (no need to call `enableInstantMediaRendering`) to achieve better image quality and bandwidth efficiency. + +2. **Maximum frame rate for video rendering** + + This version adds the `setLocalRenderTargetFps` and `setRemoteRenderTargetFps` methods, which support setting the maximum frame rate for video rendering locally and remotely. The actual frame rate for video rendering by the SDK will be as close to this value as possible. + + In scenarios where the frame rate requirement for video rendering is not high (e.g., screen sharing, online education) or when the remote end uses mid-to-low-end devices, you can use this set of methods to limit the video rendering frame rate, thereby reducing CPU consumption and improving system performance. + +3. **Filter effects** + + This version introduces the `setFilterEffectOptions` method. You can pass a cube map file (.cube) in the `config` parameter to achieve custom filter effects such as whitening, vivid, cool, black and white, etc. Additionally, the SDK provides a built-in `built_in_whiten_filter.cube` file for quickly achieving a whitening filter effect. + +4. **Local audio mixing** + + This version introduces the local audio mixing feature. You can call the `startLocalAudioMixer` method to mix the audio streams from the local microphone, media player, sound card, and remote audio streams into a single audio stream, which can then be published to the channel. When you no longer need audio mixing, you can call the `stopLocalAudioMixer` method to stop local audio mixing. During the mixing process, you can call the `updateLocalAudioMixerConfiguration` method to update the configuration of the audio streams being mixed. + + Example use cases for this feature include: + + - By utilizing the local video mixing feature, the associated audio streams of the mixed video streams can be simultaneously captured and published. + - In live streaming scenarios, users can receive audio streams within the channel, mix multiple audio streams locally, and then forward the mixed audio stream to other channels. + - In educational scenarios, teachers can mix the audio from interactions with students locally and then forward the mixed audio stream to other channels. + +5. **External MediaProjection (Android)** + + This version introduces the `setExternalMediaProjection` method, which allows you to set an external `MediaProjection` and replace the `MediaProjection` applied by the SDK. + + If you have the capability to apply for `MediaProjection` on your own, you can use this feature to achieve more flexible screen capture. + +6. **EGL context (Android)** + + This version introduces the `setExternalRemoteEglContext` method, which is used to set the EGL context for rendering remote video streams. When using Texture format video data for remote video self-rendering, you can use this method to replace the SDK's default remote EGL context, achieving unified EGL context management. + +7. **Color space settings** + + This version adds the **colorSpace** parameter to `VideoFrame` and `ExternalVideoFrame`. You can use this parameter to set the color space properties of the video frame. By default, the color space uses Full Range and BT.709 standard configuration. You can flexibly adjust according to your own capture or rendering needs, further enhancing the customization capabilities of video processing. + +8. 
**Others** + +#### Improvements + +1. **Virtual background algorithm optimization** + + This version upgrades the virtual background algorithm, making the segmentation between the portrait and the background more accurate. There is no background exposure, the body contour of the portrait is complete, and the detail recognition of fingers is significantly improved. Additionally, the edges between the portrait and the background are more stable, reducing edge jumping and flickering in continuous video frames. + +2. **Snapshot at specified video observation points** + + This version introduces the `takeSnapshotWithConfig` and `takeSnapshotWithConfigEx` methods. You can use the `config` parameter when calling these methods to take snapshots at specified video observation points, such as before encoding, after encoding, or before rendering, to achieve more flexible snapshot effects. + +3. **Custom audio capture improvements** + + This version adds the `enableAudioProcessing` member parameter to `AudioTrackConfig`, which is used to control whether to enable 3A audio processing for custom audio capture tracks of the `AUDIO_TRACK_DIRECT` type. The default value of this parameter is `false`, meaning that audio processing is not enabled. Users can enable it as needed, enhancing the flexibility of custom audio processing. + +4. **Other Improvements** + + - In scenarios where Alpha transparency effects are achieved by stitching video frames and Alpha data, the rendering performance on the receiving end has been improved, effectively reducing stuttering and latency. + - Optimizes the logic for calling `queryDeviceScore` to obtain device score levels, improving the accuracy of the score results. + - After calling `enableLocalAudio` to disable local audio capture within the channel, the mute side button on the phone can be used to mute the background sound effects played by the app. (iOS) + - When calling `switchSrc` to switch between live streams or on-demand streams of different resolutions, smooth and seamless switching can be achieved. An automatic retry mechanism has been added in case of switching failures. The SDK will automatically retry 3 times after a failure. If it still fails, the `onPlayerEvent` callback will report the `PlayerEventSwitchError` event, indicating an error occurred during media resource switching. + - When calling `setPlaybackSpeed` to set the playback speed of an audio file, the minimum supported speed is 0.3x. + +#### Issues fixed + +This version fixes the following issues: + +- When the video source type of the sender is in JPEG format, the frame rate on the receiving end occasionally falls below expectations. +- During audio and video interaction, after being interrupted by a system call, the user volume reported by the `onAudioVolumeIndication` callback was incorrect. (Android) +- When the receiving end subscribes to the video small stream by default and does not automatically subscribe to any video stream when joining the channel, calling `muteRemoteVideoStream(uid, false)` after joining the channel to resume receiving the video stream results in receiving the video large stream, which is not as expected. (Android) +- Occasional instances where the receiving end cannot hear the sender during audio and video interaction. (iOS) +- During audio and video interaction, if the sender's device system version is iOS 17, the receiving end occasionally cannot hear the sender. 
(iOS) +- In live streaming scenarios, the time taken to reconnect to the live room after the audience end disconnects due to network switching is longer than expected. (iOS) +- No sound when playing online media resources using the media player after the app starts. (iOS) +- Occasional instances of no sound in audio capture after resuming from being interrupted by other system apps during audio and video interaction. (iOS) +- Calling `startAudioMixing` and then immediately calling `pauseAudioMixing` to pause the music file playback does not take effect. +- Occasional crashes during audio and video interaction. (Android) + ## v4.4.0 This version was released on July x, 2024.