diff --git a/index.html b/index.html
index 7057205..7c84cc2 100644
--- a/index.html
+++ b/index.html
@@ -1214,6 +1214,179 @@
Some platforms or User Agents may provide built-in support for voice activity detection. Web applications may want to know whether a user is speaking while the microphone is muted, so that an unmute notification can be displayed. For that reason, we extend {{MediaStreamTrack}} with the following properties.
partial dictionary MediaTrackSupportedConstraints {
  boolean voiceActivityDetection = true;
};
partial dictionary MediaTrackCapabilities {
  sequence<boolean> voiceActivityDetection;
};
voiceActivityDetection, of type sequence<{{boolean}}>: If the source does not support voice activity detection, a single false is reported. If the source supports voice activity detection, a list with both true and false is reported. See voiceActivityDetection for additional details.
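The capability rule above can be expressed with a small helper. This is an illustrative sketch, not part of the specification: the helper name is hypothetical, and in a page it would be fed the result of track.getCapabilities().

```javascript
// Hypothetical helper: decide whether a source supports voice activity
// detection from its MediaTrackCapabilities, following the rule above:
// [false] means unsupported, [true, false] means supported.
function supportsVoiceActivityDetection(capabilities) {
  const values = capabilities.voiceActivityDetection || [];
  return values.includes(true);
}
```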
partial dictionary MediaTrackConstraintSet {
  ConstrainBoolean voiceActivityDetection;
};
partial dictionary MediaTrackSettings {
  boolean voiceActivityDetection;
};
The following constrainable properties are defined to apply only to audio {{MediaStreamTrack}} objects:
| Property Name | Values | Notes |
|---|---|---|
| voiceActivityDetection | {{ConstrainBoolean}} | Voice activity detection allows web applications to be notified when voice activity starts. |
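As a sketch of how a page would request the constraint: the nesting follows the MediaTrackConstraintSet extension above, and the getUserMedia call is shown as a comment because it only runs in a page context.

```javascript
// A MediaStreamConstraints object requesting voice activity detection,
// using the MediaTrackConstraintSet member defined above.
const constraints = { audio: { voiceActivityDetection: true } };

// In a page:
//   const stream = await navigator.mediaDevices.getUserMedia(constraints);
//   const [track] = stream.getAudioTracks();
//   // The applied value is reflected in MediaTrackSettings:
//   console.log(track.getSettings().voiceActivityDetection);
```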
partial interface MediaStreamTrack {
  attribute EventHandler onvoiceactivitydetected;
};
Let {{MediaStreamTrack/[[LastVoiceActivityDetectedTimestamp]]}} be an internal slot of the {{MediaStreamTrack}}, initialized to undefined.
The onvoiceactivitydetected attribute is an [=event handler IDL attribute=] for the `onvoiceactivitydetected` [=event handler=], whose [=event handler event type=] is voiceactivitydetected.
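The event handler attribute mirrors addEventListener for the same event type. A minimal sketch, using a plain EventTarget as a stand-in for a MediaStreamTrack so it can run anywhere:

```javascript
// Stand-in for a MediaStreamTrack; only the EventTarget behaviour matters here.
const track = new EventTarget();
let detections = 0;

track.addEventListener("voiceactivitydetected", () => {
  detections += 1;
});

// A User Agent would dispatch this event when voice activity is detected.
track.dispatchEvent(new Event("voiceactivitydetected"));
```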
When the [=User Agent=] detects that voice activity has started in a track's underlying source, the [=User Agent=] MUST run the following steps:

1. If the {{voiceActivityDetection}} setting of track is set to false by the ApplyConstraints algorithm, abort these steps.
2. If track.{{MediaStreamTrack/[[LastVoiceActivityDetectedTimestamp]]}} is not undefined, and {{Performance.now()}} - track.{{MediaStreamTrack/[[LastVoiceActivityDetectedTimestamp]]}} is less than voiceActivityDetectionMinimalInterval, abort these steps.
3. [=Queue a task=] to perform the following steps:
   1. If track.{{MediaStreamTrack/readyState}} is "ended", abort these steps.
   2. [=Fire an event=] named {{voiceactivitydetected}} on track.
   3. Set track.{{MediaStreamTrack/[[LastVoiceActivityDetectedTimestamp]]}} to {{Performance.now()}}.
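The timestamp steps above amount to an interval throttle on event dispatch. A sketch with an injectable clock so the logic can run outside a browser; the 1000 ms default standing in for voiceActivityDetectionMinimalInterval is an assumed value for illustration, and the setting/readyState checks are omitted:

```javascript
// Sketch of the dispatch throttle above. fireEvent and now are injected;
// minimalInterval stands in for voiceActivityDetectionMinimalInterval
// (1000 ms is an assumed value, not taken from the specification).
function makeVoiceActivityNotifier(fireEvent, now, minimalInterval = 1000) {
  let lastTimestamp; // [[LastVoiceActivityDetectedTimestamp]], initially undefined
  return function onVoiceActivityDetected() {
    if (lastTimestamp !== undefined && now() - lastTimestamp < minimalInterval) {
      return; // Less than the minimal interval since the last event: abort.
    }
    fireEvent();
    lastTimestamp = now();
  };
}
```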
<script>
// Open the microphone with voice activity detection enabled.
const stream = await navigator.mediaDevices.getUserMedia({
  audio: { voiceActivityDetection: true }
});
const [audioTrack] = stream.getAudioTracks();

audioTrack.addEventListener("voiceactivitydetected", () => {
  if (audioTrack.muted) {
    // Show an unmute notification.
  }
});
</script>