iOS Voice Activity Detection (VAD). Supports WebRTC VAD GMM, Silero VAD DNN, Yamnet VAD DNN models.

baochuquan/ios-vad


iOS Voice Activity Detection (VAD)

The iOS VAD library is designed to process audio in real time and identify the presence of human speech in audio samples that contain a mixture of speech and noise. The VAD functionality operates fully offline, performing all processing directly on the mobile device.

The repository offers three distinct models for voice activity detection:

WebRTC VAD [1] is based on a Gaussian Mixture Model (GMM), which is known for its exceptional speed and effectiveness in distinguishing between noise and silence. However, it may demonstrate relatively lower accuracy when differentiating speech from background noise.

Silero VAD [2] is based on a Deep Neural Network (DNN) and uses ONNX Runtime Mobile for execution. It provides exceptional accuracy with processing times close to those of WebRTC VAD.

Yamnet VAD [3] is based on a Deep Neural Network (DNN) and employs the MobileNet_v1 depthwise-separable convolution architecture, executed with the TensorFlow Lite runtime. Yamnet VAD can predict 521 audio event classes (such as speech, music, and animal sounds) and was trained on the AudioSet-YouTube corpus.

WebRTC VAD is lightweight (only 158 KB) and exceptionally fast at processing audio, but it may exhibit lower accuracy than the DNN models. It can be invaluable in scenarios where a small, fast library is needed and sacrificing some accuracy is acceptable. In situations where high accuracy is critical, Silero VAD and Yamnet VAD are preferable. For more detailed insights and a comprehensive comparison between the DNN and GMM approaches, refer to the comparison Silero VAD vs WebRTC VAD.

WebRTC VAD

Parameters

The WebRTC VAD library accepts only 16-bit mono PCM audio streams and works with the following sample rates, frame sizes, and modes.

| Valid Sample Rate | Valid Frame Size |
| ----------------- | ---------------- |
| 8000Hz            | 80, 160, 240     |
| 16000Hz           | 160, 320, 480    |
| 32000Hz           | 320, 640, 960    |
| 48000Hz           | 480, 960, 1440   |

| Valid Mode      |
| --------------- |
| NORMAL          |
| LOW_BITRATE     |
| AGGRESSIVE      |
| VERY_AGGRESSIVE |
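Each valid frame size corresponds to a 10, 20, or 30 ms window at its sample rate, which is how the table above is derived. A quick sanity check (Python, for illustration only):

```python
# Every valid WebRTC VAD frame size is a 10, 20, or 30 ms window
# at its sample rate (samples / rate * 1000 = milliseconds).
valid_frames = {
    8000: [80, 160, 240],
    16000: [160, 320, 480],
    32000: [320, 640, 960],
    48000: [480, 960, 1440],
}

for rate, sizes in valid_frames.items():
    durations_ms = [size * 1000 // rate for size in sizes]
    print(rate, durations_ms)  # every row yields [10, 20, 30]
```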

Recommended parameters for WebRTC VAD:

  • Sample Rate (required) - 16KHz - The sample rate of the audio input.
  • Frame Size (required) - 320 - The frame size of the audio input.
  • Mode (required) - VERY_AGGRESSIVE - The confidence mode of the VAD model.
  • Silence Duration (optional) - 300ms - The minimum duration in milliseconds for silence segments.
  • Speech Duration (optional) - 50ms - The minimum duration in milliseconds for speech segments.

Usage

WebRTC VAD can identify speech in short audio frames, returning results for each frame. By utilizing parameters such as silenceDurationMs and speechDurationMs, you can enhance the capability of VAD, enabling the detection of prolonged utterances while minimizing false positive results during pauses between sentences.

Swift example (a hypothetical sketch; the type and member names below, such as VadWebRTC and isSpeech(frame:), are assumptions and may differ from the actual API):

    // Hypothetical API sketch; actual names may differ.
    let vad = VadWebRTC(sampleRate: .rate16k,      // 16 kHz input
                        frameSize: .size320,       // 320 samples = 20 ms
                        mode: .veryAggressive,
                        silenceDurationMs: 300,
                        speechDurationMs: 50)

    // `frame` is one 320-sample chunk of 16-bit mono PCM audio.
    if vad.isSpeech(frame: frame) {
        print("speech detected")
    }

Objective-C example:

    // TODO

An example of how to detect speech in an audio file:

    // TODO

Silero VAD

Parameters

The Silero VAD library accepts only 16-bit mono PCM audio streams and works with the following sample rates, frame sizes, and modes.

| Valid Sample Rate | Valid Frame Size |
| ----------------- | ---------------- |
| 8000Hz            | 256, 512, 768    |
| 16000Hz           | 512, 1024, 1536  |

| Valid Mode      |
| --------------- |
| OFF             |
| NORMAL          |
| AGGRESSIVE      |
| VERY_AGGRESSIVE |
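Unlike WebRTC VAD's 10/20/30 ms windows, Silero VAD's valid frame sizes correspond to 32, 64, and 96 ms windows at both sample rates. A quick sanity check (Python, for illustration only):

```python
# Every valid Silero VAD frame size is a 32, 64, or 96 ms window
# at its sample rate (samples / rate * 1000 = milliseconds).
valid_frames = {
    8000: [256, 512, 768],
    16000: [512, 1024, 1536],
}

for rate, sizes in valid_frames.items():
    durations_ms = [size * 1000 // rate for size in sizes]
    print(rate, durations_ms)  # every row yields [32, 64, 96]
```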

Recommended parameters for Silero VAD:

  • Sample Rate (required) - 16KHz - The sample rate of the audio input.
  • Frame Size (required) - 512 - The frame size of the audio input.
  • Mode (required) - NORMAL - The confidence mode of the VAD model.
  • Silence Duration (optional) - 300ms - The minimum duration in milliseconds for silence segments.
  • Speech Duration (optional) - 50ms - The minimum duration in milliseconds for speech segments.

Usage

Silero VAD can identify speech in short audio frames, returning results for each frame. By utilizing parameters such as silenceDurationMs and speechDurationMs, you can enhance the capability of VAD, enabling the detection of prolonged utterances while minimizing false positive results during pauses between sentences.

Swift example (a hypothetical sketch; the type and member names below, such as VadSilero and isSpeech(frame:), are assumptions and may differ from the actual API):

    // Hypothetical API sketch; actual names may differ.
    let vad = VadSilero(sampleRate: .rate16k,      // 16 kHz input
                        frameSize: .size512,       // 512 samples = 32 ms
                        mode: .normal,
                        silenceDurationMs: 300,
                        speechDurationMs: 50)

    // `frame` is one 512-sample chunk of 16-bit mono PCM audio.
    if vad.isSpeech(frame: frame) {
        print("speech detected")
    }

Objective-C example:

    // TODO

Yamnet VAD

Parameters

The Yamnet VAD library accepts only 16-bit mono PCM audio streams and works with the following sample rate, frame sizes, and modes.

| Valid Sample Rate | Valid Frame Size   |
| ----------------- | ------------------ |
| 16000Hz           | 243, 487, 731, 975 |

| Valid Mode      |
| --------------- |
| OFF             |
| NORMAL          |
| AGGRESSIVE      |
| VERY_AGGRESSIVE |

Recommended parameters for Yamnet VAD:

  • Sample Rate (required) - 16KHz - The sample rate of the audio input.
  • Frame Size (required) - 243 - The frame size of the audio input.
  • Mode (required) - NORMAL - The confidence mode of the VAD model.
  • Silence Duration (optional) - 30ms - The minimum duration in milliseconds for silence segments.
  • Speech Duration (optional) - 30ms - The minimum duration in milliseconds for speech segments.

Usage

Yamnet VAD can identify 521 audio event classes (such as speech, music, and animal sounds) in small audio frames. By utilizing parameters such as silenceDurationMs and speechDurationMs and specifying a sound category (e.g. classifyAudio("Speech", audioData)), you can enhance the capability of VAD, enabling the detection of prolonged utterances while minimizing false positive results during pauses between sentences.

Swift example (a hypothetical sketch; the type and member names below, such as VadYamnet and classifyAudio, are assumptions and may differ from the actual API):

    // Hypothetical API sketch; actual names may differ.
    let vad = VadYamnet(sampleRate: .rate16k,      // 16 kHz input
                        frameSize: .size243,       // 243 samples per frame
                        mode: .normal,
                        silenceDurationMs: 30,
                        speechDurationMs: 30)

    // `frame` is one 243-sample chunk of 16-bit mono PCM audio;
    // "Speech" is the sound category described in the Usage note above.
    let result = vad.classifyAudio("Speech", frame)

Objective-C example:

    // TODO

Requirements

iOS API

WebRTC VAD - iOS XX and later.
Silero VAD - iOS XX and later.
Yamnet VAD - iOS XX and later.

Download

CocoaPods is the only supported build configuration, so just add the dependency to your project's Podfile:

  1. Add it to your Podfile:

    // TODO

  2. Add one of the dependencies from the list below:

WebRTC VAD

// TODO

Silero VAD

// TODO

Yamnet VAD

// TODO

References

[1] WebRTC VAD - Voice Activity Detector from Google which is reportedly one of the best available: it's fast, modern and free. This algorithm has found wide adoption and has recently become one of the gold-standards for delay-sensitive scenarios like web-based interaction.

[2] Silero VAD - pre-trained enterprise-grade Voice Activity Detector, Number Detector and Language Classifier.

[3] Yamnet VAD - YAMNet is a pretrained deep neural network that predicts 521 audio event classes based on the AudioSet-YouTube corpus, employing the MobileNet_v1 depthwise-separable convolution architecture.


© 2024 Bao Chuquan. MIT License.