Siren detector and alert device for use by hard of hearing and deaf drivers in noisy, urban environments

Gabesarch/Driver-Alert

Emergency Driver Alert (EVA) software repository

This project was led by Gabriel Sarch, Sylvester Benson-Sesay, and Phuc Do as part of the University of Rochester Biomedical Engineering Senior Design Project.

The problem was pitched to us by Marlene Sutliff and Steven Barnett of the UR Community/Deaf Wellness Center, and by Dan Brooks, President of the HLAA NYS Association.

Problem statement: There is a need to ensure that drivers are alerted to approaching emergency vehicles so that they can quickly and safely move out of the vehicle's path. Identifying emergency signals is especially challenging for deaf, hard of hearing, and distracted drivers, which puts them at increased risk of colliding with emergency vehicles. The focus of this project is to develop an in-car device that detects emergency vehicles and notifies the driver of their presence in real time.

The code in this repository can be used to train and deploy a real-time siren detector.

About the detector

The detector uses a standard convolutional neural network (CNN) to detect sirens amid urban and in-car noise.

Video of the working detector (linked from the thumbnail image on the repository page):

CNN architecture:

  • Simple convolutional architecture able to fit on a small device (can run on CPU or small GPU)
  • Output: probabilities for two classes, "siren present" and "siren not present"
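The README does not spell out the exact layers, so the following is only a minimal PyTorch sketch of a CNN of this kind — the layer counts, channel widths, and the 40 MFCCs × 130 frames input shape are illustrative assumptions, not the repository's actual architecture:

```python
import torch
import torch.nn as nn

class SirenCNN(nn.Module):
    """Small two-class CNN over an MFCC "image" of shape (1, n_mfcc, n_frames)."""

    def __init__(self, n_mfcc: int = 40, n_frames: int = 130):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Two pooling stages shrink each spatial dimension by a factor of 4.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_mfcc // 4) * (n_frames // 4), 2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = SirenCNN()
logits = model(torch.randn(1, 1, 40, 130))   # one 3-second MFCC chunk
probs = torch.softmax(logits, dim=1)         # P("siren present"), P("siren not present")
```

A network this size has on the order of tens of thousands of parameters, which is why it can run comfortably on a CPU or a small GPU.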

Training data

  • The training data is taken from the UrbanSound8K dataset (https://urbansounddataset.weebly.com/urbansound8k.html), YouTube, and field recordings made while driving in Rochester, NY
  • The CNN is trained on 3-second audio chunks
  • Mel-frequency cepstral coefficients (MFCCs) are extracted from each chunk and used as input to the CNN
  • Siren audio and background noise are randomly shuffled before each training batch
  • Sirens are mixed with background noise at a range of signal-to-noise ratios (SNRs) to improve generalization
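The SNR-mixing step above can be sketched in a few lines of numpy. This is a hypothetical helper, not the repository's actual code, and the 900 Hz tone is only a stand-in for real siren audio (MFCC extraction itself would typically be done with a library such as librosa):

```python
import numpy as np

def mix_at_snr(siren: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the siren-to-noise power ratio equals `snr_db`, then mix."""
    p_siren = np.mean(siren ** 2)
    p_noise = np.mean(noise ** 2)
    # Gain applied to the noise to hit the requested SNR.
    gain = np.sqrt(p_siren / (p_noise * 10.0 ** (snr_db / 10.0)))
    return siren + gain * noise

# Stand-in 3-second signals at a 22,050 Hz sample rate.
sr = 22050
t = np.linspace(0.0, 3.0, 3 * sr, endpoint=False)
siren = np.sin(2 * np.pi * 900.0 * t)                 # placeholder tone, not a real siren
noise = np.random.default_rng(0).standard_normal(siren.size)

mixed = mix_at_snr(siren, noise, snr_db=-5.0)         # noise 5 dB louder than the siren
```

Sampling `snr_db` randomly per training example (e.g. uniformly over a range like -10 to +20 dB) is a common way to expose the network to both easy and hard mixtures.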

Files in repo

There are four main files:

  1. convertWav2Txt and generateTrainingData format the collected audio, split the data into chunks, and save it as a numpy array
  2. trainSirenDetector sets up the CNN architecture and trains it
  3. Real-Time Siren Detector is the real-time detector, usable with any microphone

The Models/ folder contains our trained models, which can be used by the Real-Time Siren Detector. See the CNN architecture section above for more info.
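A real-time detector of this kind typically keeps a rolling buffer of the most recent 3 seconds of microphone audio and classifies it on every hop. The following is a minimal sketch of that buffering only — the sample rate, block size, and the idea of feeding it from a `sounddevice`-style callback are assumptions, not the repository's actual implementation:

```python
import numpy as np
from collections import deque

SR = 22050                       # assumed sample rate
WINDOW = 3 * SR                  # 3-second analysis window, matching the training chunks

buffer = deque(maxlen=WINDOW)    # old samples fall off the front automatically

def push_audio(samples: np.ndarray) -> None:
    """Call with each new block of microphone samples (e.g. from an audio callback)."""
    buffer.extend(np.asarray(samples, dtype=np.float32))

def current_window() -> np.ndarray:
    """Latest 3 s of audio, left-padded with zeros until the buffer first fills."""
    x = np.array(buffer, dtype=np.float32)
    if x.size < WINDOW:
        x = np.concatenate([np.zeros(WINDOW - x.size, dtype=np.float32), x])
    return x

# Simulate 4 s of audio arriving in 0.5 s blocks; only the last 3 s are retained.
stream = np.arange(4 * SR, dtype=np.float32)
for block in np.split(stream, 8):
    push_audio(block)
window = current_window()        # would be converted to MFCCs and fed to the CNN
```

On each hop, `window` would be turned into MFCCs and passed through the trained model; smoothing the per-window probabilities over a few consecutive windows is a common way to reduce false alarms before triggering the driver alert.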
