Driver distraction has been identified as one of the leading causes of road accidents. Distractions can arise from many sources, such as mobile phone use, drinking, operating in-vehicle controls, applying makeup, and interacting with passengers. The consequences of driver distraction can be severe, resulting in injuries, loss of life, and property damage. To mitigate these risks and enhance road safety, it is crucial to develop effective methods to detect and classify driver distractions in real time.
The aim of our project is to build a machine learning model that uses computer vision techniques to classify driver distractions. By leveraging computer vision, we intend to develop an automated system that accurately identifies and categorizes the types of distraction drivers exhibit during a journey, ultimately enabling real-time detection and alerting mechanisms that help drivers stay focused and reduce distraction-related accidents. Our motivation stems from the potential impact such a system can have on road safety: an efficient and accurate driver distraction detection system contributes to the larger goal of improving transportation safety and reducing human error on the roads. We believe computer vision can play a significant role in addressing this problem, paving the way for more intelligent and proactive driving assistance systems.
In summary, our project aims to address the problem of driver distraction by developing a machine learning model using computer vision techniques.
https://drive.google.com/file/d/1l0gf45ZzkgQ0FgtSwyQJzfDKrlfFrxyh/view?usp=sharing
https://www.kaggle.com/competitions/state-farm-distracted-driver-detection
The dataset contains ten classes: nine distraction classes (texting-right, talking on the phone-right, texting-left, talking on the phone-left, operating the radio, drinking, reaching behind, hair and makeup, and talking to the passenger) plus a safe-driving class. We divided the dataset into 15,200 training images (1,520 per class) and 3,040 test images (304 per class).
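To make the per-class split concrete, the sketch below shows one way it could be reproduced, assuming the Kaggle archive is unpacked into class subfolders (imgs/train/c0 through imgs/train/c9). The folder names, file pattern, and output paths are illustrative assumptions, not the project's actual preprocessing code.

# Minimal sketch of the per-class train/test split described above.
# Paths, counts, and layout below are assumptions for illustration.
import random
import shutil
from pathlib import Path

SOURCE_DIR = Path("imgs/train")   # Kaggle layout: imgs/train/c0 ... imgs/train/c9
OUTPUT_DIR = Path("dataset")      # where the train/ and test/ splits are written
TRAIN_PER_CLASS = 1520            # 1,520 training images per class (15,200 total)
TEST_PER_CLASS = 304              # 304 test images per class (3,040 total)

random.seed(42)  # make the split reproducible

for class_dir in sorted(SOURCE_DIR.iterdir()):
    if not class_dir.is_dir():
        continue
    images = sorted(class_dir.glob("*.jpg"))
    random.shuffle(images)

    splits = {
        "train": images[:TRAIN_PER_CLASS],
        "test": images[TRAIN_PER_CLASS:TRAIN_PER_CLASS + TEST_PER_CLASS],
    }
    for split_name, split_images in splits.items():
        target = OUTPUT_DIR / split_name / class_dir.name
        target.mkdir(parents=True, exist_ok=True)
        for image_path in split_images:
            shutil.copy(image_path, target / image_path.name)

Fixing the random seed keeps the split deterministic, so training and test sets can be regenerated identically when re-running experiments.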