View on Youtube : https://youtu.be/acc7wHFKoWU
An Electronic Travel Aid (ETA) for the visually impaired.
Independent mobility is a day-to-day problem for a visually challenged person. It is difficult for them to find a safe path without colliding with overhanging or protruding objects. Project VISION is an attempt to solve this problem using computer vision and deep learning.
The idea is to develop an Electronic Travel Aid (ETA) that acts as artificial eyes for the blind and puts information about any obstacle at the user's fingertips.
The ultimate goal is to make life easier for the visually impaired by helping them move around and cope with the busy world.
Darknet Setup on Jetson Nano
- Clone this repository to the local system, change into the directory, and build:
git clone https://github.com/yogeshiitm/Project-Vision.git
cd Project-Vision
make
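To use the Nano's GPU (and, for the webcam demo below, OpenCV), Darknet is usually built with its CUDA, cuDNN, and OpenCV flags turned on before running make. This is a sketch assuming the stock upstream Darknet Makefile; the exact flags may differ if the build files were modified:

```makefile
# At the top of the Makefile, before running `make`:
GPU=1      # build with CUDA support
CUDNN=1    # use cuDNN for faster convolutions
OPENCV=1   # needed to display detections and read from a webcam
```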
- Download the pre-trained weights:
wget https://pjreddie.com/media/files/yolov3.weights
Then run the object detector.
- On a single image:
./Project-Vision detect cfg/yolov3.cfg yolov3.weights data/dog.jpg
Darknet prints out the objects it detected, their confidences, and how long it took to find them. Since we didn't compile Darknet with OpenCV here, it can't display the detections directly; instead, it saves them to predictions.png.
- On multiple images:
./Project-Vision detect cfg/yolov3.cfg yolov3.weights
Instead of supplying an image on the command line, we can leave it blank. Darknet will then ask for an image path; enter data/horses.jpg to predict boxes for that image. Once it is done, it will prompt for more paths, so you can try different images.
Changing The Detection Threshold
- By default, YOLO only displays objects detected with a confidence of .25 or higher. You can change this by passing the -thresh flag to the command. For example, to display all detections, set the threshold to 0:
./Project-Vision detect cfg/yolov3.cfg yolov3.weights data/dog.jpg -thresh 0
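Conversely, raising the threshold keeps only the confident detections. A hypothetical run that shows only boxes at 50% confidence or above (same config, weights, and sample image as in the earlier examples):

```shell
./Project-Vision detect cfg/yolov3.cfg yolov3.weights data/dog.jpg -thresh 0.5
```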
Real-Time Detection on a Webcam
- To run detection on input from a webcam, we need to compile Darknet with both CUDA and OpenCV, then run:
./Project-Vision detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights
YOLO will display the current FPS and the predicted classes, along with the image with bounding boxes drawn on top.
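If more than one camera is attached, upstream Darknet's demo accepts a -c flag to choose the capture device index (0 is the default); this sketch assumes Project-Vision kept that option from the original darknet code:

```shell
# Read from the second attached camera (index 1) instead of the default (index 0)
./Project-Vision detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights -c 1
```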
YOLO code credit: https://github.com/pjreddie/darknet
Team Sahaay (Social Innovation Club), Centre For Innovation, IIT Madras
- Vinayak Nishant Gudipaty, 2nd year Electrical, IIT Madras
- Yogesh Agarwala, 2nd year Electrical, IIT Madras
- Harshit Raj, 2nd year Mechanical, IIT Madras
- Saroopa G, 2nd year Mechanical, IIT Madras
- Anish Pophale, 1st year Chemical, IIT Madras