This project is a machine learning model trained to interpret sign language from images of hand gestures, each representing a letter of the alphabet. The goal is to make communication with sign language users more efficient and to serve as a learning tool for anyone interested in learning sign language.
To run this application, create a Google Colab account and copy the code into a new notebook. When running the code, follow the bolded instructions and run only the code cells that are necessary. Also, download the dataset from the Google Drive link below and train the model on it: https://drive.google.com/drive/folders/1V3-rvHeRMR9BKh5LOqScg59eKsCp71fF?usp=sharing
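As a rough illustration of what the training cells do, here is a minimal sketch of loading the dataset and training a classifier in Colab. It assumes the Drive folder was downloaded to `/content/dataset` with one subfolder per letter; the paths, image size, and architecture are illustrative assumptions, not the notebook's exact code.

```python
import tensorflow as tf

DATA_DIR = "/content/dataset"   # assumed download location
IMG_SIZE = (64, 64)             # assumed input size; match the notebook's value

# Split the letter-labeled image folders into training and validation sets.
train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="training",
    seed=42, image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="validation",
    seed=42, image_size=IMG_SIZE, batch_size=32)

num_classes = len(train_ds.class_names)

# A small CNN stands in for the notebook's actual architecture.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```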
To test your model once it's trained, run the code cell at the bottom to create a camera plug-in in Google Colab (make sure your device's camera is working). Then click the Capture button that appears, and the model will predict the letter you are holding up.
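For context, a sketch of the prediction step that runs after the camera cell saves a photo. The capture filename, model save path, input size, and A-Z label order are all assumptions here; match them to the values used in the notebook.

```python
import string
import numpy as np
import tensorflow as tf

IMG_SIZE = (64, 64)                                   # assumed; match the notebook
model = tf.keras.models.load_model("sign_model.h5")   # assumed save path
class_names = list(string.ascii_uppercase)            # assumed A-Z label order

# Load the captured frame and add a batch dimension; the sketch model above
# rescales pixel values internally, so raw pixels are fed in here.
img = tf.keras.utils.load_img("photo.jpg", target_size=IMG_SIZE)
arr = tf.keras.utils.img_to_array(img)[np.newaxis, ...]
pred = model.predict(arr)
print("Predicted letter:", class_names[int(np.argmax(pred))])
```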
This project was created by Ashmit Gaba, Ananth Sriram, and Mihir Singh.