
# SignLanguageModel

## Project Description

This project is a machine learning model trained to interpret sign language from images of hand gestures representing letters of the alphabet. Its goal is to make communication with people who use sign language more efficient, and potentially to serve as a learning tool for anyone interested in learning sign language.

## Installation

To run this application, create a Google Colab account, open a new notebook, and copy the code into it. When running the notebook, follow the bolded instructions and run only the code cells that are necessary. Download the dataset from the Google Drive folder below and train the model on it: https://drive.google.com/drive/folders/1V3-rvHeRMR9BKh5LOqScg59eKsCp71fF?usp=sharing
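If you would rather fetch the dataset programmatically inside Colab, the sketch below shows one way to do it with the `gdown` package, followed by a minimal training setup. This is an illustration, not the notebook's exact code: the local folder name `SignLanguageData`, the 64x64 input size, and the small Keras network are all assumptions to adjust to the actual dataset and notebook.

```python
# Download the shared Drive folder and train a small image classifier on it.
# Assumes the dataset has one subdirectory of images per letter; adjust
# `data_dir`, the image size, and the number of epochs to match reality.
!pip install -q gdown tensorflow
!gdown --folder https://drive.google.com/drive/folders/1V3-rvHeRMR9BKh5LOqScg59eKsCp71fF

import tensorflow as tf

data_dir = "SignLanguageData"  # hypothetical name of the downloaded folder
train_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir, image_size=(64, 64), batch_size=32)

num_classes = len(train_ds.class_names)  # one class per letter
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```

Running this in a fresh Colab cell should leave you with a trained `model` object; the notebook's own cells may organize these steps differently.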

## Testing

To test the model once it is trained, run the code cell at the bottom of the notebook, which creates a camera widget in Google Colab (make sure the camera on your device is working). Click the capture button that appears, and the model will predict the letter you are holding up.
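Colab has no built-in camera API, so camera cells typically use the standard JavaScript photo-capture pattern; a condensed sketch of that flow is below. The `take_photo` helper, the 64x64 resize, and the `model`/`train_ds` names carried over from the training sketch above are assumptions for illustration, not the notebook's exact code.

```python
# Standard Colab pattern: grab one webcam frame via JavaScript, save it,
# then run the trained model on it. `model` and `train_ds` are assumed
# to come from the training sketch in the Installation section.
from base64 import b64decode
from google.colab.output import eval_js
from IPython.display import Javascript, display
import numpy as np
import tensorflow as tf

def take_photo(filename="photo.jpg", quality=0.8):
    display(Javascript("""
      async function takePhoto(quality) {
        const video = document.createElement('video');
        const button = document.createElement('button');
        button.textContent = 'Capture';
        document.body.append(video, button);
        const stream = await navigator.mediaDevices.getUserMedia({video: true});
        video.srcObject = stream;
        await video.play();
        // Wait until the Capture button is clicked.
        await new Promise((resolve) => button.onclick = resolve);
        const canvas = document.createElement('canvas');
        canvas.width = video.videoWidth;
        canvas.height = video.videoHeight;
        canvas.getContext('2d').drawImage(video, 0, 0);
        stream.getVideoTracks()[0].stop();
        video.remove();
        button.remove();
        return canvas.toDataURL('image/jpeg', quality);
      }
    """))
    data = eval_js(f"takePhoto({quality})")
    with open(filename, "wb") as f:
        f.write(b64decode(data.split(",")[1]))
    return filename

# Capture a frame, resize it to the model's input shape, and predict.
path = take_photo()
img = tf.keras.utils.load_img(path, target_size=(64, 64))
batch = np.expand_dims(tf.keras.utils.img_to_array(img), axis=0)
pred = model.predict(batch)
print("Predicted letter:", train_ds.class_names[int(np.argmax(pred))])
```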

## Authors

This project was created by Ashmit Gaba, Ananth Sriram, and Mihir Singh.