This is a 2023 fork of Nikita Kiselov's English-translated fork of the original repository by Kazuhito Takahashi.
- Added Windows key events.
- Added Azure Kinect as the main capture device.
- Kept webcam capture setup commented out for easy transition.
- Simplified the code.
- Refactored the code for easier development (well, for me at least).
- Annotated the code further.
- Added the ability to record more hand signs.
- Added an infrared mode.
- Modified keypoint_classification_EN.ipynb to use more English and to add more layers to the training model.
- ... and other small additions here and there.
The classification doesn't work well on IR images. I'm working on a new model to fix this, so that recognition works in low-light and dark conditions.
- CUDA Toolkit 11.0 (tested, but might work with newer versions)
- cuDNN 8.1.1 (tested, but might work with newer versions)
- Python 3.10.6 (tested, but might work with newer versions)
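A quick way to confirm the installed versions match the ones above (assuming `nvcc` and `python3` are on your PATH; the cuDNN header path shown is a typical Linux location and may differ on your system):

```shell
# Print the Python version (expect 3.10.x or newer)
python3 --version 2>/dev/null || python --version

# Print the CUDA compiler version (expect release 11.0 or newer)
command -v nvcc >/dev/null && nvcc --version || echo "nvcc not found on PATH"

# cuDNN records its version in a header; adjust the path to your install
grep -m1 CUDNN_MAJOR /usr/include/cudnn_version.h 2>/dev/null \
  || echo "cudnn_version.h not found at the default path"
```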
This script is for inference and data collection.
This is a model training script for hand sign recognition.
This is a model training script for finger gesture recognition.
Open "keypoint_classification.ipynb" in Jupyter Notebook and execute it from top to bottom.
To change the number of training classes, change the value of "NUM_CLASSES = 3"
and modify the labels in "model/keypoint_classifier/keypoint_classifier_label.csv" accordingly.
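The value of NUM_CLASSES and the label CSV must stay in sync: the classifier outputs a class index, and each index needs a matching row in the CSV. A minimal consistency check (the label names here are placeholders, and the CSV is inlined for illustration):

```python
import csv
import io

# NUM_CLASSES as set in keypoint_classification.ipynb (3 by default)
NUM_CLASSES = 3

# Stand-in for model/keypoint_classifier/keypoint_classifier_label.csv:
# one label per row, one row per class, in class-index order.
label_csv = io.StringIO("Open\nClose\nPointer\n")

labels = [row[0] for row in csv.reader(label_csv) if row]

# Every class index in [0, NUM_CLASSES) must have a label.
assert len(labels) == NUM_CLASSES, (
    f"Expected {NUM_CLASSES} labels, found {len(labels)}"
)
print(labels)  # → ['Open', 'Close', 'Pointer']
```

Running a check like this after editing the CSV catches the common off-by-one mistake where a class is added to the training data but not to the label file.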
Open "point_history_classification.ipynb" in Jupyter Notebook and execute it from top to bottom.
To change the number of training classes, change the value of "NUM_CLASSES = 4"
and modify the labels in "model/point_history_classifier/point_history_classifier_label.csv" accordingly.
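The point-history classifier works on a short sequence of fingertip coordinates rather than a single frame. As a sketch of the kind of preprocessing involved, here is one way to flatten a coordinate history into a fixed-length feature vector; the normalization scheme (relative to the first point, scaled by image size) is an assumption for illustration, and the actual notebook may differ:

```python
# Assumed capture resolution; adjust to your camera
IMAGE_WIDTH, IMAGE_HEIGHT = 640, 480

def preprocess_point_history(point_history):
    """Convert [(x, y), ...] pixel coordinates into a flat, normalized vector."""
    base_x, base_y = point_history[0]
    features = []
    for x, y in point_history:
        # Coordinates relative to the first point, scaled to roughly [-1, 1]
        features.append((x - base_x) / IMAGE_WIDTH)
        features.append((y - base_y) / IMAGE_HEIGHT)
    return features

# Four tracked fingertip positions from consecutive frames
history = [(320, 240), (330, 250), (352, 264), (384, 288)]
vec = preprocess_point_history(history)
print(len(vec))  # → 8 (two values per tracked point)
```

Flattening to a fixed length is what lets a plain dense classifier consume the sequence; each class in the label CSV then corresponds to one gesture trajectory.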
Kazuhito Takahashi(https://twitter.com/KzhtTkhs)
Nikita Kiselov(https://github.com/kinivi)
J. Quintanilla(https://github.com/jquintanilla4)
hand-gesture-recognition-using-mediapipe is licensed under the Apache License 2.0.