The model now supports 7 actions: Standing, Walking, Sitting, Lying Down, Stand up, Sit down, Fall Down.
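For reference, the class names could be kept as a simple list in code. This is only an illustrative sketch; the index order used by the trained checkpoint may differ.

```python
# Illustrative list of the 7 recognized actions (the ordering is an
# assumption, not taken from the trained model's actual class mapping).
ACTION_CLASSES = [
    "Standing", "Walking", "Sitting", "Lying Down",
    "Stand up", "Sit down", "Fall Down",
]

def index_to_action(idx: int) -> str:
    """Map a predicted class index to a human-readable action name."""
    return ACTION_CLASSES[idx]
```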
- Python > 3.6
- PyTorch > 1.3.1
Original tests were run on: i7-8750H CPU @ 2.20GHz x12, GeForce RTX 2070 8GB, CUDA 10.2
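A quick way to confirm the environment meets these requirements is a small check like the one below (a minimal sketch; it only reports versions and CUDA availability).

```python
import sys
import torch

# Check interpreter and PyTorch versions against the stated minimums.
assert sys.version_info >= (3, 6), "Python > 3.6 is required"
print("Python :", sys.version.split()[0])
print("PyTorch:", torch.__version__)

# A CUDA-capable GPU is not strictly required, but the original tests
# used a GeForce RTX 2070 with CUDA 10.2.
if torch.cuda.is_available():
    print("CUDA   :", torch.cuda.get_device_name(0))
else:
    print("CUDA not available; inference will fall back to the CPU")
```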
This project uses a newly trained Tiny-YOLO oneclass model that detects only person objects and reduces the model size. It was trained on a rotation-augmented COCO person keypoints dataset for more robust person detection across a wide range of pose angles.
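The rotation augmentation described above can be pictured with a short sketch like the one below, which rotates an image together with its keypoints around the image center. This is an assumption-based illustration, not the project's actual training pipeline (which also has to handle bounding boxes, padding, and joint visibility).

```python
import cv2
import numpy as np

def rotate_sample(image, keypoints, angle_deg):
    """Rotate an image and its (x, y) keypoints around the image center.

    Minimal sketch of rotation augmentation; `keypoints` is an (N, 2) array.
    """
    h, w = image.shape[:2]
    center = (w / 2.0, h / 2.0)
    M = cv2.getRotationMatrix2D(center, angle_deg, 1.0)  # 2x3 affine matrix

    rotated_image = cv2.warpAffine(image, M, (w, h))

    # Apply the same affine transform to the keypoint coordinates.
    pts = np.hstack([keypoints, np.ones((len(keypoints), 1))])  # homogeneous
    rotated_keypoints = pts @ M.T
    return rotated_image, rotated_keypoints

# Example with a hypothetical image and two hypothetical keypoints:
# img = cv2.imread("person.jpg")
# kps = np.array([[120.0, 80.0], [130.0, 150.0]])
# aug_img, aug_kps = rotate_sample(img, kps, np.random.uniform(-30, 30))
```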
For action recognition, data from the Le2i Fall Detection Dataset (Coffee room and Home scenes) was used: skeleton poses were extracted with AlphaPose, and each action frame was labeled by hand to train the ST-GCN model.
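One way to picture a labeled training sample is a fixed-length sequence of per-frame skeletons plus a hand-assigned action label. The shapes and joint count below are assumptions for illustration, not the project's actual on-disk format.

```python
import numpy as np

# Assumed layout: T frames x V joints x C channels (x, y, confidence).
T, V, C = 30, 13, 3
sequence = np.zeros((T, V, C), dtype=np.float32)

# Each frame would be filled from AlphaPose output, e.g. for frame t:
#   sequence[t, :, 0:2] = joint (x, y) coordinates from AlphaPose
#   sequence[t, :, 2]   = per-joint confidence scores

# The hand-assigned action label for this clip, used as the ST-GCN target.
sample = {"keypoints": sequence, "label": "Fall Down"}
```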
- Tiny-YOLO oneclass - .pth, .cfg
- SPPE FastPose (AlphaPose) - resnet101, resnet50
- ST-GCN action recognition - tsstg
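Since the steps below assume these checkpoints live in ./Models, a small check such as the following can confirm they are in place. The file names here are placeholders; substitute the names of the files you actually download.

```python
import os

# Placeholder file names; adjust them to the downloaded checkpoints.
expected = [
    "Models/tiny-yolo-oneclass.pth",   # hypothetical detector weights
    "Models/tiny-yolo-oneclass.cfg",   # hypothetical detector config
    "Models/fastpose-resnet50.pth",    # hypothetical SPPE FastPose weights
    "Models/tsstg-action.pth",         # hypothetical ST-GCN (tsstg) weights
]
missing = [p for p in expected if not os.path.exists(p)]
if missing:
    print("Missing model files:", ", ".join(missing))
else:
    print("All pre-trained model files found.")
```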
- Download all pre-trained models into the ./Models folder.
- Run main.py
```
python main.py ${video file or camera source}
```
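For example (the file path and camera index below are illustrative; a webcam is commonly referenced by its numeric device index):

```
python main.py ./videos/fall_sample.avi
python main.py 0
```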