This repo hosts the source code and dataset for Posture and Physical Activity Detection: Impact of Number of Sensors and Feature Type.
If you use the code or refer to the manuscript in your publication, please cite the following paper.
Tang, Q., John, D., Thapa-Chhetry, B., Arguello, D.J. and Intille, S., 2020. Posture and Physical Activity Detection: Impact of Number of Sensors and Feature Type. Medicine & Science in Sports & Exercise. Preprint.
- Python >= 3.7
- poetry: dependency management for Python. Install using `pip install poetry`.
- git
- graphviz (optional, required to generate the workflow diagram PDF)
At the root of the project folder, run
> poetry install
Run the reproduction with multi-core processing (and higher memory use) on a new session folder, overwriting any data or results that already exist:

> poetry run reproduce --parallel --force-fresh-data=True
You may find intermediate and publication results in `./muss_data/DerivedCrossParticipants/product_run`.
Check here for sample reproduction results.
Run the command below at the root of the repository to see the usage of the reproduction script.

> poetry run python reproduce.py --help
The reproduction script will do the following:
- Download and unzip the raw dataset
- Compute features and convert annotations to class labels
- Run LOSO cross validations on datasets of different combinations of sensor placements
- Compute metrics from the outputs of the LOSO cross validations
- Generate publication tables and graphs
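The LOSO (leave-one-subject-out) cross validation step can be sketched with scikit-learn's `LeaveOneGroupOut`, where each "group" is one participant. The data, labels, and classifier below are synthetic stand-ins for illustration only, not the repo's actual pipeline:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins: feature rows, class labels, and participant ids.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))          # feature vectors (e.g. motion + orientation)
y = rng.integers(0, 3, size=60)       # class labels (e.g. posture categories)
groups = np.repeat(np.arange(6), 10)  # participant id for each row

logo = LeaveOneGroupOut()
accuracies = []
for train_idx, test_idx in logo.split(X, y, groups):
    # Train on all participants except one, test on the held-out participant.
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(X[train_idx], y[train_idx])
    accuracies.append(model.score(X[test_idx], y[test_idx]))

print(f"LOSO folds: {len(accuracies)}, mean accuracy: {np.mean(accuracies):.2f}")
```

One fold is produced per participant, so per-participant scores can be aggregated into the metrics computed in the next step.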
Make sure you run the reproduce script first.
You may want to train and save an activity recognition model using the dataset shipped with this repo. To do so, run the following at the root of the project folder:
> poetry run python train_custom_model.py
By default, it will train two dual-sensor models (DW + DA) using the motion + orientation feature set, one for postures and one for daily activities.
You may configure the type of feature set, the target class labels, and the sensor placements used; see the usage for details:

> poetry run python train_custom_model.py --help
You may check out the two files train_custom_model.py and run_custom_model.py to see how the different models are called and used.
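For saving and reloading a trained model, a common scikit-learn pattern uses joblib persistence. This is a generic sketch with synthetic data, not the repo's actual serialization code; see train_custom_model.py and run_custom_model.py for what the scripts really do:

```python
import io

import numpy as np
from joblib import dump, load
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins for extracted sensor features and class labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 8))
y = rng.integers(0, 2, size=40)

model = RandomForestClassifier(n_estimators=50, random_state=1).fit(X, y)

# Serialize to an in-memory buffer; a real training script would dump
# to a file path instead and a run script would load from that path.
buf = io.BytesIO()
dump(model, buf)
buf.seek(0)
restored = load(buf)

# The restored model reproduces the original model's predictions.
same = bool((restored.predict(X) == model.predict(X)).all())
print(same)
```

The same dump/load pair works with a filename in place of the buffer, which is how a saved model would typically be shared between a training script and a prediction script.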