A simple yet strong baseline for facial AU detection:
- Extract basic AU features from a pretrained face alignment model
- Use a TDN to model temporal dynamics on top of the static AU features
- Use a VAE module to regularize the initial predictions
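The temporal-modeling step above can be sketched as a toy module; the class name, feature size, and AU count below are illustrative assumptions, not the repo's actual API:

```python
import torch
import torch.nn as nn

class TemporalDynamics(nn.Module):
    """Illustrative stand-in for the TDN: a 1-D temporal conv over per-frame AU features."""
    def __init__(self, feat_dim=512, num_aus=12):
        super().__init__()
        self.temporal = nn.Conv1d(feat_dim, feat_dim, kernel_size=3, padding=1)
        self.head = nn.Linear(feat_dim, num_aus)

    def forward(self, feats):                     # feats: (batch, time, feat_dim)
        x = self.temporal(feats.transpose(1, 2))  # convolve along the time axis
        x = x.transpose(1, 2)
        return self.head(x)                       # per-frame AU logits: (batch, time, num_aus)

feats = torch.randn(2, 8, 512)   # static AU features from the face alignment model
logits = TemporalDynamics()(feats)
print(logits.shape)              # torch.Size([2, 8, 12])
```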
Requirements:
- Python 3
- PyTorch
We use RetinaFace for face detection.
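Cropping detected faces out of a frame might look like the following; the `(x1, y1, x2, y2)` pixel-box format is an assumption about the detector's output, not RetinaFace's documented API:

```python
import numpy as np

def crop_faces(frame, boxes):
    """Crop face regions given (x1, y1, x2, y2) pixel boxes, clipped to the frame."""
    h, w = frame.shape[:2]
    crops = []
    for x1, y1, x2, y2 in boxes:
        x1, x2 = max(0, int(x1)), min(w, int(x2))
        y1, y2 = max(0, int(y1)), min(h, int(y2))
        crops.append(frame[y1:y2, x1:x2])
    return crops

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # dummy video frame
faces = crop_faces(frame, [(100, 50, 220, 200)])
print(faces[0].shape)                             # (150, 120, 3)
```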
- To train the VAE module on BP4D split 1, run:
  `python train_vae.py --data BP4D --subset 1 --weight 0.3`
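A minimal sketch of a VAE over AU predictions, assuming `--weight` is the KL-term weight (an assumption; layer sizes and the loss form are illustrative, not the repo's implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AUVAE(nn.Module):
    def __init__(self, num_aus=12, latent=16):
        super().__init__()
        self.enc = nn.Linear(num_aus, 2 * latent)   # outputs mean and log-variance
        self.dec = nn.Linear(latent, num_aus)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar, kl_weight=0.3):
    recon_loss = F.mse_loss(recon, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl_weight * kl

x = torch.rand(4, 12)                 # a batch of initial AU predictions
recon, mu, logvar = AUVAE()(x)
loss = vae_loss(x, recon, mu, logvar)
```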
- To train AU-Net, run:
  `python train_video_vae.py --data BP4D --vae 'pretrained vae model'`
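Passing a pretrained VAE into the second stage amounts to loading its checkpoint into the larger model's submodule; a sketch with stand-in modules (the in-memory buffer replaces the checkpoint file, and the attribute layout is hypothetical):

```python
import io
import torch
import torch.nn as nn

vae = nn.Linear(12, 12)                 # stand-in for the pretrained VAE module
buf = io.BytesIO()
torch.save(vae.state_dict(), buf)       # stands in for the checkpoint file on disk
buf.seek(0)

au_net_vae = nn.Linear(12, 12)          # the AU-Net's VAE submodule (illustrative)
au_net_vae.load_state_dict(torch.load(buf, map_location='cpu'))
for p in au_net_vae.parameters():       # optionally freeze the pretrained regularizer
    p.requires_grad = False
```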
- Test with pretrained models:
| BP4D | Average F1-score (%) |
|---|---|
| bp4d_split* | 65.0 |
| DISFA | Average F1-score (%) |
|---|---|
| disfa_split* | 66.1 |
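The average F1 reported above is typically computed per AU and then averaged across AUs; a small sketch (the binary label layout is an assumption):

```python
import numpy as np

def average_f1(y_true, y_pred):
    """Mean binary F1 over AU columns; inputs are (frames, num_aus) 0/1 arrays."""
    f1s = []
    for k in range(y_true.shape[1]):
        t, p = y_true[:, k], y_pred[:, k]
        tp = np.sum((t == 1) & (p == 1))
        fp = np.sum((t == 0) & (p == 1))
        fn = np.sum((t == 1) & (p == 0))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return 100.0 * float(np.mean(f1s))

y_true = np.array([[1, 0], [1, 1], [0, 1], [0, 0]])
y_pred = np.array([[1, 0], [0, 1], [0, 1], [1, 0]])
print(average_f1(y_true, y_pred))   # 75.0
```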
- Run the demo to predict 15 AUs.
If you find this work useful, please cite:

@article{yang2023toward,
  title={Toward Robust Facial Action Units' Detection},
  author={Yang, Jing and Hristov, Yordan and Shen, Jie and Lin, Yiming and Pantic, Maja},
  journal={Proceedings of the IEEE},
  year={2023},
  publisher={IEEE}
}
This repo is built using components from JAANet and EmoNet.

This project is licensed under the MIT License.