This code is licensed under CC BY-NC-SA 4.0 (note that some of the libraries used are distributed under different licenses; see below). Commercial usage is not permitted; please contact [email protected] or [email protected] regarding commercial licensing. If you use this dataset or the code in a scientific publication, please cite the following paper:
@inproceedings{FischerECCV2018,
author = {Tobias Fischer and Hyung Jin Chang and Yiannis Demiris},
title = {{RT-GENE: Real-Time Eye Gaze Estimation in Natural Environments}},
booktitle = {European Conference on Computer Vision},
year = {2018},
month = {September},
pages = {339--357}
}
RT-GENE was supported in part by the Samsung Global Research Outreach program, and in part by the EU Horizon 2020 Project PAL (643783-RIA).
If you use our blink estimation code, please also cite the relevant paper:
@inproceedings{CortaceroICCV2019W,
author = {Kevin Cortacero and Tobias Fischer and Yiannis Demiris},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision Workshops},
title = {RT-BENE: A Dataset and Baselines for Real-Time Blink Estimation in Natural Environments},
year = {2019},
}
RT-BENE was supported by the EU Horizon 2020 Project PAL (643783-RIA) and a Royal Academy of Engineering Chair in Emerging Technologies to Yiannis Demiris.
More information can be found on the Personal Robotics Lab's website: https://www.imperial.ac.uk/personal-robotics/software/.
- Download, install, and configure ROS (full installation; we recommend the Kinetic or Melodic distribution of ROS depending on your Ubuntu version): http://wiki.ros.org/kinetic/Installation or http://wiki.ros.org/melodic/Installation
- Install additional packages for ROS:
  - For Kinetic: `sudo apt-get install python-catkin-tools ros-kinetic-ros-numpy ros-kinetic-camera-info-manager-py ros-kinetic-uvc-camera libcamera-info-manager-dev`
  - For Melodic: `sudo apt-get install python-catkin-tools python-catkin-pkg ros-melodic-uvc-camera libcamera-info-manager-dev`
- Install required Python packages (a quick import sanity check is sketched below):
  - For `pip` users (we recommend using virtualenv or similar tools): `pip install tensorflow-gpu numpy scipy tqdm torch torchvision Pillow dlib opencv-python`
  - For `conda` users (create a new environment first if you want): `conda install -c conda-forge dlib tensorflow-gpu numpy scipy tqdm pillow rospkg opencv empy pytorch torchvision`
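Before building, it can help to confirm the environment imports cleanly. The following is a minimal sanity-check sketch (not part of RT-GENE itself); it only assumes the packages installed above:

```python
# Minimal sanity check for the Python dependencies installed above.
import cv2          # opencv-python
import dlib
import numpy as np
import PIL          # Pillow
import scipy
import tensorflow as tf
import torch
import tqdm

print("OpenCV:", cv2.__version__)
print("dlib:", dlib.__version__)
print("NumPy:", np.__version__, "| SciPy:", scipy.__version__)
print("Pillow:", PIL.__version__)
print("TensorFlow:", tf.__version__)
print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("tqdm:", tqdm.__version__)
```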
- Download and build RT-GENE:
```
cd $HOME/catkin_ws/src && git clone https://github.com/Tobias-Fischer/rt_gene.git
cd $HOME/catkin_ws && catkin build
```
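After `catkin build` finishes, source the workspace in every new shell (or add this line to your `~/.bashrc`) so that ROS can find the `rt_gene` package:

```
source $HOME/catkin_ws/devel/setup.bash
```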
- To use an ensemble scheme with 4 models trained on the MPII, UTMV, and RT-GENE datasets, you need to adjust the `estimate_gaze.launch` file (make sure you comply with the licenses of MPII and UTMV; these model files are licensed under CC BY-NC-SA 4.0):
  - Open `$(rospack find rt_gene)/launch/estimate_gaze.launch`.
  - Comment out `<rosparam param="model_files">['model_nets/Model_allsubjects1.h5']</rosparam>` and uncomment `<!--rosparam param="model_files">['model_nets/all_subjects_mpii_prl_utmv_0_02.h5', ..., ..., ...]</rosparam-->`; a sketch of the result is shown below.
- Note that the required model files are downloaded the first time the ROS node starts. An alternative mirror can be found here; these files need to be moved into `$HOME/catkin_ws/src/rt_gene/rt_gene/model_nets`.
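For illustration, after the edit the relevant lines of `estimate_gaze.launch` would look roughly as follows (a sketch; the full list of ensemble model file names is abbreviated with `...` as in the original):

```xml
<!-- Single-model setup (now commented out): -->
<!-- <rosparam param="model_files">['model_nets/Model_allsubjects1.h5']</rosparam> -->

<!-- Ensemble setup (now active; remaining file names elided): -->
<rosparam param="model_files">['model_nets/all_subjects_mpii_prl_utmv_0_02.h5', ..., ..., ...]</rosparam>
```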
- To use a Kinect v2, follow the instructions at https://github.com/code-iai/iai_kinect2
- Make sure the calibration is saved correctly (https://github.com/code-iai/iai_kinect2/tree/master/kinect2_calibration#calibrating-the-kinect-one)
- To use a standard webcam, calibrate your camera (http://wiki.ros.org/camera_calibration).
- Save the resulting `*.yaml` file to `$(rospack find rt_gene)/webcam_configs/`.
- Change the entry for the `camera_info_url` in the `$(rospack find rt_gene)/launch/start_webcam.launch` file; an example is sketched below.
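As an illustration, the calibration file produced by camera_calibration follows the standard ROS camera_info YAML layout. The file name `my_webcam.yaml` and the numeric values below are placeholders, not values from this repository:

```yaml
# webcam_configs/my_webcam.yaml -- example name; all values are placeholders
image_width: 640
image_height: 480
camera_name: my_webcam
camera_matrix:
  rows: 3
  cols: 3
  data: [600.0, 0.0, 320.0, 0.0, 600.0, 240.0, 0.0, 0.0, 1.0]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.0, 0.0, 0.0, 0.0, 0.0]
# (rectification_matrix and projection_matrix omitted for brevity)
```

The `camera_info_url` entry in `start_webcam.launch` would then point at this file, e.g. `file://$(find rt_gene)/webcam_configs/my_webcam.yaml`.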
- Run with a Kinect v2:
```
roscore
roslaunch rt_gene start_kinect.launch
roslaunch rt_gene estimate_gaze.launch
```
- Run with a webcam:
```
roscore
roslaunch rt_gene start_webcam.launch
roslaunch rt_gene estimate_gaze.launch
```
- Run with a video file (make sure to change the `camera_info_url` and `video_file` arguments):
```
roscore
roslaunch rt_gene start_video.launch
roslaunch rt_gene estimate_gaze.launch
```
- Run with a rosbag (this assumes a recording with the Kinect v2 and might need adjustments):
```
roscore
roslaunch rt_gene start_rosbag.launch rosbag_file:=/path/to/rosbag.bag
roslaunch rt_gene estimate_gaze.launch ros_frame:=kinect2_nonrotated_link
```
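To consume the gaze estimates from your own node, a subscriber along the following lines can be used. This is a hypothetical sketch: the topic name `/subjects/gaze` and message type `MSG_Gaze` are assumptions; check `rostopic list` and `rostopic info` while `estimate_gaze.launch` is running to find the actual names on your system.

```python
#!/usr/bin/env python
# Hypothetical subscriber sketch -- the topic name and message type below are
# assumptions; verify with `rostopic list` / `rostopic info` on your running system.
import rospy
from rt_gene.msg import MSG_Gaze  # hypothetical message type

def on_gaze(msg):
    # Log the full message; the actual fields depend on the rt_gene message definition.
    rospy.loginfo("Gaze estimate received: %s", msg)

if __name__ == "__main__":
    rospy.init_node("gaze_listener")
    rospy.Subscriber("/subjects/gaze", MSG_Gaze, on_gaze)  # hypothetical topic
    rospy.spin()
```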
To estimate blinks, follow the instructions for estimating gaze above and additionally run `roslaunch rt_gene estimate_blink.launch`. Note that the blink estimation relies on the `extract_landmarks_node.py` node; however, it can run independently of the `estimate_gaze.py` node.
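Analogously, the blink output can be consumed with a small subscriber. As above, the topic name `/subjects/blink` and message type `MSG_Blink` are assumptions to be checked against the running system:

```python
#!/usr/bin/env python
# Hypothetical blink subscriber sketch -- the names below are assumptions;
# confirm them with `rostopic list` once estimate_blink.launch is running.
import rospy
from rt_gene.msg import MSG_Blink  # hypothetical message type

def on_blink(msg):
    rospy.loginfo("Blink estimate received: %s", msg)

if __name__ == "__main__":
    rospy.init_node("blink_listener")
    rospy.Subscriber("/subjects/blink", MSG_Blink, on_blink)  # hypothetical topic
    rospy.spin()
```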
The following libraries are used in this repository and are distributed under their own licenses:
- S3FD face detector in ./src/rt_gene/SFD; BSD 3-clause, Link to GitHub
- Kalman filter in ./src/rt_gene/kalman_stabilizer.py; MIT License, Link to GitHub
- Face alignment in ./src/rt_gene/tracker_generic.py; MIT License, Link to Adrian Rosebrock's Blog on Face Alignment (accessed 1 April 2020 on PyImageSearch)
- Yin Guobing's image utilities; MIT License, Link to GitHub 1, Link to GitHub 2
- ROS; BSD 3-clause, Link to website
- TensorFlow; Apache License 2.0, Link to website
- 3DDFA face landmark extraction in ./src/rt_gene/ThreeDDFA; MIT License, Link to GitHub, Link to paper
- OpenCV; 3-clause BSD License, Link to website
- Matplotlib; Matplotlib License, Link to website
- tqdm; Mozilla Public License and MIT License, Link to website
- Pillow; PIL Software License (MIT-like), Link to website
- NumPy; 3-clause BSD License, Link to website
- PyTorch; 3-clause BSD License, Link to website
- TF transforms; MIT License, Link to GitHub
- dlib; Boost Software License, Link to website