---
layout: home
search_exclude: true
image: /images/eye-gaze-large.png
toc: true
navigation_weight: 1
---
We are excited to host the first-ever Gaze Meets ML workshop on December 3rd, 2022, in conjunction with NeurIPS 2022. The workshop will take place in person in New Orleans! We've got a great lineup of speakers. For questions and further information, please reach out to [email protected]
We would like to thank our sponsors for their support. If you are interested in sponsoring, please find more information here.
Eye gaze has proven to be a cost-efficient way to collect large-scale physiological data that can reveal underlying human attentional patterns in real-life workflows, and it has therefore long been explored as a signal for directly measuring human cognition across domains. Physiological data (including but not limited to eye gaze) offer new perception capabilities that could be used in several ML domains, e.g., egocentric perception, embodied AI, and NLP. They can help infer human perception, intentions, beliefs, goals, and other cognitive properties that are much needed for human-AI interaction and agent coordination. In addition, large collections of eye-tracking data have enabled data-driven modeling of human visual attention mechanisms, for both saliency and scanpath prediction, with twofold advantages: from the neuroscientific perspective, a better understanding of biological mechanisms; from the AI perspective, agents that can mimic or predict human behavior, with improved interpretability and interaction.
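To make the saliency-prediction setting above concrete, here is a minimal sketch of one common way eye-tracking recordings are turned into empirical saliency maps: fixation points are accumulated into a count map and blurred with a Gaussian whose width roughly approximates one degree of visual angle. The function name, parameters, and default `sigma` are illustrative assumptions, not code from any workshop paper:

```python
# Minimal sketch (illustrative assumptions throughout): fixations -> saliency map.
import numpy as np
from scipy.ndimage import gaussian_filter

def fixations_to_saliency(fixations, height, width, sigma=30.0):
    """fixations: iterable of (x, y) pixel coordinates; sigma approximates
    one degree of visual angle in pixels (depends on screen and viewing distance)."""
    counts = np.zeros((height, width), dtype=np.float64)
    for x, y in fixations:
        row, col = int(round(y)), int(round(x))
        if 0 <= row < height and 0 <= col < width:
            counts[row, col] += 1.0            # accumulate one count per fixation
    saliency = gaussian_filter(counts, sigma)  # smooth counts into a density
    peak = saliency.max()
    return saliency / peak if peak > 0 else saliency  # normalize to [0, 1]

# Example: fixations pooled from several observers viewing a 640x480 image
saliency = fixations_to_saliency([(320, 240), (100, 80), (330, 250)], 480, 640)
```

Maps built this way typically serve as ground truth for training and evaluating saliency models; scanpath prediction instead keeps the temporal order of fixations.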
With the emergence of immersive technologies, there is now more than ever a need for experts from various backgrounds (e.g., the machine learning, vision, and neuroscience communities) to share expertise and contribute to a deeper understanding of the intricacies of cost-efficient human supervision signals (e.g., eye gaze) and their utilization in bridging human cognition and AI in machine learning research and development. The goal of this workshop is to bring together an active research community to collectively drive progress in defining and addressing core problems in gaze-assisted machine learning.
Webpage: https://neurips.cc/virtual/2022/workshop/49990
All times are in Central Time
<style type="text/css"> .tg {border-collapse:collapse;border-spacing:0;} .tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px; overflow:hidden;padding:10px 5px;word-break:normal;} .tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px; font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;} .tg .tg-bd63{color:#212529;font-style:italic;text-align:left;vertical-align:top} .tg .tg-1rwr{color:#008000;text-align:left;vertical-align:top} .tg .tg-av16{color:#212529;text-align:left;vertical-align:top} .tg .tg-ndde{color:#2294E0;font-weight:bold;text-align:left;vertical-align:top} .tg .tg-w1dh{color:#212529;font-weight:bold;text-align:left;vertical-align:top} .tg .tg-0lax{text-align:left;vertical-align:top} </style>Sat 7:30 a.m. - 8:00 a.m. | Meet and Greet and Getting started (Break) | |
---|---|---|
Sat 8:00 a.m. - 8:10 a.m. |
Opening Remarks (10 mins) Organizers (Opening Remarks) | |
Sat 8:10 a.m. - 8:55 a.m. |
Learning gaze control, external attention, and internal attention since 1990-91 (Keynote) | Jürgen Schmidhuber |
Sat 9:00 a.m. - 9:30 a.m. |
Eye-tracking what's going on in the mind (Keynote) | Tobias Gerstenberg |
Sat 9:30 a.m. - 10:00 a.m. |
Neural encoding and decoding of facial movements (Keynote) | Scott Linderman |
Sat 10:00 a.m. - 10:15 a.m. |
Coffee Break (Break) | |
Sat 10:20 a.m. - 10:32 a.m. |
Electrode Clustering and Bandpass Analysis of EEG Data for Gaze Estimation (Spotlight) |
Ard Kastrati · Martyna Plomecka · Joël Küchler · Nicolas Langer · Roger Wattenhofer |
Sat 10:32 a.m. - 10:44 a.m. |
Modeling Human Eye Movements with Neural Networks in a Maze-Solving Task (Spotlight) | Jason Li · Nicholas Watters · Sandy Wang · Hansem Sohn · Mehrdad Jazayeri |
Sat 10:44 a.m. - 10:56 a.m. |
Intention Estimation via Gaze for Robot Guidance in Hierarchical Tasks (Spotlight) | Yifan Shen · Xiaoyu Mo · Vytas Krisciunas · David Hanson · Bertram Shi |
Sat 10:56 a.m. - 11:08 a.m. |
Facial Composite Generation with Iterative Human Feedback (Spotlight) | Florian Strohm · Ekta Sood · Dominike Thomas · Mihai Bace · Andreas Bulling |
Sat 11:08 a.m. - 11:20 a.m. |
Simulating Human Gaze with Neural Visual Attention (Spotlight) | Leo Schwinn · Doina Precup · Bjoern Eskofier · Dario Zanca |
Sat 11:20 a.m. - 11:50 a.m. |
Foveated Models of Visual Search and Medical Image Perception (Keynote) | Miguel Eckstein |
Sat 12:00 p.m. - 12:30 p.m. |
Lunch (Lunch and Poster Walk-Around) | |
Sat 12:30 p.m. - 1:30 p.m. |
Appearance-Based Gaze Estimation for Driver Monitoring (Poster) | Soodeh Nikan · Devesh Upadhyay |
Sat 12:30 p.m. - 1:30 p.m. |
Selection of XAI Methods Matters: Evaluation of Feature Attribution Methods for Oculomotoric Biometric Identification (Poster) | Daniel Krakowczyk · David Robert Reich · Paul Prasse · Sebastian Lapuschkin · Lena A. Jäger · Tobias Scheffer |
Sat 12:30 p.m. - 1:30 p.m. |
Time-to-Saccade metrics for real-world evaluation (Poster) | Tim Rolff · Niklas Stein · Markus Lappe · Frank Steinicke · Simone Frintrop |
Sat 12:30 p.m. - 1:30 p.m. |
Electrode Clustering and Bandpass Analysis of EEG Data for Gaze Estimation (Poster) | Ard Kastrati · Martyna Plomecka · Joël Küchler · Nicolas Langer · Roger Wattenhofer |
Sat 12:30 p.m. - 1:30 p.m. |
Skill, or Style? Classification of Fetal Sonography Eye-Tracking Data (Poster) | Clare Teng · Lior Drukker · Aris Papageorghiou · Alison Noble |
Sat 12:30 p.m. - 1:30 p.m. |
Decoding Attention from Gaze: A Benchmark Dataset and End-to-End Models (Poster) | Karan Uppal · Jaeah Kim · Shashank Singh |
Sat 12:30 p.m. - 1:30 p.m. |
Learning to count visual objects by combining "what" and "where" in recurrent memory (Poster) | Jessica Thompson · Hannah Sheahan · Christopher Summerfield |
Sat 12:30 p.m. - 1:30 p.m. |
Modeling Human Eye Movements with Neural Networks in a Maze-Solving Task (Poster) |
Jason Li · Nicholas Watters · Sandy Wang · Hansem Sohn · Mehrdad Jazayeri |
Sat 12:30 p.m. - 1:30 p.m. |
Generating Attention Maps from Eye-gaze for the Diagnosis of Alzheimer's Disease (Poster) | Carlos Antunes · Margarida Silveira |
Sat 12:30 p.m. - 1:30 p.m. |
Do They Look Where They Go? Gaze Classification During Walking (Poster) | Gianni Bremer · Niklas Stein · Markus Lappe |
Sat 12:30 p.m. - 1:30 p.m. |
Intention Estimation via Gaze for Robot Guidance in Hierarchical Tasks (Poster) | Yifan Shen · Xiaoyu Mo · Vytas Krisciunas · David Hanson · Bertram Shi |
Sat 12:30 p.m. - 1:30 p.m. |
Comparing radiologists' gaze and saliency maps generated by interpretability methods for chest x-rays (Poster) | Ricardo Bigolin Lanfredi · Ambuj Arora · Trafton Drew · Joyce Schroeder · Tolga Tasdizen |
Sat 12:30 p.m. - 1:30 p.m. |
Integrating eye gaze into machine learning using fractal curves (Poster) | Robert Ahadizad Newport · Sidong Liu · Antonio Di Ieva |
Sat 12:30 p.m. - 1:30 p.m. |
Facial Composite Generation with Iterative Human Feedback (Poster) | Florian Strohm · Ekta Sood · Dominike Thomas · Mihai Bace · Andreas Bulling |
Sat 12:30 p.m. - 1:30 p.m. |
Federated Learning for Appearance-based Gaze Estimation in the Wild (Poster) | Mayar Elfares · Zhiming Hu · Pascal Reisert · Andreas Bulling · Ralf Küsters |
Sat 12:30 p.m. - 1:30 p.m. |
Simulating Human Gaze with Neural Visual Attention (Poster) | Leo Schwinn · Doina Precup · Bjoern Eskofier · Dario Zanca |
Sat 12:30 p.m. - 1:30 p.m. |
Contrastive Representation Learning for Gaze Estimation (Poster) | Swati Jindal · Roberto Manduchi |
Sat 12:30 p.m. - 1:30 p.m. |
SecNet: Semantic Eye Completion in Implicit Field (Poster) | Yida Wang · Yiru Shen · David Joseph Tan · Federico Tombari · Sachin S Talathi |
Sat 1:30 p.m. - 2:00 p.m. |
Use of Machine Learning and Gaze Tracking to Predict Radiologists’ Decisions in Breast Cancer Detection (Keynote) | Claudia Mello-Thoms |
Sat 2:00 p.m. - 2:30 p.m. |
Gabriel A. Silva Keynote (Keynote) | Gabriel Silva |
Sat 2:30 p.m. - 3:30 p.m. |
Breakout session (Discussion within onsite small groups on preselected themes) | |
Sat 3:30 p.m. - 3:45 p.m. |
Coffee (Break) | |
Sat 3:45 p.m. - 3:57 p.m. |
Contrastive Representation Learning for Gaze Estimation (Spotlight) | Swati Jindal · Roberto Manduchi |
Sat 3:57 p.m. - 4:09 p.m. |
SecNet: Semantic Eye Completion in Implicit Field (Spotlight) | Yida Wang · Yiru Shen · David Joseph Tan · Federico Tombari · Sachin S. Talathi |
Sat 4:45 p.m. - 5:00 p.m. |
Wrap Up - Closing remarks (Closing) |
We welcome submissions that present aspects of eye gaze with respect to cognitive science, psychophysiology, or computer science, or that propose methods for integrating eye gaze into machine learning. We also welcome application papers from radiology, AR/VR, autonomous driving, etc. that introduce methods and models utilizing eye-gaze technology in their respective domains.
Topics of interest include but are not limited to the following:
- Understanding the neuroscience of eye-gaze and perception.
- State of the art in incorporating machine learning and eye-tracking.
- Annotation and ML supervision with eye-gaze (one common form of such supervision is sketched after this list).
- Attention mechanisms and their correlation with eye-gaze.
- Methods for gaze estimation and prediction using machine learning.
- Unsupervised ML using eye gaze information for feature importance/selection.
- Understanding human intention and goal inference.
- Using saccadic vision for ML applications.
- Use of gaze for human-AI interaction and agent coordination in multi-agent environments.
- Eye gaze used for AI, e.g., NLP, Computer Vision, RL, Explainable AI, Embodied AI, Trustworthy AI.
- Ethics of eye gaze in AI.
- Gaze applications in cognitive psychology, radiology, neuroscience, AR/VR, autonomous cars, privacy, etc.
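As a concrete (hypothetical) illustration of the gaze-based supervision item above: a model's attention map can be regularized toward a human gaze saliency map with a KL-divergence term, a loss widely used in the saliency literature. The function name and normalization scheme below are our own illustrative choices, not a prescribed workshop method:

```python
# Minimal sketch: KL(human || model) between two non-negative 2-D attention maps.
import numpy as np

def attention_kl_loss(model_attention, human_saliency, eps=1e-8):
    """Both inputs are non-negative 2-D arrays; each is normalized to a
    probability distribution over pixels before computing the divergence."""
    p = human_saliency / (human_saliency.sum() + eps)    # gaze-derived target
    q = model_attention / (model_attention.sum() + eps)  # model's attention
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Example: penalize a model whose attention ignores where humans looked
rng = np.random.default_rng(0)
loss = attention_kl_loss(rng.random((480, 640)), rng.random((480, 640)))
```

In practice such a term is added, with a small weight, to the task loss so that the model's attention stays close to human fixation patterns.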
- Submission due:
- Reviewing starts:
- Reviewing ends:
- Notification of acceptance:
- SlidesLive presentation pre-recording upload for NeurIPS (hard deadline):
- Camera ready paper:
- Workshop Date: 3rd December 2022
The workshop will feature two submission tracks: a full, archival proceedings track, with accepted papers published in the Proceedings of Machine Learning Research (PMLR), and a non-archival extended abstract track. Submissions to either track will undergo the same double-blind peer review. Full proceedings papers can be up to 9 pages and extended abstracts up to 4 pages (both excluding references and appendices). Authors of accepted extended abstracts (non-archival submissions) retain full copyright of their work, and acceptance of such a submission to Gaze Meets ML does not preclude publication of the same material in another archival venue (e.g., a journal or conference).
- OpenReview Submission Portal
- Submission templates:
- References and appendices should be included in the same (single) PDF document and do not count toward the page count.
For a list of commonly asked questions, please see the FAQs.
| Scott W. Linderman, Ph.D. | Gabriel A. Silva, Ph.D. | Claudia Mello-Thoms, MS, Ph.D. |
|---|---|---|
| Stanford | UC San Diego | University of Iowa |

| Miguel P. Eckstein, Ph.D. | Tobias Gerstenberg, MSc, Ph.D. |
|---|---|
| UC Santa Barbara | Stanford |
| Ismini Lourentzou, Ph.D. | Joy Tzung-yu Wu, MD, MPH | Satyananda Kashyap, Ph.D. | Alexandros Karargyris, Ph.D. |
|---|---|---|---|
| Virginia Tech | Stanford, IBM Research | IBM Research | IHU Strasbourg |

| Leo Anthony Celi, MD, MSc, MPH | Ban Kawas, Ph.D. | Sachin Talathi, Ph.D. |
|---|---|---|
| MIT | Meta, Reality Labs Research | Meta, Reality Labs Research |
- Anna Lisa Gentile (IBM Research)
- Brendan David-John (Virginia Tech)
- Daniel Gruhl (IBM Research)
- Dario Zanca (Friedrich-Alexander-Universität Erlangen-Nürnberg)
- Efe Bozkir (University of Tuebingen)
- Ehsan Degan (IBM Research)
- G Anthony Reina (Intel)
- Georgios Exarchakis (University of Strasbourg)
- Henning Müller (HES-SO Valais)
- Hoda Eldardiry (Virginia Tech)
- Hongzhi Wang (IBM Research)
- Junwen Wu (Intel)
- Kamran Binaee (RIT)
- Ken C. L. Wong (IBM Research)
- Maria Xenochristou (Stanford University)
- Megan T deBettencourt (University of Chicago)
- Mehdi Moradi (Google)
- Neerav Karani (MIT)
- Niharika Shimona D'Souza (IBM Research)
- Nishant Rai (Stanford University)
- Peter Mattson (Google)
- Prashant Shah (Intel)
- Safa Messaoud (Qatar Computing Research Institute)
- Sameer Antani (NIH)
- Sayan Ghosal (Johns Hopkins University)
- Shiye Cao (Johns Hopkins University)
- Sivarama Krishnan Rajaraman (NIH)
- Spyridon Bakas (University of Pennsylvania)
- Szilard Vajda (Central Washington University)
- Timothy C. Sheehan (University of California, San Diego)
- Vy A. Vo (Intel)
- Wolfgang Mehringer (Friedrich-Alexander-Universität Erlangen-Nürnberg)
- We are a MICCAI-endorsed event.
Eye gaze logo designed by Michael Chung