diff --git a/index.html b/index.html
index 5499dd6..dbe9edd 100644
--- a/index.html
+++ b/index.html
@@ -84,7 +84,7 @@
-I am a recent grad from UC Davis, with a PhD in Psychology with a Vision Science focus. I worked with Joy Geng in the Integrated Attention Lab studying visual perception and memory in the human brain. I previously worked with Chris Baker in the Laboratory of Brain and Cognition at the NIH and with Doug Davidson at the Basque Center for Cognition, Brain, and Language. I spent summer 2023 as a data science intern with the Alexa Economics & Measurement team at Amazon.
+I am a recent grad from UC Davis, with a PhD in Psychology (Vision Science focus). I worked with Joy Geng in the Integrated Attention Lab studying scene perception and memory in the human brain. I previously worked with Chris Baker in the Laboratory of Brain and Cognition at the NIH and with Doug Davidson at the Basque Center for Cognition, Brain, and Language. I spent summer 2023 as a data science intern with the Alexa Economics & Measurement team at Amazon.
ehhall @ ucdavis dot edu google scholar twitter github CV
@@ -105,7 +105,7 @@ Eye gaze during route learning in a virtual task
- Martha Forloines*, Elizabeth H. Hall*, John M. Henderson, Joy J. Geng *co-first author
+ Martha Forloines*, Elizabeth H. Hall*, John M. Henderson, Joy J. Geng *co-first author
PsyArXiv, 2024.
Participants studied an avatar navigating a route in Grand Theft Auto V while we tracked their eye movements. Using these eye movements, we trained a classifier to identify whether participants learned the route in natural or scrambled temporal order. Those in the natural condition fixated more on the avatar and the path ahead, while scrambled viewers focused more on scene landmarks like buildings and signs.
Objects are selected for attention based upon meaning during passive scene viewing
- Candace Peacock*, Elizabeth H. Hall*, John M. Henderson *co-first authors
+ Candace Peacock*, Elizabeth H. Hall*, John M. Henderson *co-first authors
Psychonomic Bulletin & Review, 2023. Preprint Stimuli
We looked at whether fixations were more likely to land on high-meaning objects in scenes. We found that fixations are more likely to be directed to high-meaning objects than to low-meaning objects, regardless of object salience.