diff --git a/index.html b/index.html
index 5499dd6..dbe9edd 100644
--- a/index.html
+++ b/index.html
@@ -84,7 +84,7 @@ beth

Elizabeth Hall

-I am a recent grad from UC Davis, with a PhD in Psychology with a Vision Science focus. I worked with Joy Geng in the Integrated Attention Lab studying visual perception and memory in the human brain. I previously worked with Chris Baker in the Laboratory of Brain and Cognition at the NIH and with Doug Davidson at the Basque Center for Cognition, Brain, and Language. I spent summer 2023 as a data science intern with the Alexa Economics & Measurement team at Amazon.

+I am a recent grad from UC Davis, with a PhD in Psychology (Vision Science focus). I worked with Joy Geng in the Integrated Attention Lab studying scene perception and memory in the human brain. I previously worked with Chris Baker in the Laboratory of Brain and Cognition at the NIH and with Doug Davidson at the Basque Center for Cognition, Brain, and Language. I spent summer 2023 as a data science intern with the Alexa Economics & Measurement team at Amazon.

ehhall @ ucdavis dot edu google scholar twitter github CV

@@ -105,7 +105,7 @@

preprints

A paper documenting our process to segment 2.8k objects across 100 real-world scenes! We share our thoughts on the "best way" to segment objects, and analyses showing that image size and perspective have a big impact on the distribution of fixations. Full tutorial coming soon!

Eye gaze during route learning in a virtual task
- Martha Forloines*, Elizabeth H. Hall*, John M. Henderson, Joy J. Geng *co-first author
+ Martha Forloines*, Elizabeth H. Hall*, John M. Henderson, Joy J. Geng *co-first author
PsyArXiv, 2024.
Participants studied an avatar navigating a route in Grand Theft Auto V while we tracked their eye movements. Using these eye movements, we trained a classifier to identify whether each participant had learned the route in natural or scrambled temporal order. Viewers in the natural condition fixated more on the avatar and the path ahead, while viewers in the scrambled condition focused more on scene landmarks like buildings and signs.

@@ -118,7 +118,7 @@

publications

We found that attending to small objects in scenes led to significantly more boundary contraction in memory, even when other image properties were kept constant. This supports the idea that the extension/contraction in memory may reflect a bias towards an optimal viewing distance!

Objects are selected for attention based upon meaning during passive scene viewing
- Candace Peacock*, Elizabeth H. Hall*, John M. Henderson *co-first authors
+ Candace Peacock*, Elizabeth H. Hall*, John M. Henderson *co-first authors
Psychonomic Bulletin & Review, 2023.   Preprint   Stimuli
We asked whether fixations were more likely to land on high-meaning objects in scenes. We found that fixations were directed to high-meaning objects more often than to low-meaning objects, regardless of object salience.