Update index.html
ehhall authored Dec 23, 2024
1 parent 4f861cb commit 71ecda1
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions index.html
@@ -84,7 +84,7 @@
<img src="images/beth_rome.jpg" alt="beth" class="center">

<h1 style="font-size:3vw">Elizabeth Hall</h1>
<P class="blocktext"> I am a recent grad from UC Davis, with a PhD in Psychology with a Vision Science focus. I worked with Joy Geng in the <a style="text-decoration: none;" href="http://genglab.ucdavis.edu/"> Integrated Attention Lab</a> studying visual perception and memory in the human brain. I previously worked with Chris Baker in the <a style="text-decoration: none;" href="https://www.nimh.nih.gov/research/research-conducted-at-nimh/research-areas/clinics-and-labs/lbc/slp/index.shtml"> Laboratory of Brain and Cognition</a> at the NIH and with Doug Davidson at the <a style="text-decoration: none;" href="https://www.bcbl.eu/en"> Basque Center for Cognition, Brain, and Language.</a> I spent summer 2023 as a data science intern with the Alexa Economics & Measurement team at Amazon.</P>
<P class="blocktext"> I am a recent grad from UC Davis, with a PhD in Psychology (Vision Science focus). I worked with Joy Geng in the <a style="text-decoration: none;" href="http://genglab.ucdavis.edu/"> Integrated Attention Lab</a> studying scene perception and memory in the human brain. I previously worked with Chris Baker in the <a style="text-decoration: none;" href="https://www.nimh.nih.gov/research/research-conducted-at-nimh/research-areas/clinics-and-labs/lbc/slp/index.shtml"> Laboratory of Brain and Cognition</a> at the NIH and with Doug Davidson at the <a style="text-decoration: none;" href="https://www.bcbl.eu/en"> Basque Center for Cognition, Brain, and Language.</a> I spent summer 2023 as a data science intern with the Alexa Economics & Measurement team at Amazon.</P>

<P class="blocktext1"> ehhall @ ucdavis dot edu <span class="tab"> <a style="text-decoration: none;" href="https://scholar.google.com/citations?user=YYpSEMUAAAAJ&hl=en">google scholar</a> <span class="tab"> <a style="text-decoration: none;" href="https://twitter.com/vision_beth">twitter</a> <span class="tab"> <a style="text-decoration: none;" href="https://github.com/ehhall">github</a> <span class="tab"> <a style="text-decoration: none;" href="images/HallCVLatest.pdf">CV</a> <span class="tab"> </P>

@@ -105,7 +105,7 @@ <h2> preprints </h2>
A paper documenting our process to segment 2.8k objects across 100 real-world scenes! We share our thoughts on the "best way" to segment objects, along with analyses showing that image size and perspective have a big impact on the distribution of fixations. Full tutorial coming soon! </P>

<P class="blocktext2"> <a href="https://osf.io/preprints/psyarxiv/72np4"> <img src="images/classifyimage.jpg" alt="" class="left"> </a><B> <a style="text-decoration: none;" href=https://osf.io/preprints/psyarxiv/72np4">Eye gaze during route learning in a virtual task</B> </a> <br>
<I> Martha Forloines*, Elizabeth H. Hall*, John M. Henderson, Joy J. Geng</I> *co-first author<br>
<I> Martha Forloines*, Elizabeth H. Hall*, John M. Henderson, Joy J. Geng</I> *co-first author<br>
PsyArXiv, 2024. <br>
Participants studied an avatar navigating a route in Grand Theft Auto V while we tracked their eye movements. Using these, we trained a classifier to identify whether they learned the route in natural or scrambled temporal order. Those under natural conditions fixated more on the avatar and path ahead, while scrambled viewers focused more on scene landmarks like buildings and signs. </P>

@@ -118,7 +118,7 @@ <h2> publications </h2>
We found that attending to small objects in scenes led to significantly more boundary contraction in memory, even when other image properties were kept constant. This supports the idea that the extension/contraction in memory may reflect a bias towards an optimal viewing distance!</P>

<P class="blocktext2"> <a href="https://link.springer.com/article/10.3758/s13423-023-02286-2"> <img src="images/candace.jpg" alt="candace" class="left"> </a><B> <a style="text-decoration: none;" href="https://link.springer.com/article/10.3758/s13423-023-02286-2">Objects are selected for attention based upon meaning during passive scene viewing</B> </a> <br>
<I> Candace Peacock*, Elizabeth H. Hall*, John M. Henderson</I> *co-first authors<br>
<I> Candace Peacock*, Elizabeth H. Hall*, John M. Henderson</I> *co-first authors<br>
Psychonomic Bulletin & Review, 2023. <a style="text-decoration: none;" href="https://osf.io/preprints/psyarxiv/fqtvx"> &nbsp; Preprint </a> <a style="text-decoration: none;" href="https://osf.io/egry6/"> &nbsp; Stimuli </a> <br>
We looked at whether fixations were more likely to land on high-meaning objects in scenes. We found that fixations are more likely to be directed to high-meaning objects than low-meaning objects, regardless of object salience.</P>
