<!DOCTYPE html>
<html lang="en">
<head>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-171772141-1"></script>
<script>
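// Initialize the gtag.js data layer and record a pageview for this Analytics property.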
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-171772141-1');
gtag('set', {'user_id': 'USER_ID'}); // Set the user ID using the signed-in user's ID.
</script>
<meta charset="utf-8">
<title>Elizabeth Hall</title>
<style>
@font-face { font-family: Montserrat; src: url('Montserrat/Montserrat-Regular.ttf'); }
h1 {text-align: center;}
h2 {padding-top: 35px;
text-align:justify;
margin-left: auto;
margin-right: auto; width: 33em}
h3 {
font-size: 15px;
}
p {text-align: center;}
div {text-align: center;}
img.center {
padding-top: 75px;
display: block;
margin-left: auto;
margin-right: auto;
border-radius: 50%;
width: 200px;
height: 200px;
}
* {
font-family: Montserrat, sans-serif;
}
P.blocktext {
text-align:justify;
margin-left: auto;
margin-right: auto;
width: 50em;
}
P.blocktext1 {
text-align:center;
margin-left: auto;
margin-right: auto;
width: 50em;
padding-top: 20px;
}
P.blocktext2 {
text-align: justify;
margin-left: auto;
margin-right: auto;
width: 60em;
font-size: 14px;
padding-bottom: 30px;
}
hr.striped-border { border: 1px dashed #000; width: 50%; margin: auto; margin-bottom: 2%; }
.tab {
padding-left: 30px;
}
img.left {
float: left;
padding-right: 40px;
width: 200px;
height: 140px;
}
</style>
</head>
<body>
<img src="images/beth_rome.jpg" alt="beth" class="center">
<h1 style="font-size:3vw">Elizabeth H Hall</h1>
<P class="blocktext"> I am a recent grad from UC Davis with a PhD in Psychology (Vision Science). I worked with Joy Geng in the <a style="text-decoration: none;" href="http://genglab.ucdavis.edu/"> Integrated Attention Lab</a> studying scene perception and memory in the human brain. I previously worked with Chris Baker in the <a style="text-decoration: none;" href="https://www.nimh.nih.gov/research/research-conducted-at-nimh/research-areas/clinics-and-labs/lbc/slp/index.shtml"> Laboratory of Brain and Cognition</a> at the NIH and with Doug Davidson at the <a style="text-decoration: none;" href="https://www.bcbl.eu/en"> Basque Center for Cognition, Brain, and Language.</a> I spent summer 2023 as a data science intern with the Alexa Economics & Measurement team at Amazon.</P>
<P class="blocktext1"> ehlhall1 @ gmail dot com <span class="tab"> <a style="text-decoration: none;" href="https://scholar.google.com/citations?user=YYpSEMUAAAAJ&hl=en">google scholar</a> <span class="tab"> <a style="text-decoration: none;" href="https://twitter.com/vision_beth">twitter</a> <span class="tab"> <a style="text-decoration: none;" href="https://github.com/ehhall">github</a> <span class="tab"> <a style="text-decoration: none;" href="images/HallCVLatest.pdf">CV</a> <span class="tab"> </P>
<h2> news </h2>
<hr class="striped-border">
<P class="blocktext"> 12/2023: New paper with Joy Geng on <a style="text-decoration: none;" href="https://rdcu.be/dHWZk"> object attention and boundary extension! </a></P>
<P class="blocktext"> 8/2023: I was awarded the UC President's Dissertation Year Fellowship! </P>
<P class="blocktext"> 7/2022: New paper with Zoe Loh on <a style="text-decoration: none;" href="https://psyarxiv.com/bhyex/"> working memory and fixation durations in scene-viewing! </a></P>
<P class="blocktext"> 9/2021: Two new preprints added! I got second place for <I> Best Grad Talk </I> at the Spring Psychology Conference, and I completed the <a style="text-decoration: none;" href="https://neuromatch.io/courses/">Deep Learning</a> section of Neuromatch!</a></P>
<P class="blocktext"> 9/2020: Work with Chris Baker and Wilma Bainbridge on encoding and recall of object / scenes in 7T fMRI is now out in <a style="text-decoration: none;" href="https://academic.oup.com/cercor/advance-article/doi/10.1093/cercor/bhaa329/6025502"> Cerebral Cortex! </a></P>
<P class="blocktext"> 4/2020: I was awarded the National Defense Science and Engineering Graduate Fellowship to pursue work on <a style="text-decoration: none;" href="https://youtu.be/IfWYcjTKRFA">visual attention in virtual reality.</a> </P>
<h2> preprints </h2>
<hr class="striped-border">
<P class="blocktext2"> <a href="https://osf.io/preprints/psyarxiv/k8b9s?view_only="> <img src="images/cvpr_figure.png" alt="" class="left"> </a><B> <a style="text-decoration: none;" href=https://osf.io/preprints/psyarxiv/k8b9s?view_only=">Objects in focus: How object spatial probability underscores eye movement patterns</B> </a> <br>
<I> Elizabeth H. Hall, Zoe Loh, John M. Henderson</I> <br>
PsyArXiv, 2024. <a style="text-decoration: none;" href="https://github.com/ehhall/objects-in-focus"> Github </a> <br>
A paper documenting our process for segmenting 2.8k objects across 100 real-world scenes! We share our thoughts on the "best way" to segment objects, along with analyses showing that image size and perspective have a big impact on the distribution of fixations. Full tutorial coming soon! </P>
<P class="blocktext2"> <a href="https://osf.io/preprints/psyarxiv/72np4?view_only="> <img src="images/grandTheft.png" alt="" class="left"> </a><B> <a style="text-decoration: none;" href=https://osf.io/preprints/psyarxiv/72np4?view_only=">Eye gaze during route learning in a virtual task</B> </a> <br>
<I> Martha Forloines*, Elizabeth H. Hall*, John M. Henderson, Joy J. Geng</I> <br>
*co-first authors <br>
PsyArXiv, 2024. <br>
We tracked eye movements while viewers studied an avatar navigating a route in Grand Theft Auto V. We trained a classifier to determine whether viewers had learned the route in natural or scrambled order. Viewers of the natural-order route preferred to attend to the path ahead, while viewers of the scrambled route focused more on landmark buildings and signs. </P>
<h2> publications </h2>
<hr class="striped-border">
<P class="blocktext2"> <a href="https://rdcu.be/dHWZk "> <img src="images/boundary.png" alt="boundary" class="left"> </a><B> <a style="text-decoration: none;" href="https://rdcu.be/dHWZk ">Object-based attention during scene perception elicits boundary contraction in memory</B> </a> <br>
<I> Elizabeth H. Hall, Joy J. Geng</I> <br>
Memory & Cognition, 2024. <a style="text-decoration: none;" href="https://github.com/ehhall/object-based-memories"> Code </a> <a style="text-decoration: none;" href="https://osf.io/mkas7/"> Data </a> <br>
We found that attending to small objects in scenes led to significantly more boundary contraction in memory, even when other image properties were kept constant. This supports the idea that extension/contraction in memory may reflect a bias towards an optimal viewing distance!</P>
<P class="blocktext2"> <a href="https://link.springer.com/article/10.3758/s13423-023-02286-2"> <img src="images/candace.jpg" alt="candace" class="left"> </a><B> <a style="text-decoration: none;" href="https://link.springer.com/article/10.3758/s13423-023-02286-2">Objects are selected for attention based upon meaning during passive scene viewing</B> </a> <br>
<I> Candace Peacock*, Elizabeth H. Hall*, John M. Henderson</I> <br>
*co-first authors <br>
Psychonomic Bulletin & Review, 2023. <a style="text-decoration: none;" href="https://osf.io/preprints/psyarxiv/fqtvx"> Preprint </a> <a style="text-decoration: none;" href="https://osf.io/egry6/"> Stimuli </a> <br>
We asked whether fixations are more likely to land on high-meaning objects in scenes, and found that fixations are directed to high-meaning objects more often than to low-meaning objects, regardless of object salience.</P>
<P class="blocktext2"> <a href="https://www.nature.com/articles/s41597-022-01695-7"> <img src="images/dMRI.JPG" alt="dMRI" class="left"> </a><B> <a style="text-decoration: none;" href="https://www.nature.com/articles/s41597-022-01695-7">An analysis-ready and quality controlled resource for pediatric brain white-matter research </B> </a> <br>
<I> Adam Richie-Halford, Matthew Cieslak, Fibr Community Science Consortium </I> <br>
Scientific Data, 2022. <br>
An open-source dataset on brain white matter from 2,700 New York City-area children. I helped score the quality of diffusion MRI data, along with over 130 other community scientists. </P>
<P class="blocktext2"> <a href="https://link.springer.com/article/10.1007/s00426-022-01694-8"> <img src="images/zoe.JPG" alt="zoe" class="left"> </a><B> <a style="text-decoration: none;" href="https://link.springer.com/article/10.1007/s00426-022-01694-8">Working memory control predicts fixation duration in scene-viewing
</B> </a> <br>
<I> Zoe Loh*, Elizabeth H. Hall, Deborah A. Cronin, John M. Henderson</I> <br>
*undergrad supervised by me <br>
Psychological Research, 2022. <br>
We fit scene-viewing fixation durations to an ex-Gaussian distribution to look at individual differences in memory. We found that the worse a participant's working memory control was, the more likely they were to produce some very long fixations when encoding scene detail into memory.</P>
<P class="blocktext2"> <a href="https://www.tandfonline.com/doi/full/10.1080/09658211.2021.2010761#.YcDAavk9X3E.twitter"> <img src="images/multicat.jpg" alt="multicat" class="left"> </a><B> <a style="text-decoration: none;" href="https://www.tandfonline.com/doi/full/10.1080/09658211.2021.2010761#.YcDAavk9X3E.twitter">Highly similar and competing visual scenes lead to diminished object but not spatial detail in memory drawings </B> </a> <br>
<I> Elizabeth H. Hall, Wilma A. Bainbridge, Chris I. Baker </I> <br>
Memory, 2021. <a style="text-decoration: none;" href="https://psyarxiv.com/2az8x"> Preprint </a> <a style="text-decoration: none;" href="https://osf.io/syvjr/"> Data </a> <br>
We investigated the detail and errors in participants' memories when they had to recall multiple similar scenes. We found that memory drawings of "competing" scenes have diminished object detail, but are surprisingly still fairly spatially accurate.</P>
<P class="blocktext2"> <a href="https://academic.oup.com/cercor/advance-article/doi/10.1093/cercor/bhaa329/6025502"> <img src="images/hipporecall.jpg" alt="hipporecall" class="left"> </a><B> <a style="text-decoration: none;" href="https://academic.oup.com/cercor/advance-article/doi/10.1093/cercor/bhaa329/6025502">Distinct representational structure and localization for visual encoding and recall during visual imagery </B> </a> <br>
<I> Wilma A. Bainbridge, Elizabeth H. Hall, Chris I. Baker </I> <br>
Cerebral Cortex, 2020. <br>
We found that representations of memory content during recall show key differences from encoding in granularity of detail and spatial distribution. We also replicated the finding that brain regions involved in scene memory are anterior to those involved in scene perception. See this <a style="text-decoration: none;" href="https://www.quantamagazine.org/new-map-of-meaning-in-the-brain-changes-ideas-about-memory-20220208/">article</a> from Quanta for more on this idea! </P>
<P class="blocktext2"> <a href="https://www.frontiersin.org/articles/10.3389/fpsyg.2019.02915/full"> <img src="images/frontiers.jpg" alt="frontiers" class="left"> </a><B> <a style="text-decoration: none;" href="https://www.frontiersin.org/articles/10.3389/fpsyg.2019.02915/full">Eye Movements in Real-World Scene Photographs: General Characteristics and Effects of Viewing Task</B> </a> <br>
<I> Deborah Cronin, Elizabeth Hall, Jessica Goold, Taylor Hayes, John Henderson </I> <br>
Frontiers in Psychology, 2020. <br>
We examined effects of viewing task on when and where the eyes move in real-world scenes during memorization and aesthetic judgment tasks. Distribution-level analyses revealed significant task-driven differences in eye movement behavior. </P>
<P class="blocktext2"> <a href="https://www.nature.com/articles/s41467-018-07830-6"> <img src="images/memrecall.png" alt="memrecall" class="left"> </a><B> <a style="text-decoration: none;" href="https://www.nature.com/articles/s41467-018-07830-6">Drawings of real-world scenes during free recall reveal detailed object and spatial information in memory </B> </a> <br>
<I> Wilma A. Bainbridge, Elizabeth H. Hall, Chris I. Baker </I> <br>
Nature Communications, 2019.
<a style="text-decoration: none;" href="https://www.wilmabainbridge.com/memorydrawings.html"> Data </a> <br>
Participants studied 30 scenes and then drew as many as they could from memory, in as much detail as possible. The resulting memory-based drawings were scored by thousands of online observers, revealing numerous objects, few memory intrusions, and precise spatial information. See this <a style="text-decoration: none;" href="https://www.scientificamerican.com/article/our-memory-is-even-better-than-experts-thought/#:~:text=In%20a%20recent%20study%20at,be%20only%2040%20percent%20correct.">article</a> from Scientific American for more! </P>
</body>
</html>