
Commit

Merge branch 'master' into tag_aligner_alab
rennis250 authored Jun 5, 2024
2 parents 285086e + 30920c4 commit 8814136
Showing 18 changed files with 160 additions and 11 deletions.
1 change: 1 addition & 0 deletions alpha-lab/.vitepress/config.mts
@@ -14,6 +14,7 @@ let theme_config_additions = {
{ text: "Map Gaze Onto a 3D Model of an Environment", link: "/nerfs/" },
{ text: "Map Gaze Into a User-Supplied 3D Model", link: "/tag-aligner/" },
{ text: "Map Gaze Onto Facial Landmarks", link: "/gaze-on-face/" },
{ text: "Map Gaze Onto Website AOIs", link: "/web-aois/" },
],
},
{
Binary file added alpha-lab/public/web-aoi.webp
Binary file added alpha-lab/web-aois/heatmap-gazes-overlaid.png
Binary file added alpha-lab/web-aois/heatmap_output.png
115 changes: 115 additions & 0 deletions alpha-lab/web-aois/index.md
@@ -0,0 +1,115 @@
---
title: Map Gaze Onto Website AOIs
description: "Define areas of interest on a website and map gaze onto them using our Web-AOI tool. "
permalink: /alpha-lab/web-aois
meta:
- name: twitter:card
content: player
- name: twitter:image
content: "https://i.ytimg.com/vi/1yJfhtdJoMA/maxresdefault.jpg"
- name: twitter:player
content: "https://www.youtube.com/embed/DzK055NbRPM"
- name: twitter:width
content: "1280"
- name: twitter:height
content: "720"
- property: og:image
content: "https://i.ytimg.com/vi/1yJfhtdJoMA/maxresdefault.jpg"
tags: [Neon]
---

<script setup>
import TagLinks from '@components/TagLinks.vue'
</script>

# Map Gaze Onto Website AOIs

<TagLinks :tags="$frontmatter.tags" />

<Youtube src="DzK055NbRPM"/>

::: tip
Want to see your website through your users' eyes? Discover what really captures their attention as they scroll using our website AOI Tool + Neon eye tracking!
:::

## Understanding Gaze Patterns in Web Interaction

Understanding where people focus their attention on websites is key to optimizing user interfaces and improving user experiences, whether individuals are making decisions about online shopping or simply browsing content. This knowledge is also useful for educational technology, commonly referred to as EdTech. Consider interactive displays in classrooms or online learning platforms—grasping how users interact with these tools is fundamental for enhancing learning outcomes.

By gathering data on gaze patterns during these interactions, designs can be tailored to prioritize user needs. This results in websites that are easier to navigate, applications with better user interfaces, online stores that offer more intuitive shopping experiences, and EdTech applications that foster more effective learning environments.

In this guide, we will introduce a desktop application featuring an integrated browser that overlays AprilTag markers onto webpages. This tool integrates with Neon, enabling you to record and visualize gaze data mapped onto areas of interest on a website.

## Introducing Web Gaze Mapping

Pupil Cloud offers powerful tools like the [Marker Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/marker-mapper/) and [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) enrichments, which enable users to map gaze onto areas of interest. However, they do not provide a turnkey solution for defining AOIs on a website and, importantly, maintaining gaze mapping during scrolling—a behavior typical of regular website usage.

By following this guide, you can easily define AOIs on websites of your choice and record Neon data. Importantly, with this tool, the AOIs are not lost as you scroll. Afterward, you'll receive gaze mapping for each AOI, including individual AOI heatmaps and a full-page heatmap.

### How Does This Tool Work?

We leverage [Playwright](https://playwright.dev/), an open-source automation library for browser testing and web scraping, alongside AprilTags automatically placed on the webpage within the browser interface. Through Playwright, we generate AOIs using selectable web elements, while the AprilTags facilitate the real-time transformation of gaze data from *scene-camera* to *screen-based* coordinates. For a deeper understanding of this transformation, refer to [the documentation](https://docs.pupil-labs.com/alpha-lab/gaze-contingency-assistive/#how-to-use-a-head-mounted-eye-tracker-for-screen-based-interaction).
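To make the first part of that concrete, here is a minimal Playwright sketch (illustrative only, not the web-aois code itself) that turns a CSS selector into an AOI rectangle; the URL and selector are placeholders.

```python
# Illustrative sketch: derive an AOI rectangle from a selectable web element.
# The URL and CSS selector are placeholders, not part of the web-aois tool.
from playwright.sync_api import sync_playwright

def aoi_from_selector(url: str, selector: str) -> dict | None:
    """Return the {x, y, width, height} box of the first element matching `selector`."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        # Coordinates are relative to the browser viewport, in CSS pixels.
        box = page.locator(selector).first.bounding_box()
        browser.close()
    return box

if __name__ == "__main__":
    print(aoi_from_selector("https://example.com", "h1"))
```

The tool combines element boxes like this with the AprilTag-based transformation, so that gaze recorded in scene-camera coordinates ends up in the same screen-based coordinates as the AOIs.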

## Steps To Recreate

Explore the [GitHub repository](https://github.com/pupil-labs/web-aois) and follow these simple steps:

1. Define Areas of Interest (AOIs) on your selected website.
2. Put on Neon and start collecting data (an optional sketch for starting the recording programmatically follows this list).
3. Process your recording to generate CSV files with gaze mapped onto AOI coordinates.
4. Visualize the data with customized heatmaps.
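Step 2 can be done directly from the Neon Companion app. If you prefer to start and stop the recording programmatically, a minimal sketch using the separate `pupil-labs-realtime-api` package (an optional convenience, not part of the web-aois tool) could look like this:

```python
# Optional: control the Neon recording over the local network.
# Requires the separate pupil-labs-realtime-api package; not part of the web-aois tool.
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device(max_search_duration_seconds=10)
recording_id = device.recording_start()
print(f"Recording started: {recording_id}")

input("Browse the website with your AOIs, then press Enter to stop the recording...")

device.recording_stop_and_save()
device.close()
```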

## Visualize Mapped Gaze Data

After running the code, new files will be generated.

- For every AOI, you will get `png` files with transparent and overlaid heatmaps, plus CSV files with the gaze data mapped onto the AOI coordinates.
- A transparent and an overlaid heatmap are also provided for the full page (see below), along with a `gaze.csv` file containing the gaze data mapped onto the full webpage.

<div style="margin-top: 20px;"></div>

<div class="grid grid-cols-2 gap-4">
<div class="image-column">
<img src="./heatmap_output.png" alt="Heatmap generated for one AOI" style="height: 300px; object-fit: cover;">
</div>
<div class="image-column">
<img src="./heatmap-gazes-overlaid.png" alt="Heatmap generated over the page" style="height: 300px; object-fit: cover;">
</div>
</div>
<font size=2>Heatmaps generated for one AOI (left) and for the entire page (right).</font>

You can find a detailed list of the outputs [in this section of our GitHub repository](https://github.com/pupil-labs/web-aois?tab=readme-ov-file#output-files).
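If you would like to re-render or restyle a heatmap yourself rather than using the images the tool writes out, a minimal sketch could look like the following. The column names `x` and `y` (page-pixel coordinates) and the page dimensions are assumptions for illustration; check the output description linked above for the actual schema.

```python
# Sketch: render a smoothed gaze heatmap from a mapped gaze CSV.
# Column names and page dimensions are assumptions; adapt them to the real output files.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

gaze = pd.read_csv("gaze.csv")          # mapped gaze in page-pixel coordinates
page_w, page_h = 1280, 4000             # placeholder page size in pixels

# Bin gaze samples into a 2D grid (rows = y, columns = x), then smooth it.
hist, _, _ = np.histogram2d(
    gaze["y"], gaze["x"],
    bins=(page_h // 10, page_w // 10),
    range=[[0, page_h], [0, page_w]],
)
heat = gaussian_filter(hist, sigma=3)

plt.imshow(heat, extent=(0, page_w, page_h, 0), cmap="inferno", alpha=0.8)
plt.axis("off")
plt.savefig("custom_heatmap.png", bbox_inches="tight", dpi=150)
```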

<!-- ![Heatmap generated from Web AOI tool](./heatmap_output.png) -->

This data can be used to compute outcome metrics like time to first gaze (i.e., how long it took the user to gaze at each AOI for the first time) or dwell time/total gaze duration (i.e., the sum of gaze sample durations, where each duration is the period between the timestamps of consecutive gaze samples).
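As a rough illustration, these two metrics could be computed from a per-AOI CSV along the following lines; the `timestamp [ns]` column name, the file name, and the recording start value are assumptions rather than the tool's guaranteed schema.

```python
# Sketch: time to first gaze and dwell time for one AOI.
# Column name, file name, and recording start are assumptions for illustration.
import pandas as pd

aoi = pd.read_csv("aoi_0.csv")    # gaze samples mapped onto a single AOI
recording_start_ns = 0            # placeholder; use the recording's start timestamp

ts = aoi["timestamp [ns]"].sort_values()
time_to_first_gaze_s = (ts.iloc[0] - recording_start_ns) / 1e9

# Dwell time: sum the gaps between consecutive gaze samples inside the AOI,
# skipping long gaps where gaze had clearly left the AOI.
gaps_s = ts.diff().dropna() / 1e9
dwell_time_s = gaps_s[gaps_s < 0.1].sum()

print(f"time to first gaze: {time_to_first_gaze_s:.2f} s")
print(f"dwell time: {dwell_time_s:.2f} s")
```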

::: tip
Need guidance in calculating even more metrics for your website AOIs? Reach out to us [by email](mailto:[email protected]), on our [Discord server](https://pupil-labs.com/chat/), or visit our [Support Page](https://pupil-labs.com/products/support/) for dedicated support options.
:::

<style scoped>
.mcontainer{
display: flex;
flex-wrap: wrap;
}
.col-mcontainer{
flex: 50%;
padding: 0 4px;
}
@media screen and (min-width: 1025px) and (max-width: 1200px) {
.col-mcontainer{
flex: 100%;
}
}
@media screen and (max-width: 800px) {
.col-mcontainer{
flex: 50%;
}
}
@media screen and (max-width: 400px) {
.col-mcontainer{
flex: 100%;
}
}
</style>
4 changes: 2 additions & 2 deletions neon/.vitepress/config.mts
@@ -103,8 +103,8 @@ let theme_config_additions = {
link: "/data-collection/offset-correction/",
},
{
text: "Backlight Compensation",
link: "/data-collection/backlight-compensation/",
text: "Scene Camera Exposure",
link: "/data-collection/scene-camera-exposure/",
},
],
},
2 changes: 0 additions & 2 deletions neon/data-collection/backlight-compensation/index.md

This file was deleted.

8 changes: 4 additions & 4 deletions neon/data-collection/data-streams/index.md
@@ -18,11 +18,11 @@ The Neon Companion app can provide gaze data in real-time at up to 200 Hz. Gaze

![Gaze](./gaze.jpg)

The achieved framerate can vary based on what Companion device is used and environmental conditions. On the OnePlus 10, the full 200 Hz can generally be achieved outside of especially hot environments. On the OnePlus 8, the framerate typically drops to ~120 Hz within a few minutes of starting a recording. Other apps running simultaneously on the phone may decrease the framerate.
The achieved framerate can vary based on what Companion device is used and environmental conditions. On the OnePlus 10 and Motorola Edge 40 Pro, the full 200 Hz can generally be achieved outside of especially hot environments. On the OnePlus 8, the framerate typically drops to ~120 Hz within a few minutes of starting a recording. Other apps running simultaneously on the phone may decrease the framerate.

After a recording is uploaded to Pupil Cloud, gaze data is automatically re-computed at the full 200 Hz framerat and can be downloaded from there.
After a recording is uploaded to Pupil Cloud, gaze data is automatically re-computed at the full 200 Hz framerate and can be downloaded from there.

The gaze estimation algorithm is based on end-2-end deep learning and provides gaze data robustly without requiring a calibration. We are currently working on a white paper that thoroughly evaluated the algorithm and will link it here once it is published.
The gaze estimation algorithm is based on end-2-end deep learning and provides gaze data robustly without requiring a calibration. You can find a high-level description as well as a thorough evaluation of the accuracy and robustness of the algorithm in our [white paper](https://zenodo.org/doi/10.5281/zenodo.10420388).

## Fixations & Saccades

@@ -31,7 +31,7 @@ The two primary types of eye movements exhibited by the visual system are fixati

![Fixations](./fixations.jpg)

Fixations and saccades are calculated automatically in Pupil Cloud after uploading a recording and are included in the recording downloads. The deployed fixation detection algorithm was specifically designed for head-mounted eye trackers and offers increased robustness in the presence of head movements. Especially movements due to vestibulo-ocular reflex are compensated for, which is not the case for most other fixation detection algorithms. You can learn more about it in the [Pupil Labs fixation detector whitepaper](https://docs.google.com/document/d/1dTL1VS83F-W1AZfbG-EogYwq2PFk463HqwGgshK3yJE/export?format=pdf) and in our [publication](https://link.springer.com/article/10.3758/s13428-024-02360-0) in *Behavior Research Methods* discussing fixation detection strategies.
Fixations and saccades are calculated automatically in Pupil Cloud after uploading a recording and are included in the recording downloads. The deployed fixation detection algorithm was specifically designed for head-mounted eye trackers and offers increased robustness in the presence of head movements. Especially movements due to vestibulo-ocular reflex are compensated for, which is not the case for most other fixation detection algorithms. You can learn more about it in the [Pupil Labs fixation detector whitepaper](https://docs.google.com/document/d/1CZnjyg4P83QSkfHi_bjwSceWCTWvlVtbGWtuyajv5Jc/export?format=pdf) and in our [publication](https://link.springer.com/article/10.3758/s13428-024-02360-0) in *Behavior Research Methods* discussing fixation detection strategies.

We detect saccades based on the fixation results, considering the gaps between fixations to be saccades. Note, that this assumption is only true in the absence of smooth pursuit eye movements. Additionally, the fixation detector does not compensate for blinks, which can cause a break in a fixation and thus introduce a false saccade.

6 binary files not shown.
32 changes: 32 additions & 0 deletions neon/data-collection/scene-camera-exposure/index.md
@@ -0,0 +1,32 @@
# Scene Camera Exposure
The [scene camera’s](https://docs.pupil-labs.com/neon/data-collection/data-streams/#scene-video) exposure can be adjusted to improve image quality in different lighting conditions. There are four modes:

- **Manual:** This mode lets you set the exposure time manually.
- **Automatic:** `Highlights`, `Balanced`, and `Shadows` automatically adjust exposure according to the surrounding lighting.

::: tip
The mode you choose should depend on the lighting conditions in your environment. The images below provide some
examples and important considerations.
:::

## Changing Exposure Modes
From the home screen of the Neon Companion app, tap
the [Scene and Eye Camera preview](https://docs.pupil-labs.com/neon/data-collection/first-recording/#_4-open-the-live-preview),
and then select `Balanced` to reveal all four modes.

## Manual Exposure Mode
Allows you to set the exposure time between 1 ms and 1000 ms.

::: tip
Exposure duration is inversely related to the camera frame rate. Exposure values above 33 ms will reduce the scene camera frame rate below 30 fps.
:::
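The reason is that the frame period can never be shorter than the exposure time, so the achievable frame rate is capped at 1000 divided by the exposure in milliseconds; a quick back-of-the-envelope sketch (not app code) of this bound:

```python
# The frame period can never be shorter than the exposure time,
# so the achievable scene-camera frame rate is capped at 1000 / exposure_ms.
def max_scene_fps(exposure_ms: float) -> float:
    return 1000.0 / exposure_ms

print(max_scene_fps(33))    # ~30 fps is still achievable
print(max_scene_fps(100))   # at most 10 fps
```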

## Automatic Exposure Modes
`Highlights` - optimizes the exposure to capture bright areas in the environment, while potentially underexposing dark areas.
![This mode optimizes the exposure to capture bright areas in the environment, while potentially underexposing dark areas.](Highlight.webp)

`Balanced` - optimizes the exposure to capture brighter and darker areas equally.
![This mode optimizes the exposure to capture brighter and darker areas in the environment equally.](./Balance.webp)

`Shadows` - optimizes the exposure to capture darker areas in the environment, while potentially overexposing brighter areas.
![This mode optimizes the exposure to capture darker areas in the environment, while potentially overexposing bright areas.](Shadow.webp)
2 changes: 1 addition & 1 deletion neon/hardware/compatible-devices/index.md
@@ -1,7 +1,7 @@
# Companion Device
The Companion device is a flagship Android smartphone. It is a regular phone that is not customized or modified in any way. To ensure maximum stability and performance we can only support a small number of carefully selected and tested models. The Neon Companion app is tuned to work with these particular models as we require full control over various low-level functions of the hardware.

The supported models are: OnePlus 8, OnePlus 8T, OnePlus 10 Pro, and Motorola Edge 40 Pro. Currently, Neon ships with a Motorola Edge 40 Pro.
The supported models are: OnePlus 8, OnePlus 8T, OnePlus 10 Pro, and Motorola Edge 40 Pro. Currently, Neon ships with a Motorola Edge 40 Pro. We highly recommend the Edge 40 Pro, as it offers the best performance, endurance, and stability.

If you want to replace or add an extra Companion device you can purchase it [directly from us](https://pupil-labs.com/products/neon) or from any other distributor. The Neon Companion app is free and can be downloaded from the [Play Store](https://play.google.com/store/apps/details?id=com.pupillabs.neoncomp).

4 changes: 4 additions & 0 deletions neon/pupil-cloud/enrichments/reference-image-mapper/index.md
@@ -177,6 +177,10 @@ This file contains all the mapped gaze data from all sections.
| **fixation id** | If this gaze sample belongs to a fixation event, this is the corresponding id of the fixation. Otherwise, this field is empty. |
| **blink id** | If this gaze sample belongs to a blink event, this is the corresponding id of the blink. Otherwise, this field is empty. |

::: info
This CSV file only contains data points where the reference image has been localised in the scene. Looking for all the gaze points? Check [this file](/data-collection/data-format/#gaze-csv).
:::

### fixations.csv

This file contains fixation events detected in the gaze data stream and mapped to the reference image.
3 changes: 1 addition & 2 deletions neon/pupil-cloud/visualizations/areas-of-interest/index.md
@@ -59,6 +59,5 @@ This file contains standard fixation and gaze metrics on AOIs.
| **average&nbsp;fixation&nbsp;duration&nbsp;[ms]** | Average fixation duration for the corresponding area of interest in milliseconds. |
| **total fixations** | Total number of fixations for the corresponding area of interest in milliseconds. |
| **time&nbsp;to&nbsp;first&nbsp;fixation&nbsp;[ms]** | Average time in milliseconds until the corresponding area of interest gets fixated on for the first time in a recording. |
| **time&nbsp;to&nbsp;first&nbsp;gaze&nbsp;[ms]** | Average time in milliseconds until the corresponding area of interest gets gazed at for the first time in a recording. |
| **total&nbsp;fixation&nbsp;duration&nbsp;[ms]** | Total fixation duration for the corresponding area of interest in milliseconds. |
| **total&nbsp;gaze&nbsp;duration&nbsp;[ms]** | Total fixation duration for the corresponding area of interest in milliseconds. |
