Update alpha-lab/event-automation-gpt/index.md
Co-authored-by: Will <[email protected]>
N-M-T and willpatera authored Nov 15, 2024
1 parent 8570896 commit d1471b0
Showing 1 changed file with 1 addition and 1 deletion.
alpha-lab/event-automation-gpt/index.md: 1 addition & 1 deletion
@@ -39,7 +39,7 @@ demonstrate how to build your own GPT-based personal annotation assistant!
 In eye tracking research, analyzing recordings and identifying key moments—such as when users interact with specific
 objects—has typically required a tedious, frame-by-frame review. This manual process is time-consuming and limits scalability.

-In this article, we explore how automation can overcome these challenges. Using a Large Multimodal Model (GPT-4o), we
+In this article, we explore how automation can overcome these challenges. Using a Large Multimodal Model (OpenAI's GPT-4o), we
 experiment with prompts to detect specific actions, such as reaching for an object, or what features of the environment
 were being gazed at, and automatically add the respective annotations to [Pupil Cloud](https://pupil-labs.com/products/cloud)
 recordings via the Pupil Cloud API. While still in its early stages, this approach shows promise in making the annotation
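For context, the paragraph edited above describes a two-step pipeline: send scene-camera frames to GPT-4o with a prompt asking whether a target action is visible, then write matching moments back to the recording as Pupil Cloud events. A minimal sketch of that loop follows. The OpenAI call uses the public chat-completions API, but the Pupil Cloud endpoint path, auth header, and payload fields are assumptions and should be checked against the Pupil Cloud API documentation.

```python
import base64

import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "This is one frame from an eye-tracking recording. "
    "Answer only YES or NO: is the wearer reaching for an object?"
)


def frame_matches(jpeg_bytes: bytes) -> bool:
    """Ask GPT-4o whether a single scene-camera frame shows the target action."""
    b64 = base64.b64encode(jpeg_bytes).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")


def add_event(workspace_id: str, recording_id: str, name: str,
              offset_s: float, api_token: str) -> None:
    """Post an annotation event to a Pupil Cloud recording.

    NOTE: the URL, the 'api-key' header, and the JSON fields below are
    assumptions for illustration, not confirmed Pupil Cloud API details.
    """
    url = (
        f"https://api.cloud.pupil-labs.com/v2/workspaces/{workspace_id}"
        f"/recordings/{recording_id}/events"
    )
    resp = requests.post(
        url,
        headers={"api-key": api_token},
        json={"name": name, "offset_s": offset_s},
    )
    resp.raise_for_status()
```

In practice one would sample frames at a coarse interval (for example, once per second) rather than sending every frame, which keeps API costs manageable and fits the article's framing of this as a promising but early-stage approach.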
