
To do: recode all looking/eye-tracking #22

Open
lottiegasp opened this issue Nov 11, 2020 · 6 comments
Comments

lottiegasp (Collaborator) commented Nov 11, 2020

In spec.yaml and the datasets, change all response_mode = eye-tracking entries to gaze_manual or gaze_automatic.
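For concreteness, the recode for a single dataset might look something like the sketch below. The filename and the coding_method column used to split manual from automatic coding are purely hypothetical; in practice each curator knows how their looking data were coded.

```python
import pandas as pd

# Illustrative only: load one dataset's coding sheet (hypothetical filename).
df = pd.read_csv("gaze_following_video.csv")

# Hypothetical splitting rule: a coding_method column records whether looks
# were hand-coded or measured by an eye-tracker. Curators would supply this.
is_et = df["response_mode"].eq("eye-tracking")
is_auto = df["coding_method"].eq("eyetracker")

df.loc[is_et & is_auto, "response_mode"] = "gaze_automatic"
df.loc[is_et & ~is_auto, "response_mode"] = "gaze_manual"

df.to_csv("gaze_following_video_recoded.csv", index=False)
```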

lottiegasp self-assigned this Nov 11, 2020
christinabergmann (Collaborator)

Hi, this is an issue in the video-based gaze-following dataset contributed by @priyasilverstein. So as not to lose the coded information, we decided to implement this distinction already. Will it still pass validation?

lottiegasp (Collaborator, Author) commented Jan 21, 2021

@christinabergmann Perhaps we could add a column called response_mode2 that is a copy of response_mode, and in that column use the levels gaze_manual and gaze_automatic, while the original response_mode column keeps eye-tracking so that it passes validation.

See columns J and K in the langdiscrim dataset

Eventually I will go through all the datasets and do the same. Once they are all complete, I will delete the response_mode column, rename response_mode2 to response_mode, and change spec.yaml so the validator accepts gaze_manual and gaze_automatic but not eye-tracking (see the sketch below).
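For what it is worth, the final swap for one dataset could look roughly like this (made-up filename, column names as in the plan above); the spec.yaml edit itself would be made by hand in the repo.

```python
import pandas as pd

# Sketch of the eventual migration for a single dataset (hypothetical filename).
df = pd.read_csv("langdiscrim.csv")

# Drop the placeholder column and promote the interim one.
df = df.drop(columns=["response_mode"]).rename(columns={"response_mode2": "response_mode"})

# Sanity check: no eye-tracking entries should remain after the swap.
assert not df["response_mode"].eq("eye-tracking").any()

df.to_csv("langdiscrim.csv", index=False)

# spec.yaml would then list gaze_manual and gaze_automatic (and drop
# eye-tracking) among the allowed response_mode values, so the validator
# rejects the old level going forward.
```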

Does that sound okay?

lottiegasp (Collaborator, Author) commented Jan 21, 2021

Datasets to switch from response_mode = eye-tracking to manual, or to gaze_manual and gaze_automatic:

  • Abstract rule learning
  • Categorization bias
  • Cross-sit word learning
  • Familiar word rec
  • Function word seg
  • Gaze following (live)
  • Gaze following (video)
  • IDS pref
  • Label advantage concept learn
  • Lang discrim and pref
  • Mispronunci sensi
  • Mutual exclus
  • Natural speech pref
  • Online word rec
  • Phonotac learn
  • Point and vocab current
  • Point and vocab longi
  • Prosocial agents
  • Simple arithm competen
  • Sound symbolism
  • Statistical sound cat learn
  • Statistical word seg
  • Switch task
  • Symbolic play
  • Syntactic boot
  • Video deficit
  • Vowel discrim (nat)
  • Vowel discrim (non-nat)
  • Word seg (behav)
  • Word seg (neuro)

lottiegasp (Collaborator, Author)

PS: Does it seem reasonable for me to contact all the curators and ask them to make these changes to gaze_automatic and gaze_manual as far as they have capacity, and then I will finish off the rest for them? From my knowledge of my own dataset, I imagine many curators will already know the answer off the top of their heads, so it would save a lot of time.

shotsuji (Collaborator) commented Jan 21, 2021 via email

priyasilverstein commented Jan 21, 2021 via email

lottiegasp added the enhancement (New feature or request) label on Feb 16, 2023