
support very large stimulus sets #13

Open
neuromusic opened this issue Feb 9, 2015 · 0 comments
Comments

@neuromusic (Member)

currently, there are two options for stimulus generation for behavior: "block" and "random". in both cases, a static stim file is read, the full stimulus set is loaded into memory, and the stimuli are presented either in order ("block") or shuffled and then presented ("random"). this is suboptimal for large stimulus sets (over 1000 stimuli), which take a very long time to load, and it fails entirely for very large stimulus sets (over 2000 stimuli).

some experiments may generate a large number of unique stimuli based upon a small number of experimental parameters (e.g. probabilistic sequence discriminations, psychophysical discriminations).

solutions:

  1. deal with it. generate blocks externally & only generate blocks smaller than MAXSTIMS (however, fixing #11, "fix stim code collision for large stimulus sets", will be important)
  2. add a new "static block" stimulus selection protocol. rather than reading in an entire file, this protocol would simply hold onto the file handle, then read one line at the beginning of each trial. the block would terminate when we hit the end of the file
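the streaming protocol in option 2 could be sketched roughly like this (class and method names here are hypothetical, not an existing API in this repo; it assumes a stim file with one stimulus identifier per line):

```python
class StaticBlock:
    """Yield stimuli one trial at a time without loading the whole file.

    Holds onto an open file handle and reads a single line at the
    start of each trial, so memory use stays constant no matter how
    many stimuli the file contains.
    """

    def __init__(self, stim_filename):
        # keep the handle open instead of reading the entire file
        self._handle = open(stim_filename, "r")

    def next_stim(self):
        """Read the next stimulus line; return None at end of block (EOF)."""
        line = self._handle.readline()
        if not line:  # readline() returns "" at end of file
            self._handle.close()
            return None  # signals that the block should terminate
        return line.strip()
```

a trial loop would then call `next_stim()` once per trial and end the block when it returns `None`.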