demo version
yi-chia-chen committed Jul 17, 2024
1 parent 13c384c commit cc0bb3e
Showing 3 changed files with 59 additions and 201 deletions.
56 changes: 8 additions & 48 deletions index.html
@@ -21,65 +21,33 @@
<body>
<noscript>It seems that JavaScript is disabled in your browser. Please go to settings and enable JavaScript for this experiment.</noscript>

<div class='page-box' id='instr-box'>
<div class='page-box fixed-box' id='instr-box'>
<p id='instr-text'></p>
<img id='instr-img' src='media/blank.jpg' alt='Image Error: Please contact the experimenter at [email protected] with the phrase "IMG-ERROR".'></img>
<img id='instr-img' src='media/blank.jpg' alt='Image Error: Please contact the experimenter at XXX with the phrase "IMG-ERROR".'></img>
<audio id='sound-test-aud'>
<source src='media/metronome_80bpm_181beats_centered.mp3' preload='auto'/>
Audio Error: Please contact [email protected] and include your sona ID and the phrase "AUD-ERROR" to receive your credit.
Audio Error: Please contact XXX and include your sona ID and the phrase "AUD-ERROR" to receive your credit.
</audio>
<video class='instr-vid' id='game-demo-vid' preload='auto'>
<source src='media/game_demo.mp4' />
Video Error: Please contact [email protected] and include your sona ID and the phrase "VID-ERROR" to receive your credit.
Video Error: Please contact XXX and include your sona ID and the phrase "VID-ERROR" to receive your credit.
</video>
<video class='instr-vid' id='tempo-demo-vid' preload='auto'>
<source src='media/tempo_demo.mp4' />
Video Error: Please contact [email protected] and include your sona ID and the phrase "VID-ERROR" to receive your credit.
Video Error: Please contact XXX and include your sona ID and the phrase "VID-ERROR" to receive your credit.
</video>
<button class='button instr-button' id='sound-test-play-button'>PLAY</button>
<button class='button instr-button' id='again-button'>AGAIN</button>
<button class='button instr-button' id='next-button'>NEXT</button>
</div>
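The only functional change to the instruction box above is the added 'fixed-box' class. The stylesheet is not part of this diff, so the rule below is only a minimal sketch of what such a class might do, assuming it pins the box in the viewport; the selector comes from the diff, but every property here is an assumption.

<style>
  /* Assumed example only: keep the instruction box centered and fixed in the
     viewport so it stays in place while the rest of the page scrolls. */
  .fixed-box {
    position: fixed;
    top: 50%;
    left: 50%;
    transform: translate(-50%, -50%);
  }
</style>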

<!--======== INFORMED CONSENT FORM ========-->
<div class='page-box' id='consent-box'>
<p class='consent-instr'>To conduct this study, we need to include this information sheet:</p>
<div id='consent-text-box'>
<strong>INFORMATION SHEET</strong><br />
<strong>Action perception and prediction of single and multiple actor sequences</strong><br /><br />
You are invited to participate in a research study on human vision, conducted by Hongjing Lu, PhD, of the Dept. of Psychology at UCLA. You were selected as a possible participant in this study because you have normal or corrected vision.<br /><br />
<strong>Purpose of Study</strong><br />
The purpose of the study is to understand how we use visual information to detect, recognize and predict human action and action interactions.<br /><br />
<strong>Procedures Involved in Study</strong><br />
You will be presented with visual stimuli either involving human actions (e.g., walking, running, or two people interacting with each other) or avatar movements (e.g., a boat) on a computer screen. Your cursor movement positions during the experiment will be tracked. You may be asked to press one of two computer buttons to make responses. Your answers will be recorded by the computer. Your answers will not be identified with you in any way. You may also be given brief questionnaires regarding social attitudes, also assessing individual thoughts and feelings. The session will last a maximum of 60 minutes total.<br /><br />
<strong>Potential Risks and Discomforts</strong><br />
There is no known risk/discomfort in this study. If you experience any discomfort during the course of the experiment, you can stop at any time and notify the experimenter.<br /><br />
<strong>Potential Benefits to Subject</strong><br />
There are no direct benefits from participating in this study.<br /><br />
<strong>Potential Benefits to Society</strong><br />
The findings of the study will help the design of artificial visual systems for detecting and recognizing human action in a natural environment.<br /><br />
<strong>Payment for Participation</strong><br />
You will receive up to 1 hr of credit, through the UCLA Psychology Department Subject Pool.<br /><br />
<strong>Alternatives to Participation</strong><br />
An alternative to participating in this project and fulfilling Psychology 10 requirements is to participate in other research or to do an equivalent classroom project.<br /><br />
<strong>Confidentiality</strong><br />
Any information that is obtained in connection with this study and that can identify you will remain confidential. It will be disclosed only with your permission or as required by law. The results of the study, including laboratory or any other data, may be published for scientific purposes but will not give your name or include any identifiable references to you. Confidentiality will be maintained by storing all subject information securely in a locked drawer or a password-protected computer. Participants’ data will correspond to individual subject IDs, so that they cannot be linked to personal information (e.g., names). Only the experimenters will have access to the data. No electronic identifiers will be associated with your data (e.g., IP address). There is always the possibility of tampering from an outside source when using the internet for collecting information. While the confidentiality of your responses will be protected once the data are downloaded from the internet, there is always a possibility of hacking or other security breaches that could threaten the confidentiality of your responses. Please know that you are free to decide not to answer any question.<br /><br />
<strong>Participation and Withdrawal</strong><br />
Your participation is VOLUNTARY. You may withdraw from the experiment at any time without suffering any negative consequences.<br /><br />
<strong>Identification of Investigators</strong><br />
If you have any questions, feel free to contact Dr. Hongjing Lu ([email protected]), at the Department of Psychology, UCLA, 405 Hilgard Ave., Los Angeles, CA 90095-1563.<br /><br />
<strong>Rights of Research Subjects</strong><br />
You may withdraw your consent at any time and discontinue participation without penalty. You are not waiving any legal claims, rights or remedies because of your participation in this research study. If you wish to ask questions about your rights as a research participant or if you wish to voice any problems or concerns you may have about the study to someone other than the researchers, please call the Office of the Human Research Protection Program at (310) 206-2040; or by email: [email protected] or by mail: Box 951406, Los Angeles, CA 90095-1406.<br /><br />
</div>
</div>

<!--======== TASK ========-->
<div class='page-box' id='task-box'>
<audio id='metronome'>
<source src='media/metronome_80bpm_181beats_centered.mp3'>
</audio>
<img id='sinewave' src='media/line_customized.png' alt='Image Error: Please contact the experimenter at [email protected] with the phrase "IMG-ERROR".'></img>
<canvas id='canvas'>CanvasError: Please contact [email protected] and include your sona ID and the phrase "CANVAS-ERROR" to receive your credit.</canvas>
<img id='sinewave' src='media/line_customized.png' alt='Image Error: Please contact the experimenter at XXX with the phrase "IMG-ERROR".'></img>
<canvas id='canvas'>CanvasError: Please contact XXX and include your sona ID and the phrase "CANVAS-ERROR" to receive your credit.</canvas>
<div id='end-object'></div>
<div id='canvas-frame'></div>
<div id='start-mark'></div>
@@ -193,15 +161,7 @@
In the games you played, you sometimes saw your own movements just as you normally would (e.g., the cursor moved exactly according to how you moved your hand), but sometimes your visual experience deviated from what you commanded your hand to do and from what you felt from your hand's movements. How much you pick up on this mismatch, and how much your perceptual system tolerates it, determine how strongly you experience the visual feedback as your own.<br /><br />
What we were testing in these games was how the yellow ball's motion influenced your tolerance for this mismatch. Sometimes the ball appeared to be causally "launched" by the cursor in a physical manner. Critically, sometimes it appeared to suddenly start moving on its own when the cursor got near, as if it were avoiding the cursor in a social interaction. Since we often think of ourselves and our actions in social terms rather than physical terms, we are testing whether this tendency translates into different levels of tolerance for the mismatch between what you see and what you feel from your hand movements.<br /><br />
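To make the manipulation concrete: a mismatch like the one described above is typically created by transforming the hand's movement before it is drawn. The sketch below shows one way this could be done on a canvas page like this one; the experiment's actual code is not shown in this diff, so the angle, variable names, and the drawCursor call are assumptions for illustration only.

<script>
  // Sketch (assumed, not from this commit): rotate each mouse movement by a
  // fixed angle before applying it to the on-screen cursor, producing a
  // mismatch between the commanded/felt movement and the visual feedback.
  const offsetDegrees = 15;        // assumed mismatch magnitude
  let cursorX = 0, cursorY = 0;    // assumed cursor position state

  function rotate(dx, dy, degrees) {
    const r = degrees * Math.PI / 180;
    return [dx * Math.cos(r) - dy * Math.sin(r),
            dx * Math.sin(r) + dy * Math.cos(r)];
  }

  document.addEventListener('mousemove', (e) => {
    const [dx, dy] = rotate(e.movementX, e.movementY, offsetDegrees);
    cursorX += dx;
    cursorY += dy;
    // drawCursor(cursorX, cursorY);  // hypothetical renderer on the canvas
  });
</script>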

Please don't hesitate to contact the researcher if you have any questions:<br /><br />
Akila Kadambi<br />
UCLA Computational Vision and Learning Laboratory<br />
<a href='https://ycc.vision/' target='_blank' rel='noopener noreferrer'>https://ycc.vision/</a><br />
[email protected]<br /><br /><br />

<strong>Here's some additional information about the core project this study is associated with:</strong><br /><br />
We are interested in understanding how people perceive human actions in a visual scene, and how this information is processed to enable them to make decisions about human body movement. By showing moving patterns at various speeds and in various combinations, we can determine whether different cues make it harder or easier for you to make judgments about the human movement in the target display.<br />
If you have any questions, please feel free to contact the experimenter, Hongjing Lu at [email protected].
Please don't hesitate to contact the researcher if you have any questions.
</div>
</body>
