BIDS app for decoding gaze position from the eyeball MR-signal using deepMReye (1).
To be used on preprocessed BIDS derivatives (e.g. fMRIPrep outputs). No eye-tracking data required.
By default, bidsMReye uses a pre-trained version of deepMReye trained on five datasets, including guided fixations (2), smooth pursuit (3,4,5), and free viewing (6). Other pre-trained versions are available as options, but training a dedicated model on your own data is recommended.
The pipeline automatically extracts the eyeball voxels. This can be used also for other multivariate pattern analyses in the absence of eye-tracking data. Decoded gaze positions allow computing eye movements.
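As a sketch of how decoded gaze positions can be turned into eye-movement estimates, the following hypothetical example computes the gaze displacement between consecutive volumes. The array values are synthetic and purely illustrative, not actual bidsMReye output:

```python
import numpy as np

# Hypothetical decoded gaze positions (x, y) per fMRI volume,
# as a deepMReye-style decoder might produce; values are synthetic.
gaze = np.array([
    [0.0, 0.0],
    [0.1, 0.0],
    [0.1, 0.1],
    [2.0, 2.0],  # large jump: candidate saccade
    [2.0, 2.1],
])

# Eye movement between consecutive volumes = Euclidean displacement.
displacement = np.linalg.norm(np.diff(gaze, axis=0), axis=1)
print(displacement)
```

Large displacements between consecutive volumes are one simple way to spot candidate saccades in the decoded timeseries.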
Some basic quality control and outlier detection are also performed:
- for each run
- at the group level
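As an illustration of what per-run outlier detection can look like, here is a generic robust z-score flagging sketch. This is a common approach, not necessarily the exact method bidsMReye implements:

```python
import numpy as np

def flag_outliers(values, threshold=3.5):
    """Flag samples whose robust z-score (based on the median absolute
    deviation) exceeds `threshold`. Generic illustration only; not
    necessarily the exact criterion bidsMReye uses."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    if mad == 0:
        # All deviations identical: nothing stands out.
        return np.zeros_like(values, dtype=bool)
    robust_z = 0.6745 * (values - med) / mad
    return np.abs(robust_z) > threshold

print(flag_outliers([0.1, 0.2, 0.1, 0.15, 5.0]))
```

The MAD-based z-score is preferred over the classical one here because a single extreme sample does not inflate the spread estimate.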
For more information, see the User Recommendations. If you have other questions, please reach out to the developer team.
It is recommended to use the Docker image, as there are known installation issues with deepMReye on some platforms (for example, Apple M1).
Build the image from the repository's Dockerfile:
docker build --tag cpplab/bidsmreye:latest --file docker/Dockerfile .
Or pull the latest Docker image:
docker pull cpplab/bidsmreye:latest
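Once the image is pulled, a containerized run might look like the following, assuming the image's entrypoint is the bidsmreye CLI. The host paths and mount points are placeholders to adapt to your setup:

```shell
# Placeholder paths: replace with your own dataset and output locations.
BIDS_DIR=/path/to/bids_derivatives
OUTPUT_DIR=/path/to/output

# Bind-mount the data into the container and run the full pipeline.
docker run --rm \
  --volume "$BIDS_DIR":/data:ro \
  --volume "$OUTPUT_DIR":/out \
  cpplab/bidsmreye:latest \
  --action all /data /out participant
```

The input dataset is mounted read-only; results are written to the output mount.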
You can also install the package from PyPI:
pip install bidsmreye
NOT TESTED YET
To encapsulate bidsMReye in a virtual environment, install it with the following commands:
conda create --name bidsmreye python=3.10
conda activate bidsmreye
conda install pip
pip install bidsmreye
The tensorflow dependency supports both CPU and GPU instructions.
Note that for GPU support you might need to install cuDNN first:
conda install -c conda-forge cudnn
If the installation of ANTsPy fails, try installing it manually:
git clone https://github.com/ANTsX/ANTsPy
cd ANTsPy
pip install CMake
python3 setup.py install
Clone this repository.
git clone git://github.com/cpp-lln-lab/bidsmreye
Then install the package:
cd bidsmreye
make install_dev
bidsmreye requires your input fMRI data:
- to be minimally preprocessed (at least realigned),
- to have filenames and a structure that conform to a BIDS derivatives dataset.
Two BIDS apps are available to generate this type of preprocessed data:
Obviously, your fMRI data must include the eyes of your participant for bidsmreye to work.
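To give a rough idea of the expected naming, the following sketch matches a simplified version of the BIDS derivatives pattern for preprocessed BOLD files. The regex is a deliberately reduced illustration, not the full BIDS specification:

```python
import re

# Simplified sketch of a BIDS-derivatives BOLD filename:
# sub-<label>[_ses-<label>]_task-<label>[_<entity>-<label>...]_bold.nii[.gz]
# Not the full BIDS grammar; for illustration only.
PATTERN = re.compile(
    r"sub-[A-Za-z0-9]+"
    r"(_ses-[A-Za-z0-9]+)?"
    r"_task-[A-Za-z0-9]+"
    r"(_[a-z]+-[A-Za-z0-9]+)*"
    r"_bold\.nii(\.gz)?$"
)

print(bool(PATTERN.match(
    "sub-01_task-rest_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz"
)))
```

A tool such as fMRIPrep produces filenames of this shape out of the box.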
Type the following for more information:
bidsmreye --help
--action prepare
means that bidsmreye will extract the data coming from the
eyes from the fMRI images.
If your data is not in MNI space, bidsmreye will also register the data to MNI.
bidsmreye --action prepare \
bids_dir \
output_dir \
participant
--action generalize
uses the extracted timeseries to predict gaze positions
using the default pre-trained model of deepMReye.
This will also generate a quality control report of the decoded eye movements.
bidsmreye --action generalize \
bids_dir \
output_dir \
participant
--action all
does "prepare" then "generalize".
bidsmreye --action all \
bids_dir \
output_dir \
participant
--action qc
runs group-level quality control on the decoded eye movements.
bidsmreye --action qc \
bids_dir \
output_dir \
group
Please refer to the documentation for more details.
Thanks goes to these wonderful people (emoji key):
Pauline Cabee 💻 🤔 🚇 |
Remi Gau 💻 🤔 |
This project follows the all-contributors specification. Contributions of any kind welcome!
If you train deepMReye, or if you have eye-tracking training labels and the extracted eyeball voxels, consider sharing them to contribute to the pool of pre-trained models.