Merge branch 'main' into documentation
bclenet committed Oct 5, 2023
2 parents b6f21f4 + 8f12d3d commit 7c3b6df
Showing 37 changed files with 1,593 additions and 930 deletions.
13 changes: 8 additions & 5 deletions .github/workflows/code_quality.yml
@@ -8,9 +8,11 @@ on:
push:
paths:
- '**.py'
- '.github/workflows/code_quality.yml'
pull_request:
paths:
- '**.py'
- '.github/workflows/code_quality.yml'

# Jobs that define the workflow
jobs:
@@ -33,22 +35,23 @@ jobs:
- uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-pylint
key: ${{ runner.os }}-pip-${{ hashFiles('setup.py') }}
restore-keys: |
${{ runner.os }}-pip-pylint
${{ runner.os }}-pip-
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install pylint
pip install .[tests]
- name: Analyse the code with pylint
run: |
pylint --exit-zero narps_open > pylint_report_narps_open.txt
pylint --exit-zero tests > pylint_report_tests.txt
pylint --fail-under 8 --ignore-paths narps_open/pipelines/ narps_open > pylint_report_narps_open.txt
pylint --fail-under 8 tests > pylint_report_tests.txt
- name: Archive pylint results
uses: actions/upload-artifact@v3
if: failure() # Only if previous step failed
with:
name: pylint-reports-python
path: |
2 changes: 1 addition & 1 deletion .github/workflows/pipeline_tests.yml
@@ -47,7 +47,7 @@ jobs:
echo "tests=$test_files" >> $GITHUB_OUTPUT
echo "teams=$teams" >> $GITHUB_OUTPUT
# A job to identify and run the tests
# A job to run the tests
pytest:
needs: identify-tests
runs-on: self-hosted
2 changes: 1 addition & 1 deletion .github/workflows/test_changes.yml
@@ -38,7 +38,7 @@ jobs:
echo $test_files
echo "tests=$test_files" >> $GITHUB_OUTPUT
# A job to list the tests to be run
# A job to run the tests
pytest:
needs: identify-tests
runs-on: self-hosted
91 changes: 47 additions & 44 deletions INSTALL.md
@@ -1,82 +1,85 @@
# How to install NARPS Open Pipelines?

## 1 - Get the code
## 1 - Fork the repository

First, [fork](https://docs.github.com/en/get-started/quickstart/fork-a-repo) the repository, so you have your own working copy of it.
[Fork](https://docs.github.com/en/get-started/quickstart/fork-a-repo) the repository, so you have your own working copy of it.

Then, you have two options to [clone](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) the project:
## 2 - Clone the code

### Option 1: Using DataLad (recommended)
First, install [Datalad](https://www.datalad.org/). This will allow you to access the NARPS data easily, as it is included in the repository as [datalad subdatasets](http://handbook.datalad.org/en/latest/basics/101-106-nesting.html).

Cloning the fork using [Datalad](https://www.datalad.org/) will allow you to get the code as well as "links" to the data, because the NARPS data is bundled in this repository as [datalad subdatasets](http://handbook.datalad.org/en/latest/basics/101-106-nesting.html).
Then, [clone](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) the project:

```bash
# Replace YOUR_GITHUB_USERNAME in the following command.
datalad install --recursive https://github.com/YOUR_GITHUB_USERNAME/narps_open_pipelines.git
```

### Option 2: Using Git
> [!WARNING]
> It is still possible to clone the fork using [git](https://git-scm.com/), but by doing this, you will only get the code.
> ```bash
> # Replace YOUR_GITHUB_USERNAME in the following command.
> git clone https://github.com/YOUR_GITHUB_USERNAME/narps_open_pipelines.git
> ```
Cloning the fork using [git](https://git-scm.com/) is still possible, but by doing this, you will only get the code.
## 3 - Get the data
```bash
git clone https://github.com/YOUR_GITHUB_USERNAME/narps_open_pipelines.git
```

## 2 - Get the data
Now that you have cloned the repository using DataLad, you are able to get the data:
Ignore this step if you used DataLad (option 1) in the previous step.

Otherwise, there are several ways to get the data.
```bash
# Move inside the root directory of the repository.
cd narps_open_pipelines
## 3 - Set up the environment
# Select the data you want to download. Here is an example to get data of the first 4 subjects.
datalad get data/original/ds001734/sub-00[1-4] -J 12
datalad get data/original/ds001734/derivatives/fmriprep/sub-00[1-4] -J 12
```
The NARPS Open Pipelines project is built upon several dependencies, such as [Nipype](https://nipype.readthedocs.io/en/latest/), but also the original software packages used by the pipelines (SPM, FSL, AFNI...).
> [!NOTE]
> For further information and alternatives on how to get the data, see the corresponding documentation page [docs/data.md](docs/data.md).
To facilitate this step, we created a Docker container based on [Neurodocker](https://github.com/ReproNim/neurodocker) that contains the necessary Python packages and software. To install the Docker image, two options are available.
## 4 - Set up the environment

### Option 1: Using Dockerhub
[Install Docker](https://docs.docker.com/engine/install/), then pull the Docker image:

```bash
docker pull elodiegermani/open_pipeline:latest
```

The image should install itself. Once it's done you can check the image is available on your system:
Once it's done, you can check that the image is available on your system:

```bash
docker images
REPOSITORY                              TAG       IMAGE ID       CREATED        SIZE
docker.io/elodiegermani/open_pipeline   latest    0f3c74d28406   9 months ago   22.7 GB
```

### Option 2: Using a Dockerfile
> [!NOTE]
> Feel free to read this documentation page [docs/environment.md](docs/environment.md) to get further information about this environment.
## 5 - Run the project

Start a Docker container from the Docker image:

```bash
# Replace PATH_TO_THE_REPOSITORY in the following command (e.g.: with /home/user/dev/narps_open_pipelines/)
docker run -it -v PATH_TO_THE_REPOSITORY:/home/neuro/code/ elodiegermani/open_pipeline
```

The Dockerfile used to create the image stored on DockerHub is available at the root of the repository ([Dockerfile](Dockerfile)). But you might want to personalize this Dockerfile. To do so, change the command below that will generate a new Dockerfile:
Install NARPS Open Pipelines inside the container:

```bash
docker run --rm repronim/neurodocker:0.7.0 generate docker \
--base neurodebian:stretch-non-free --pkg-manager apt \
--install git \
--fsl version=6.0.3 \
--afni version=latest method=binaries install_r=true install_r_pkgs=true install_python2=true install_python3=true \
--spm12 version=r7771 method=binaries \
--user=neuro \
--workdir /home \
--miniconda create_env=neuro \
conda_install="python=3.8 traits jupyter nilearn graphviz nipype scikit-image" \
pip_install="matplotlib" \
activate=True \
--env LD_LIBRARY_PATH="/opt/miniconda-latest/envs/neuro:$LD_LIBRARY_PATH" \
--run-bash "source activate neuro" \
--user=root \
--run 'chmod 777 -Rf /home' \
--run 'chown -R neuro /home' \
--user=neuro \
--run 'mkdir -p ~/.jupyter && echo c.NotebookApp.ip = \"0.0.0.0\" > ~/.jupyter/jupyter_notebook_config.py' > Dockerfile
source activate neuro
cd /home/neuro/code/
pip install .
```

When you are satisfied with your Dockerfile, just build the image:
Finally, you are able to run the pipelines:

```bash
docker build --tag [name_of_the_image] - < Dockerfile
python narps_open/runner.py
usage: runner.py [-h] -t TEAM (-r RSUBJECTS | -s SUBJECTS [SUBJECTS ...] | -n NSUBJECTS) [-g | -f] [-c]
```
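
For instance, a hypothetical invocation could be `python narps_open/runner.py -t 2T6S -n 4` to run the pipeline of team 2T6S on 4 subjects (this assumes `-n` selects the number of subjects, as suggested by the usage string above).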

When the image is built, follow the instructions in [docs/environment.md](docs/environment.md) to start the environment from it.
> [!NOTE]
> For further information, read this documentation page [docs/running.md](docs/running.md).
4 changes: 2 additions & 2 deletions README.md
@@ -72,6 +72,6 @@ This project is developed in the Empenn team by Boris Clenet, Elodie Germani, Je

In addition, this project was presented and received contributions during the following events:
- OHBM Brainhack 2022 (June 2022): Elodie Germani, Arshitha Basavaraj, Trang Cao, Rémi Gau, Anna Menacher, Camille Maumet.
- e-ReproNim FENS NENS Cluster Brainhack: <ADD_NAMES_HERE>
- OHBM Brainhack 2023 (July 2023): <ADD_NAMES_HERE>
- e-ReproNim FENS NENS Cluster Brainhack (June 2023): Liz Bushby, Boris Clénet, Michael Dayan, Aimee Westbrook.
- OHBM Brainhack 2023 (July 2023): Arshitha Basavaraj, Boris Clénet, Rémi Gau, Élodie Germani, Yaroslav Halchenko, Camille Maumet, Paul Taylor.
- ORIGAMI lab hackathon (Sept 2023):
36 changes: 36 additions & 0 deletions docs/data.md
@@ -94,3 +94,39 @@ python narps_open/utils/results -r -t 2T6S C88N L1A8
The collections are also available [here](https://zenodo.org/record/3528329/) as one release on Zenodo that you can download.

Each team results collection is kept in the `data/results/orig` directory, in a folder using the pattern `<neurovault_collection_id>_<team_id>` (e.g.: `4881_2T6S` for the 2T6S team).

## Access NARPS data

Inside `narps_open.data`, several modules make it possible to parse NARPS data, making it easier to use inside the NARPS Open Pipelines project. These are:

### `narps_open.data.description`
Get the textual descriptions of the pipelines, as written by the teams (see [docs/description.md](/docs/description.md)).

### `narps_open.data.results`
Get the result collections, as described earlier in this file.

### `narps_open.data.participants`
Get the participants' data (parses the `data/original/ds001734/participants.tsv` file) as well as subsets of participants, to perform analyses on fewer images.
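
As a rough illustration of what this parsing involves, the participants file can be read directly with pandas; this sketch is not the module's actual API, and only the file path above is taken from the source:

```python
# Illustration only: read the NARPS participants file and keep a subset of subjects.
# The file path comes from the sentence above; pandas and the column name
# (standard in BIDS participants files) are assumptions of this sketch.
import pandas as pd

participants = pd.read_csv('data/original/ds001734/participants.tsv', sep='\t')
print(participants.columns)  # available participant attributes

# Keep the first 4 subjects, e.g. to run an analysis on fewer images
subset = participants['participant_id'].tolist()[:4]
print(subset)
```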

### `narps_open.data.task`
Get information about the task (parses the `data/original/ds001734/task-MGT_bold.json` file). Here is an example of how to use it:

```python
from narps_open.data.task import TaskInformation

task_info = TaskInformation() # task_info is a dict

# All available keys
print(task_info.keys())
# dict_keys(['TaskName', 'Manufacturer', 'ManufacturersModelName', 'MagneticFieldStrength', 'RepetitionTime', 'EchoTime', 'FlipAngle', 'MultibandAccelerationFactor', 'EffectiveEchoSpacing', 'SliceTiming', 'BandwidthPerPixelPhaseEncode', 'PhaseEncodingDirection', 'TaskDescription', 'CogAtlasID', 'NumberOfSlices', 'AcquisitionTime', 'TotalReadoutTime'])

# Original data
print(task_info['TaskName'])
print(task_info['Manufacturer'])
print(task_info['RepetitionTime']) # And so on ...

# Derived data
print(task_info['NumberOfSlices'])
print(task_info['AcquisitionTime'])
print(task_info['TotalReadoutTime'])
```
45 changes: 43 additions & 2 deletions docs/description.md
@@ -8,7 +8,7 @@ The file `narps_open/data/description/analysis_pipelines_derived_descriptions.ts

The class `TeamDescription` of module `narps_open.data.description` acts as a parser for these two files.

You can also use the command-line tool as follows. Option `-t` is for the team ID; option `-d` allows printing only one of the sub-parts of the description among: `general`, `exclusions`, `preprocessing`, `analysis`, and `categorized_for_analysis`.
You can use the command-line tool as follows. Option `-t` is for the team ID; option `-d` allows printing only one of the sub-parts of the description among: `general`, `exclusions`, `preprocessing`, `analysis`, and `categorized_for_analysis`. Options `--json` and `--md` allow choosing the export format you prefer between JSON and Markdown.

```bash
python narps_open/data/description -h
@@ -21,8 +21,25 @@ python narps_open/data/description -h
# -t TEAM, --team TEAM the team ID
# -d {general,exclusions,preprocessing,analysis,categorized_for_analysis,derived}, --dictionary {general,exclusions,preprocessing,analysis,categorized_for_analysis,derived}
# the sub dictionary of team description
# --json output team description as JSON
# --md output team description as Markdown

python narps_open/data/description -t 2T6S -d general
python narps_open/data/description -t 2T6S --json
# {
# "general.teamID": "2T6S",
# "general.NV_collection_link": "https://neurovault.org/collections/4881/",
# "general.results_comments": "NA",
# "general.preregistered": "No",
# "general.link_preregistration_form": "We did not pre-register our analysis.",
# "general.regions_definition": "We employed the pre-hypothesized brain regions (vmPFC, vSTR, and amygdala) from Barta, McGuire, and Kable (2010, Neuroimage). Specific MNI coordinates are:\nvmPFC: x = 2, y = 46, z = -8\nleft vSTR: x = -12, y = 12, z = -6, right vSTR = x = 12, y = 10, z = -6\n(right) Amygdala: x = 24, y = -4, z = -18",
# "general.softwares": "SPM12 , \nfmriprep 1.1.4",
# "exclusions.n_participants": "108",
# "exclusions.exclusions_details": "We did not exclude any participant in the analysis",
# "preprocessing.used_fmriprep_data": "Yes",
# "preprocessing.preprocessing_order": "We used the provided preprocessed data by fMRIPprep 1.1.4 (Esteban, Markiewicz, et al. (2018); Esteban, Blair, et al. (2018); RRID:SCR_016216), which is based on Nipype 1.1.1 (Gorgolewski et al. (2011); Gorgolewski et al. (2018); RRID:SCR_002502) and we additionally conducted a spatial smoothing using the provided preprocessed data set and SPM12. Here, we attach the preprocessing steps described in the provided data set. \nAnatomical data preprocessing\nThe T1-weighted (T1w) image was corrected for intensity non-uniformity (INU) using N4BiasFieldCorrection (Tustison et al. 2010, ANTs 2.2.0), and used as T1w-reference throughout the workflow. The T1w-reference was then skull-stripped using antsBrainExtraction.sh (ANTs 2.2.0), using OASIS as target template. Brain surfaces we
# ...

python narps_open/data/description -t 2T6S -d general --json
# {
# "teamID": "2T6S",
# "NV_collection_link": "https://neurovault.org/collections/4881/",
@@ -33,6 +50,30 @@ python narps_open/data/description -t 2T6S -d general
# "softwares": "SPM12 , \nfmriprep 1.1.4",
# "general_comments": "NA"
# }

python narps_open/data/description -t 2T6S --md
# # NARPS team description : 2T6S
# ## General
# * `teamID` : 2T6S
# * `NV_collection_link` : https://neurovault.org/collections/4881/
# * `results_comments` : NA
# * `preregistered` : No
# * `link_preregistration_form` : We did not pre-register our analysis.
# * `regions_definition` : We employed the pre-hypothesized brain regions (vmPFC, vSTR, and amygdala) from Barta, McGuire, and Kable (2010, Neuroimage). Specific MNI coordinates are:
# vmPFC: x = 2, y = 46, z = -8
# left vSTR: x = -12, y = 12, z = -6, right vSTR = x = 12, y = 10, z = -6
# (right) Amygdala: x = 24, y = -4, z = -18
# * `softwares` : SPM12 ,
# fmriprep 1.1.4
# * `general_comments` : NA
# ## Exclusions
# * `n_participants` : 108
# * `exclusions_details` : We did not exclude any participant in the analysis
# ## Preprocessing
# * `used_fmriprep_data` : Yes
# * `preprocessing_order` : We used the provided preprocessed data by fMRIPprep 1.1.4 (Esteban, Markiewicz, et al. (2018); Esteban, Blair, et al. (2018); RRID:SCR_016216), which is based on Nipype 1.1.1 (Gorgolewski et al. (2011); Gorgolewski et al. (2018); RRID:SCR_002502) and we additionally conducted a spatial smoothing using the provided preprocessed data set and SPM12. Here, we attach the preprocessing steps described in the provided data set.
# Anatomical data preprocessing
# ...
```

Of course, the `narps_open.data.description` module is also accessible programmatically.
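
A minimal sketch of such programmatic use, assuming `TeamDescription` is instantiated with a team ID and exposes each sub-part listed above (`general`, `exclusions`, `preprocessing`, `analysis`, `categorized_for_analysis`) as a dictionary-like attribute:

```python
from narps_open.data.description import TeamDescription

# Assumption: the class takes a team ID and exposes the sub-parts of the
# description as dictionary-like attributes; the keys below appear in the
# JSON output shown earlier in this page.
description = TeamDescription('2T6S')
print(description.general['softwares'])
print(description.preprocessing['used_fmriprep_data'])
```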