
Commit

Merge branch 'Inria-Empenn:main' into main
bclenet authored Sep 29, 2023
2 parents 6ebe5d2 + f9fecea commit 0a584dd
Showing 18 changed files with 448 additions and 107 deletions.
45 changes: 43 additions & 2 deletions docs/description.md
Original file line number Diff line number Diff line change
@@ -8,7 +8,7 @@ The file `narps_open/data/description/analysis_pipelines_derived_descriptions.ts

The class `TeamDescription` of module `narps_open.data.description` acts as a parser for these two files.

You can also use the command-line tool as so. Option `-t` is for the team id, option `-d` allows to print only one of the sub parts of the description among : `general`, `exclusions`, `preprocessing`, `analysis`, and `categorized_for_analysis`.
You can use the command-line tool as follows. Option `-t` specifies the team ID; option `-d` prints only one of the sub-parts of the description, among: `general`, `exclusions`, `preprocessing`, `analysis`, and `categorized_for_analysis`. Options `--json` and `--md` select the export format, either JSON or Markdown.

```bash
python narps_open/data/description -h
@@ -21,8 +21,25 @@ python narps_open/data/description -h
# -t TEAM, --team TEAM the team ID
# -d {general,exclusions,preprocessing,analysis,categorized_for_analysis,derived}, --dictionary {general,exclusions,preprocessing,analysis,categorized_for_analysis,derived}
# the sub dictionary of team description
# --json output team description as JSON
# --md output team description as Markdown

python narps_open/data/description -t 2T6S -d general
python narps_open/data/description -t 2T6S --json
# {
# "general.teamID": "2T6S",
# "general.NV_collection_link": "https://neurovault.org/collections/4881/",
# "general.results_comments": "NA",
# "general.preregistered": "No",
# "general.link_preregistration_form": "We did not pre-register our analysis.",
# "general.regions_definition": "We employed the pre-hypothesized brain regions (vmPFC, vSTR, and amygdala) from Barta, McGuire, and Kable (2010, Neuroimage). Specific MNI coordinates are:\nvmPFC: x = 2, y = 46, z = -8\nleft vSTR: x = -12, y = 12, z = -6, right vSTR = x = 12, y = 10, z = -6\n(right) Amygdala: x = 24, y = -4, z = -18",
# "general.softwares": "SPM12 , \nfmriprep 1.1.4",
# "exclusions.n_participants": "108",
# "exclusions.exclusions_details": "We did not exclude any participant in the analysis",
# "preprocessing.used_fmriprep_data": "Yes",
# "preprocessing.preprocessing_order": "We used the provided preprocessed data by fMRIPprep 1.1.4 (Esteban, Markiewicz, et al. (2018); Esteban, Blair, et al. (2018); RRID:SCR_016216), which is based on Nipype 1.1.1 (Gorgolewski et al. (2011); Gorgolewski et al. (2018); RRID:SCR_002502) and we additionally conducted a spatial smoothing using the provided preprocessed data set and SPM12. Here, we attach the preprocessing steps described in the provided data set. \nAnatomical data preprocessing\nThe T1-weighted (T1w) image was corrected for intensity non-uniformity (INU) using N4BiasFieldCorrection (Tustison et al. 2010, ANTs 2.2.0), and used as T1w-reference throughout the workflow. The T1w-reference was then skull-stripped using antsBrainExtraction.sh (ANTs 2.2.0), using OASIS as target template. Brain surfaces we
# ...

python narps_open/data/description -t 2T6S -d general --json
# {
# "teamID": "2T6S",
# "NV_collection_link": "https://neurovault.org/collections/4881/",
@@ -33,6 +50,30 @@ python narps_open/data/description -t 2T6S -d general
# "softwares": "SPM12 , \nfmriprep 1.1.4",
# "general_comments": "NA"
# }

python narps_open/data/description -t 2T6S --md
# # NARPS team description : 2T6S
# ## General
# * `teamID` : 2T6S
# * `NV_collection_link` : https://neurovault.org/collections/4881/
# * `results_comments` : NA
# * `preregistered` : No
# * `link_preregistration_form` : We did not pre-register our analysis.
# * `regions_definition` : We employed the pre-hypothesized brain regions (vmPFC, vSTR, and amygdala) from Barta, McGuire, and Kable (2010, Neuroimage). Specific MNI coordinates are:
# vmPFC: x = 2, y = 46, z = -8
# left vSTR: x = -12, y = 12, z = -6, right vSTR = x = 12, y = 10, z = -6
# (right) Amygdala: x = 24, y = -4, z = -18
# * `softwares` : SPM12 ,
# fmriprep 1.1.4
# * `general_comments` : NA
# ## Exclusions
# * `n_participants` : 108
# * `exclusions_details` : We did not exclude any participant in the analysis
# ## Preprocessing
# * `used_fmriprep_data` : Yes
# * `preprocessing_order` : We used the provided preprocessed data by fMRIPprep 1.1.4 (Esteban, Markiewicz, et al. (2018); Esteban, Blair, et al. (2018); RRID:SCR_016216), which is based on Nipype 1.1.1 (Gorgolewski et al. (2011); Gorgolewski et al. (2018); RRID:SCR_002502) and we additionally conducted a spatial smoothing using the provided preprocessed data set and SPM12. Here, we attach the preprocessing steps described in the provided data set.
# Anatomical data preprocessing
# ...
```

Of course, the `narps_open.data.description` module is also accessible programmatically; here is an example of how to use it:
33 changes: 33 additions & 0 deletions narps_open/data/description/__init__.py
@@ -5,6 +5,7 @@

from os.path import join
from csv import DictReader
from json import dumps
from importlib_resources import files

class TeamDescription(dict):
@@ -25,6 +26,9 @@ def __init__(self, team_id):
self.team_id = team_id
self._load()

def __str__(self):
return dumps(self, indent = 4)

@property
def general(self) -> dict:
""" Getter for the sub dictionary general """
@@ -55,6 +59,35 @@ def derived(self) -> dict:
""" Getter for the sub dictionary containing derived team description """
return self._get_sub_dict('derived')

def markdown(self):
""" Return the team description as a string formatted in markdown """
return_string = f'# NARPS team description : {self.team_id}\n'

dictionaries = [
self.general,
self.exclusions,
self.preprocessing,
self.analysis,
self.categorized_for_analysis,
self.derived
]

names = [
'General',
'Exclusions',
'Preprocessing',
'Analysis',
'Categorized for analysis',
'Derived'
]

for dictionary, name in zip(dictionaries, names):
return_string += f'## {name}\n'
for key in dictionary:
return_string += f'* `{key}` : {dictionary[key]}\n'

return return_string

def _get_sub_dict(self, key_first_part:str) -> dict:
""" Return a sub-dictionary of self, with keys that contain key_first_part.
The first part of the keys are removed, e.g.:
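The `__str__` and `markdown()` additions above lean on two properties of `dict` subclasses: they serialize directly with `json.dumps`, and dot-prefixed keys can be split into named sub-dictionaries. A minimal self-contained sketch of that pattern (a simplified stand-in, not the actual `TeamDescription` class):

```python
from json import dumps

class Description(dict):
    """ Simplified stand-in for a dict-based team description. """

    def __str__(self):
        # A dict subclass is directly serializable by json.dumps
        return dumps(self, indent = 4)

    def markdown(self):
        """ Render the description as Markdown, one section per key prefix """
        return_string = '# Team description\n'
        for name in ('general', 'exclusions'):
            return_string += f'## {name.capitalize()}\n'
            for key, value in self._get_sub_dict(name).items():
                return_string += f'* `{key}` : {value}\n'
        return return_string

    def _get_sub_dict(self, prefix):
        """ Return entries whose keys start with `prefix.`, with the prefix removed """
        return {
            key.split('.', 1)[1]: value
            for key, value in self.items()
            if key.startswith(prefix + '.')
        }

description = Description({
    'general.teamID': '2T6S',
    'exclusions.n_participants': '108',
})
print(description.markdown())
# # Team description
# ## General
# * `teamID` : 2T6S
# ## Exclusions
# * `n_participants` : 108
```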
36 changes: 23 additions & 13 deletions narps_open/data/description/__main__.py
@@ -22,22 +22,32 @@
'derived'
],
help='the sub dictionary of team description')
formats = parser.add_mutually_exclusive_group(required = False)
formats.add_argument('--json', action='store_true', help='output team description as JSON')
formats.add_argument('--md', action='store_true', help='output team description as Markdown')
arguments = parser.parse_args()

# Initialize a TeamDescription
information = TeamDescription(team_id = arguments.team)

if arguments.dictionary == 'general':
print(dumps(information.general, indent = 4))
elif arguments.dictionary == 'exclusions':
print(dumps(information.exclusions, indent = 4))
elif arguments.dictionary == 'preprocessing':
print(dumps(information.preprocessing, indent = 4))
elif arguments.dictionary == 'analysis':
print(dumps(information.analysis, indent = 4))
elif arguments.dictionary == 'categorized_for_analysis':
print(dumps(information.categorized_for_analysis, indent = 4))
elif arguments.dictionary == 'derived':
print(dumps(information.derived, indent = 4))
# Output description
if arguments.md and arguments.dictionary is not None:
print('Sub dictionaries cannot be exported as Markdown yet.')
print('Printing the whole description instead.')
elif arguments.md:
print(information.markdown())
else:
print(dumps(information, indent = 4))
if arguments.dictionary == 'general':
print(dumps(information.general, indent = 4))
elif arguments.dictionary == 'exclusions':
print(dumps(information.exclusions, indent = 4))
elif arguments.dictionary == 'preprocessing':
print(dumps(information.preprocessing, indent = 4))
elif arguments.dictionary == 'analysis':
print(dumps(information.analysis, indent = 4))
elif arguments.dictionary == 'categorized_for_analysis':
print(dumps(information.categorized_for_analysis, indent = 4))
elif arguments.dictionary == 'derived':
print(dumps(information.derived, indent = 4))
else:
print(dumps(information, indent = 4))
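The mutually exclusive group added above guarantees that `--json` and `--md` cannot be passed together. The pattern in isolation (an illustrative parser, not the project's actual one):

```python
from argparse import ArgumentParser

parser = ArgumentParser(description = 'choose at most one output format')
formats = parser.add_mutually_exclusive_group(required = False)
formats.add_argument('--json', action = 'store_true', help = 'output as JSON')
formats.add_argument('--md', action = 'store_true', help = 'output as Markdown')

# Passing a single flag works; passing both would make argparse exit with
# an error such as "argument --md: not allowed with argument --json"
arguments = parser.parse_args(['--md'])
print(arguments.md, arguments.json)
# True False
```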
14 changes: 13 additions & 1 deletion narps_open/utils/status.py
@@ -17,8 +17,16 @@

def get_opened_issues():
""" Return a list of opened issues and pull requests for the NARPS Open Pipelines project """

# First get the number of issues of the project
request_url = 'https://api.github.com/repos/Inria-Empenn/narps_open_pipelines'
response = get(request_url, timeout = 2)
response.raise_for_status()
nb_issues = response.json()['open_issues']

# Get all opened issues
request_url = 'https://api.github.com/repos/Inria-Empenn/narps_open_pipelines/issues'
request_url += '?page={page_number}?per_page=100'
request_url += '?page={page_number}&per_page=30'

issues = []
page = True # Will later be replaced by a table
@@ -31,6 +39,10 @@ def get_opened_issues():
issues += page
page_number += 1

# Leave if there is only one page (in this case, the `page` query parameter has no effect)
if nb_issues < 30:
break

return issues

def get_teams_with_pipeline_files():
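The pagination loop above keeps requesting pages until an empty one comes back, and exits early when the issue count fits within a single page. Its control flow can be reproduced with a fake fetcher standing in for the GitHub API call (hypothetical data, no network access):

```python
PER_PAGE = 30
FAKE_ISSUES = [{'number': n} for n in range(1, 75)]  # 74 issues -> 3 pages

def fetch_page(page_number):
    """ Stand-in for the HTTP request; returns one page of issues. """
    start = (page_number - 1) * PER_PAGE
    return FAKE_ISSUES[start:start + PER_PAGE]

def get_opened_issues(nb_issues):
    issues = []
    page = True  # Truthy placeholder so the loop body runs at least once
    page_number = 1
    while bool(page):
        page = fetch_page(page_number)
        issues += page
        page_number += 1
        # Leave early if everything fits on a single page
        if nb_issues < PER_PAGE:
            break
    return issues

print(len(get_opened_issues(len(FAKE_ISSUES))))
# 74
```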
36 changes: 36 additions & 0 deletions tests/data/test_description.py
@@ -11,8 +11,11 @@
pytest -q test_description.py -k <selected_test>
"""

from os.path import join

from pytest import raises, mark

from narps_open.utils.configuration import Configuration
from narps_open.data.description import TeamDescription

class TestUtilsDescription:
@@ -86,3 +89,36 @@ def test_arguments_properties():
assert description['general.softwares'] == 'FSL 5.0.11, MRIQC, FMRIPREP'
assert isinstance(description.general, dict)
assert description.general['softwares'] == 'FSL 5.0.11, MRIQC, FMRIPREP'

@staticmethod
@mark.unit_test
def test_markdown():
""" Test writing a TeamDescription as Markdown """

# Generate markdown from description
description = TeamDescription('9Q6R')
markdown = description.markdown()

# Compare markdown with test file
test_file_path = join(
Configuration()['directories']['test_data'],
'data', 'description', 'test_markdown.md'
)
with open(test_file_path, 'r', encoding = 'utf-8') as file:
assert markdown == file.read()

@staticmethod
@mark.unit_test
def test_str():
""" Test writing a TeamDescription as JSON """

# Generate report
description = TeamDescription('9Q6R')

# Compare string version of the description with test file
test_file_path = join(
Configuration()['directories']['test_data'],
'data', 'description', 'test_str.json'
)
with open(test_file_path, 'r', encoding = 'utf-8') as file:
assert str(description) == file.read()
3 changes: 2 additions & 1 deletion tests/data/test_results.py
@@ -103,7 +103,8 @@ def test_rectify():
""" Test the rectify method """

# Get raw data
orig_directory = join(Configuration()['directories']['test_data'], 'results', 'team_2T6S')
orig_directory = join(
Configuration()['directories']['test_data'], 'data', 'results', 'team_2T6S')

# Create test data
test_directory = join(Configuration()['directories']['test_runs'], 'results_team_2T6S')
98 changes: 98 additions & 0 deletions tests/test_data/data/description/test_markdown.md
@@ -0,0 +1,98 @@
# NARPS team description : 9Q6R
## General
* `teamID` : 9Q6R
* `NV_collection_link` : https://neurovault.org/collections/4765/
* `results_comments` : Note: Amygdala wasn't recruited for hypothesis tests 7-9, but the extended salience network was recruited in all contrasts (e.g. aINS, ACC). Based on looking at the unthresholded maps, hypotheses 8 and 9 would've been confirmed at lower cluster thresholds (i.e. z≥2.3 rather than z≥3.1).
* `preregistered` : No
* `link_preregistration_form` : NA
* `regions_definition` : Harvard-Oxford probabilistic cortical and subcortical atlases (Frontal Median Cortex, L+R Amyg, and L+R Accum for vmPFC, amyg, and VS, respectively). Also used Neurosynth to generate a mask based on the search term "ventral striatum" (height threshold at z>12, and cluster-extent at > 400mm^3)
* `softwares` : FSL 5.0.11, MRIQC, FMRIPREP
* `general_comments` : NA
## Exclusions
* `n_participants` : 104
* `exclusions_details` : N=104 (54 eq_indiff, 50 eq_range). Excluded sub-018, sub-030, sub-088, and sub-100. High motion during function runs: All four participants had at least one run where > 50% of the TRs contained FD > 0.2mm. 18, 30, and 100 in particular were constant movers (all 4 runs > 50% TRS > 0.2 mm FD)
## Preprocessing
* `used_fmriprep_data` : No
* `preprocessing_order` : - MRIQC and FMRIPREP run on a local HPC
- FSL used for mass univariate analyses, avoiding re-registration using this approach: https://www.youtube.com/watch?time_continue=7&v=U3tG7JMEf7M
* `brain_extraction` : Freesurfer (i.e. part of fmriprep default pipeline)
* `segmentation` : Freesurfer
* `slice_time_correction` : Not performed
* `motion_correction` : Framewise displacement, and six standard motion regressors (x, y, z, rotx, rotx, and rotz) within subjects.; generated via MRIQC
* `motion` : 6
* `gradient_distortion_correction` : NA
* `intra_subject_coreg` : bbregister, flirt, default FMRIPREP
* `distortion_correction` : Fieldmap-less distortion correction within fmriprep pipeline (--use-syn-sdc)
* `inter_subject_reg` : ANTs, multiscale nonlinear mutual-information default within FMRIPREP pipeline.
* `intensity_correction` : Default fMRIPREP INU correction
* `intensity_normalization` : Default fMRIPREP INU normalization
* `noise_removal` : None
* `volume_censoring` : None
* `spatial_smoothing` : 5mm FWHM
* `preprocessing_comments` : NA
## Analysis
* `data_submitted_to_model` : 453 total volumes, 104 participants (54 eq_indiff, 50 eq_range)
* `spatial_region_modeled` : Whole-Brain
* `independent_vars_first_level` : Event-related design predictors:
- Modeled duration = 4
- EVs (3): Mean-centered Gain, Mean-Centered Loss, Events (constant)
Block design:
- baseline not explicitly modeled
HRF:
- FMRIB's Linear Optimal Basis Sets
Movement regressors:
- FD, six parameters (x, y, z, RotX, RotY, RotZ)
* `RT_modeling` : none
* `movement_modeling` : 1
* `independent_vars_higher_level` : EVs (2): eq_indiff, eq_range
Contrasts in the group-level design matrix:
1 --> mean (1, 1)
2 --> eq_indiff (1, 0)
3 --> eq_range (0, 1)
4 --> indiff_gr_range (1, -1)
5 --> range_gr_indiff (-1, 1)
* `model_type` : Mass Univariate
* `model_settings` : First model: individual runs;
Second model: higher-level analysis on lower-level FEAT directories in a fixed effects model at the participant-level;
Third model: higher-level analysis on 3D COPE images from *.feat directories within second model *.gfeat; FLAME 1 (FMRIB's Local Analysis of Mixed Effects), with a cluster threshold of z≥3.1
* `inference_contrast_effect` : First-Level A (Run-level; not listed: linear basis functions, FSL FLOBs):
Model EVs (3): gain, loss, event
- COPE1: Pos Gain (1, 0, 0)
- COPE4: Neg Gain (-1, 0, 0)
- COPE7: Pos Loss (0, 1, 0)
- COPE10: Neg Loss (0, -1, 0)
- COPE13: Events (0, 0, 1)
Confound EVs (7): Framewise Displacement, x, y, z, RotX, RotY, RotZ. Generated in MRIQC.

First-Level B (Participant-level):
- All COPEs from the runs modeled in a high-level FEAT fixed effect model

Second-Level (Group-level):
- Separate high-level FLAME 1 models run on COPE1, COPE4, COPE7, and COPE10. Hypotheses 1-4 answered using the COPE1 model, Hypotheses 5-6 answered using the COPE10 model, and Hypotheses 7-9 answered using the COPE7 model.
Model EVs (2): eq_indiff, eq_range
- mean (1, 1)
- eq_indiff (1, 0)
- eq_range (0, 1)
- indiff_gr_range (1, -1)
- range_gr_indiff (-1, 1)
* `search_region` : Whole brain
* `statistic_type` : Cluster size
* `pval_computation` : Standard parametric inference
* `multiple_testing_correction` : GRF_theory based FEW correction at z≥3.1 in FSL
* `comments_analysis` : NA
## Categorized for analysis
* `region_definition_vmpfc` : atlas HOA
* `region_definition_striatum` : atlas HOA, neurosynth
* `region_definition_amygdala` : atlas HOA
* `analysis_SW` : FSL
* `analysis_SW_with_version` : FSL 5.0.11
* `smoothing_coef` : 5
* `testing` : parametric
* `testing_thresh` : p<0.001
* `correction_method` : GRTFWE cluster
* `correction_thresh_` : p<0.05
## Derived
* `n_participants` : 104
* `excluded_participants` : 018, 030, 088, 100
* `func_fwhm` : 5
* `con_fwhm` :
