Merge branch 'main' into T54A-refactoring
bclenet committed Sep 29, 2023
2 parents 4389d7d + e284b80 commit 96d24e9
Showing 29 changed files with 1,340 additions and 784 deletions.
13 changes: 8 additions & 5 deletions .github/workflows/code_quality.yml
@@ -8,9 +8,11 @@ on:
push:
paths:
- '**.py'
+ - '.github/workflows/code_quality.yml'
pull_request:
paths:
- '**.py'
+ - '.github/workflows/code_quality.yml'

# Jobs that define the workflow
jobs:
@@ -33,22 +35,23 @@ jobs:
- uses: actions/cache@v3
with:
path: ~/.cache/pip
- key: ${{ runner.os }}-pip-pylint
+ key: ${{ runner.os }}-pip-${{ hashFiles('setup.py') }}
restore-keys: |
- ${{ runner.os }}-pip-pylint
+ ${{ runner.os }}-pip-
- name: Install dependencies
run: |
python -m pip install --upgrade pip
- pip install pylint
+ pip install .[tests]
- name: Analyse the code with pylint
run: |
- pylint --exit-zero narps_open > pylint_report_narps_open.txt
- pylint --exit-zero tests > pylint_report_tests.txt
+ pylint --fail-under 8 --ignore-paths narps_open/pipelines/ narps_open > pylint_report_narps_open.txt
+ pylint --fail-under 8 tests > pylint_report_tests.txt
- name: Archive pylint results
uses: actions/upload-artifact@v3
+ if: failure() # Only if previous step failed
with:
name: pylint-reports-python
path: |
2 changes: 1 addition & 1 deletion .github/workflows/pipeline_tests.yml
@@ -47,7 +47,7 @@ jobs:
echo "tests=$test_files" >> $GITHUB_OUTPUT
echo "teams=$teams" >> $GITHUB_OUTPUT
- # A job to identify and run the tests
+ # A job to run the tests
pytest:
needs: identify-tests
runs-on: self-hosted
2 changes: 1 addition & 1 deletion .github/workflows/test_changes.yml
@@ -38,7 +38,7 @@ jobs:
echo $test_files
echo "tests=$test_files" >> $GITHUB_OUTPUT
- # A job to list the tests to be run
+ # A job to run the tests
pytest:
needs: identify-tests
runs-on: self-hosted
10 changes: 10 additions & 0 deletions README.md
@@ -65,3 +65,13 @@ To get the pipelines running, please follow the installation steps in [INSTALL.m
## Funding

This project is supported by Région Bretagne (Boost MIND).

## Credits

This project is developed in the Empenn team by Boris Clenet, Elodie Germani, Jeremy Lefort-Besnard and Camille Maumet with contributions by Rémi Gau.

In addition, this project was presented and received contributions during the following events:
- OHBM Brainhack 2022 (June 2022): Elodie Germani, Arshitha Basavaraj, Trang Cao, Rémi Gau, Anna Menacher, Camille Maumet.
- e-ReproNim FENS NENS Cluster Brainhack: <ADD_NAMES_HERE>
- OHBM Brainhack 2023 (July 2023): <ADD_NAMES_HERE>
- ORIGAMI lab hackathon (Sept 2023):
45 changes: 43 additions & 2 deletions docs/description.md
@@ -8,7 +8,7 @@ The file `narps_open/data/description/analysis_pipelines_derived_descriptions.ts

The class `TeamDescription` of module `narps_open.data.description` acts as a parser for these two files.

- You can also use the command-line tool as so. Option `-t` is for the team id, option `-d` allows to print only one of the sub parts of the description among : `general`, `exclusions`, `preprocessing`, `analysis`, and `categorized_for_analysis`.
+ You can use the command-line tool as so. Option `-t` is for the team id, option `-d` allows to print only one of the sub parts of the description among : `general`, `exclusions`, `preprocessing`, `analysis`, and `categorized_for_analysis`. Options `--json` and `--md` allow to choose the export format you prefer between JSON and Markdown.

```bash
python narps_open/data/description -h
@@ -21,8 +21,25 @@ python narps_open/data/description -h
# -t TEAM, --team TEAM the team ID
# -d {general,exclusions,preprocessing,analysis,categorized_for_analysis,derived}, --dictionary {general,exclusions,preprocessing,analysis,categorized_for_analysis,derived}
# the sub dictionary of team description
# --json output team description as JSON
# --md output team description as Markdown

python narps_open/data/description -t 2T6S -d general
python narps_open/data/description -t 2T6S --json
# {
# "general.teamID": "2T6S",
# "general.NV_collection_link": "https://neurovault.org/collections/4881/",
# "general.results_comments": "NA",
# "general.preregistered": "No",
# "general.link_preregistration_form": "We did not pre-register our analysis.",
# "general.regions_definition": "We employed the pre-hypothesized brain regions (vmPFC, vSTR, and amygdala) from Barta, McGuire, and Kable (2010, Neuroimage). Specific MNI coordinates are:\nvmPFC: x = 2, y = 46, z = -8\nleft vSTR: x = -12, y = 12, z = -6, right vSTR = x = 12, y = 10, z = -6\n(right) Amygdala: x = 24, y = -4, z = -18",
# "general.softwares": "SPM12 , \nfmriprep 1.1.4",
# "exclusions.n_participants": "108",
# "exclusions.exclusions_details": "We did not exclude any participant in the analysis",
# "preprocessing.used_fmriprep_data": "Yes",
# "preprocessing.preprocessing_order": "We used the provided preprocessed data by fMRIPprep 1.1.4 (Esteban, Markiewicz, et al. (2018); Esteban, Blair, et al. (2018); RRID:SCR_016216), which is based on Nipype 1.1.1 (Gorgolewski et al. (2011); Gorgolewski et al. (2018); RRID:SCR_002502) and we additionally conducted a spatial smoothing using the provided preprocessed data set and SPM12. Here, we attach the preprocessing steps described in the provided data set. \nAnatomical data preprocessing\nThe T1-weighted (T1w) image was corrected for intensity non-uniformity (INU) using N4BiasFieldCorrection (Tustison et al. 2010, ANTs 2.2.0), and used as T1w-reference throughout the workflow. The T1w-reference was then skull-stripped using antsBrainExtraction.sh (ANTs 2.2.0), using OASIS as target template. Brain surfaces we
# ...

python narps_open/data/description -t 2T6S -d general --json
# {
# "teamID": "2T6S",
# "NV_collection_link": "https://neurovault.org/collections/4881/",
@@ -33,6 +50,30 @@ python narps_open/data/description -t 2T6S -d general
# "softwares": "SPM12 , \nfmriprep 1.1.4",
# "general_comments": "NA"
# }

python narps_open/data/description -t 2T6S --md
# # NARPS team description : 2T6S
# ## General
# * `teamID` : 2T6S
# * `NV_collection_link` : https://neurovault.org/collections/4881/
# * `results_comments` : NA
# * `preregistered` : No
# * `link_preregistration_form` : We did not pre-register our analysis.
# * `regions_definition` : We employed the pre-hypothesized brain regions (vmPFC, vSTR, and amygdala) from Barta, McGuire, and Kable (2010, Neuroimage). Specific MNI coordinates are:
# vmPFC: x = 2, y = 46, z = -8
# left vSTR: x = -12, y = 12, z = -6, right vSTR = x = 12, y = 10, z = -6
# (right) Amygdala: x = 24, y = -4, z = -18
# * `softwares` : SPM12 ,
# fmriprep 1.1.4
# * `general_comments` : NA
# ## Exclusions
# * `n_participants` : 108
# * `exclusions_details` : We did not exclude any participant in the analysis
# ## Preprocessing
# * `used_fmriprep_data` : Yes
# * `preprocessing_order` : We used the provided preprocessed data by fMRIPprep 1.1.4 (Esteban, Markiewicz, et al. (2018); Esteban, Blair, et al. (2018); RRID:SCR_016216), which is based on Nipype 1.1.1 (Gorgolewski et al. (2011); Gorgolewski et al. (2018); RRID:SCR_002502) and we additionally conducted a spatial smoothing using the provided preprocessed data set and SPM12. Here, we attach the preprocessing steps described in the provided data set.
# Anatomical data preprocessing
# ...
```

Of course the `narps_open.data.description` module is accessible programmatically, here is an example on how to use it:
2 changes: 1 addition & 1 deletion docs/status.md
@@ -36,7 +36,7 @@ print(pipeline_info['status'])
report.markdown() # Returns a string containing the markdown
```

- You can also use the command-line tool as so. Option `-t` is for the team id, option `-d` allows to print only one of the sub parts of the description among : `general`, `exclusions`, `preprocessing`, `analysis`, and `categorized_for_analysis`.
+ You can also use the command-line tool as so.

```bash
python narps_open/utils/status -h
33 changes: 33 additions & 0 deletions narps_open/data/description/__init__.py
@@ -5,6 +5,7 @@

from os.path import join
from csv import DictReader
+ from json import dumps
from importlib_resources import files

class TeamDescription(dict):
@@ -25,6 +26,9 @@ def __init__(self, team_id):
self.team_id = team_id
self._load()

+ def __str__(self):
+     return dumps(self, indent = 4)

@property
def general(self) -> dict:
""" Getter for the sub dictionary general """
@@ -55,6 +59,35 @@ def derived(self) -> dict:
""" Getter for the sub dictionary containing derived team description """
return self._get_sub_dict('derived')

def markdown(self):
""" Return the team description as a string formatted in markdown """
return_string = f'# NARPS team description : {self.team_id}\n'

dictionaries = [
self.general,
self.exclusions,
self.preprocessing,
self.analysis,
self.categorized_for_analysis,
self.derived
]

names = [
'General',
'Exclusions',
'Preprocessing',
'Analysis',
'Categorized for analysis',
'Derived'
]

for dictionary, name in zip(dictionaries, names):
return_string += f'## {name}\n'
for key in dictionary:
return_string += f'* `{key}` : {dictionary[key]}\n'

return return_string

def _get_sub_dict(self, key_first_part:str) -> dict:
""" Return a sub-dictionary of self, with keys that contain key_first_part.
The first part of the keys are removed, e.g.:
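The pattern this diff adds to `TeamDescription` — a `dict` subclass whose `__str__` is its JSON dump and whose `markdown()` walks sections to build a report — can be sketched in isolation. The `Description` class and the sample data below are illustrative stand-ins, not the real `TeamDescription` (which loads TSV files and uses flat dotted keys):

```python
from json import dumps

class Description(dict):
    """Illustrative stand-in for TeamDescription: a dict subclass that
    serializes itself as JSON and renders its sections as Markdown."""

    def __str__(self):
        # json.dumps serializes dict subclasses directly
        return dumps(self, indent = 4)

    def markdown(self, title):
        lines = [f'# {title}']
        for section, values in self.items():
            lines.append(f'## {section}')
            for key, value in values.items():
                lines.append(f'* `{key}` : {value}')
        return '\n'.join(lines)

description = Description({'General': {'teamID': '2T6S'}})
print(description.markdown('NARPS team description : 2T6S'))
# # NARPS team description : 2T6S
# ## General
# * `teamID` : 2T6S
```

Overriding `__str__` this way is what lets `__main__.py` fall back to `print(dumps(information, indent = 4))` and still get readable JSON for the whole description.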
36 changes: 23 additions & 13 deletions narps_open/data/description/__main__.py
@@ -22,22 +22,32 @@
'derived'
],
help='the sub dictionary of team description')
+ formats = parser.add_mutually_exclusive_group(required = False)
+ formats.add_argument('--json', action='store_true', help='output team description as JSON')
+ formats.add_argument('--md', action='store_true', help='output team description as Markdown')
arguments = parser.parse_args()

# Initialize a TeamDescription
information = TeamDescription(team_id = arguments.team)

- if arguments.dictionary == 'general':
-     print(dumps(information.general, indent = 4))
- elif arguments.dictionary == 'exclusions':
-     print(dumps(information.exclusions, indent = 4))
- elif arguments.dictionary == 'preprocessing':
-     print(dumps(information.preprocessing, indent = 4))
- elif arguments.dictionary == 'analysis':
-     print(dumps(information.analysis, indent = 4))
- elif arguments.dictionary == 'categorized_for_analysis':
-     print(dumps(information.categorized_for_analysis, indent = 4))
- elif arguments.dictionary == 'derived':
-     print(dumps(information.derived, indent = 4))
+ # Output description
+ if arguments.md and arguments.dictionary is not None:
+ print('Sub dictionaries cannot be exported as Markdown yet.')
+ print('Print the whole description instead.')
+ elif arguments.md:
+ print(information.markdown())
+ else:
+ print(dumps(information, indent = 4))
+ if arguments.dictionary == 'general':
+ print(dumps(information.general, indent = 4))
+ elif arguments.dictionary == 'exclusions':
+ print(dumps(information.exclusions, indent = 4))
+ elif arguments.dictionary == 'preprocessing':
+ print(dumps(information.preprocessing, indent = 4))
+ elif arguments.dictionary == 'analysis':
+ print(dumps(information.analysis, indent = 4))
+ elif arguments.dictionary == 'categorized_for_analysis':
+ print(dumps(information.categorized_for_analysis, indent = 4))
+ elif arguments.dictionary == 'derived':
+ print(dumps(information.derived, indent = 4))
+ else:
+ print(dumps(information, indent = 4))
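The `--json`/`--md` exclusivity added above relies on argparse's mutually exclusive groups: passing both flags makes the parser reject the command line. The behaviour can be checked with a minimal sketch (the flag names are taken from the diff; the bare parser is a stand-in for the real one, which also has `-t` and `-d`):

```python
import argparse

parser = argparse.ArgumentParser()
formats = parser.add_mutually_exclusive_group(required = False)
formats.add_argument('--json', action = 'store_true')
formats.add_argument('--md', action = 'store_true')

arguments = parser.parse_args(['--md'])
print(arguments.json, arguments.md)
# prints: False True

# Supplying both flags makes argparse print an error and exit, e.g.:
# parser.parse_args(['--json', '--md'])
# error: argument --md: not allowed with argument --json
```

Since `required = False`, omitting both flags is also valid, which is what lets the tool keep JSON as its default output format.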
2 changes: 1 addition & 1 deletion narps_open/pipelines/__init__.py
@@ -62,7 +62,7 @@
'O6R6': None,
'P5F3': None,
'Q58J': None,
- 'Q6O0': None,
+ 'Q6O0': 'PipelineTeamQ6O0',
'R42Q': None,
'R5K7': None,
'R7D1': None,
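The mapping edited here follows a simple registry pattern: each NARPS team ID maps to the name of its pipeline class, with `None` marking teams whose pipeline has not been reproduced yet. A reduced sketch of how such a registry can be queried (the dict below is a small excerpt; only the `Q6O0` entry reflects this commit, the rest are placeholders):

```python
# Reduced, illustrative excerpt of the implemented_pipelines registry.
implemented_pipelines = {
    'P5F3': None,                # no pipeline reproduced yet
    'Q58J': None,
    'Q6O0': 'PipelineTeamQ6O0',  # class name registered by this commit
    'R42Q': None,
}

# List only the teams whose pipeline is implemented.
implemented = [team for team, name in implemented_pipelines.items() if name is not None]
print(implemented)
# prints: ['Q6O0']
```

Keeping every team ID in the dict, implemented or not, lets the project track reproduction coverage and fail loudly when a pipeline that is requested has no registered class.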
