diff --git a/INSTALL.md b/INSTALL.md index 872e0b44..0ad3bdfa 100644 --- a/INSTALL.md +++ b/INSTALL.md @@ -77,6 +77,7 @@ pip install . Finally, you are able to use the scripts of the project : * `narps_open_runner`: run pipelines +* `narps_open_tester`: run a pipeline and test its results against original ones from the team * `narps_description`: get the textual description made by a team * `narps_results`: download the original results from teams * `narps_open_status`: get status information about the development process of the pipelines @@ -85,6 +86,10 @@ Finally, you are able to use the scripts of the project : # Run the pipeline for team 2T6S, with 40 subjects narps_open_runner -t 2T6S -n 40 +# Run the pipeline for team 08MQ, compare results with original ones, +# and produce a report with correlation values. +narps_open_tester -t 08MQ + # Get the description of team C88N in markdown formatting narps_description -t C88N --md @@ -98,6 +103,7 @@ narps_open_status --json > [!NOTE] > For further information about these command line tools, read the corresponding documentation pages. > * `narps_open_runner` : [docs/running.md](docs/running.md) +> * `narps_open_tester` : [docs/testing.md](docs/testing.md#command-line-tool) > * `narps_description` : [docs/description.md](docs/description.md) -> * `narps_results` : [docs/data.md](docs/data.md) +> * `narps_results` : [docs/data.md](docs/data.md#results-from-narps-teams) > * `narps_open_status` : [docs/status.md](docs/status.md) diff --git a/docs/testing.md b/docs/testing.md index 5294ea9b..1ea3b66c 100644 --- a/docs/testing.md +++ b/docs/testing.md @@ -2,6 +2,13 @@ :mega: This file describes the test suite and features for the project. 
+## Test dependencies + +Before using the test suite, make sure you have installed all the dependencies. After step 5 of the [installation process](../INSTALL.md), run this command: +```bash +pip install .[tests] +``` + ## Static analysis We use [*pylint*](http://pylint.pycqa.org/en/latest/) to run static code analysis. @@ -24,7 +31,7 @@ black ./narps_open/runner.py ## Automatic tests -Use [*pytest*](https://docs.pytest.org/en/6.2.x/contents.html) to run automatic testing and its [*pytest-cov*](https://pytest-cov.readthedocs.io/en/latest/) plugin to control code coverage. Furthermore, [*pytest-helpers-namespace*](https://pypi.org/project/pytest-helpers-namespace/) enables to register helper functions. +We use [*pytest*](https://docs.pytest.org/en/6.2.x/contents.html) to run automatic testing and its [*pytest-cov*](https://pytest-cov.readthedocs.io/en/latest/) plugin to control code coverage. Furthermore, [*pytest-helpers-namespace*](https://pypi.org/project/pytest-helpers-namespace/) enables registering helper functions. > The pytest framework makes it easy to write small tests, yet scales to support complex functional testing for applications and libraries. @@ -36,6 +43,21 @@ Tests can be launched manually or while using CI (Continuous Integration). * To run tests with a given mark 'mark' : `pytest -m 'mark'` * To create code coverage data : `coverage run -m pytest ./tests` then `coverage report` to see the code coverage result or `coverage xml` to output a .xml report file +## Command line tool + +We created the simple command line tool `narps_open_tester` to help test the outcome of one pipeline. + +> [!WARNING] +> This command must be launched from inside the repository's root directory, because it needs to access the `tests` directory relative to the current working directory. + +```bash +narps_open_tester -t 08MQ +``` + +This will run the pipeline for the requested team (here 08MQ) on subsets of subjects (20, 40, 60, 80 and 108). 
For each subset, the outputs of the pipeline (statistical maps for each of the 9 hypotheses) will be compared with original results from the team using a Pearson correlation computation. At each step, if one of the correlation scores is below the threshold (see `correlation_thresholds` defined in `narps_open/utils/configuration/testing_config.toml`), the test ends. Otherwise, it proceeds to the next step, i.e. the next subset of subjects. + +Once finished, a text file report (`test_pipeline-*.txt`) is created, containing all the computed correlation values. + ## Configuration files for testing * `pytest.ini` is a global configuration file for using pytest (see reference [here](https://docs.pytest.org/en/7.1.x/reference/customize.html)). It allows you to [register markers](https://docs.pytest.org/en/7.1.x/example/markers.html) that help to better identify tests. Note that `pytest.ini` could be replaced by data inside `pyproject.toml` in future versions. diff --git a/narps_open/tester.py b/narps_open/tester.py new file mode 100644 index 00000000..1a2cf284 --- /dev/null +++ b/narps_open/tester.py @@ -0,0 +1,29 @@ +#!/usr/bin/python +# coding: utf-8 + +""" This module allows comparing pipeline output with original team results """ + +import sys +from argparse import ArgumentParser + +import pytest + +def main(): + """ Entry-point for the command line tool narps_open_tester """ + + # Parse arguments + parser = ArgumentParser(description='Test the pipelines from NARPS.') + parser.add_argument('-t', '--team', type=str, required=True, + help='the team ID') + arguments = parser.parse_args() + + sys.exit(pytest.main([ + '-s', + '-q', + '-x', + f'tests/pipelines/test_team_{arguments.team}.py', + '-m', + 'pipeline_test'])) + +if __name__ == '__main__': + main() diff --git a/setup.py b/setup.py index d28a3dab..91a2d63a 100644 --- a/setup.py +++ b/setup.py @@ -67,6 +67,7 @@ entry_points = { 'console_scripts': [ 'narps_open_runner = narps_open.runner:main', + 'narps_open_tester 
= narps_open.tester:main', 'narps_open_status = narps_open.utils.status:main', 'narps_description = narps_open.data.description.__main__:main', 'narps_results = narps_open.data.results.__main__:main'
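The documentation added above says each reproduced statistical map is scored against the team's original map with a Pearson correlation. As an illustration only (this is not the project's actual test code, which lives under `tests/pipelines/`, and the function name and threshold value here are hypothetical), a minimal NumPy sketch of such a comparison, using flat arrays in place of real statistical maps:

```python
import numpy as np

def pearson_score(reproduced: np.ndarray, original: np.ndarray) -> float:
    """Pearson correlation between two flattened statistical maps."""
    # np.corrcoef returns a 2x2 correlation matrix; the off-diagonal
    # element is the correlation between the two inputs
    return float(np.corrcoef(reproduced.ravel(), original.ravel())[0, 1])

# Toy data standing in for voxel values of an original map and a
# close reproduction of it
rng = np.random.default_rng(0)
original = rng.normal(size=1000)
reproduced = original + rng.normal(scale=0.1, size=1000)

score = pearson_score(reproduced, original)
print(f'correlation: {score:.3f}')

# The tester stops as soon as a score falls below a configured threshold;
# 0.90 is a made-up value, the real ones come from testing_config.toml
THRESHOLD = 0.90
assert score > THRESHOLD
```

In the real tool this check is repeated for each of the 9 hypothesis maps and for each subject subset, aborting at the first subset whose scores fall below the configured threshold.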