[DOC] adding template for pipeline testing
bclenet committed Sep 22, 2023
1 parent 6715950 commit 57b8c86
Showing 3 changed files with 108 additions and 4 deletions.
4 changes: 2 additions & 2 deletions docs/ci-cd.md
@@ -35,10 +35,10 @@ For now, the following workflows are set up:
| Name / File | What does it do? | When is it launched? | Where does it run? | How can I see the results? |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| [code_quality](/.github/workflows/code_quality.yml) | A static analysis of the Python code (see the [testing](/docs/testing.md) topic of the documentation for more information). | For every push or pull_request if there are changes on `.py` files. | On GitHub servers. | Outputs (logs of pylint) are stored as [downloadable artifacts](https://docs.github.com/en/actions/managing-workflow-runs/downloading-workflow-artifacts) for 15 days after the push. |
| [codespell](/.github/workflows/codespell.yml) | A static analysis of the text files for commonly made typos using [codespell](https://github.com/codespell-project/codespell). | For every push or pull_request to the `maint` branch. | On GitHub servers. | Outputs (logs of codespell) are stored as [downloadable artifacts](https://docs.github.com/en/actions/managing-workflow-runs/downloading-workflow-artifacts) during 15 days after the push. |
| [codespell](/.github/workflows/codespell.yml) | A static analysis of the text files for commonly made typos using [codespell](https://github.com/codespell-project/codespell). | For every push or pull_request to the `main` branch. | On GitHub servers. | Typos are displayed in the workflow summary. |
| [pipeline_tests](/.github/workflows/pipelines.yml) | Runs all the tests for changed pipelines. | For every push or pull_request, if a pipeline file changed. | On Empenn runners. | Outputs (logs of pytest) are stored as downloadable artifacts for 15 days after the push. |
| [test_changes](/.github/workflows/test_changes.yml) | Runs all the changed tests for the project. | For every push or pull_request, if a test file changed. | On Empenn runners. | Outputs (logs of pytest) are stored as downloadable artifacts for 15 days after the push. |
| [unit_testing](/.github/workflows/unit_testing.yml) | It runs all the unit tests for the project (see the [testing](/docs/testing.md) topic of the documentation for more information). | For every push or pull_request, if a file changed inside `narps_open/`, or a file related to test execution. | On GitHub servers. | Outputs (logs of pytest) are stored as downloadable artifacts during 15 days after the push. |
| [unit_testing](/.github/workflows/unit_testing.yml) | Runs all the unit tests for the project (see the [testing](/docs/testing.md) topic of the documentation for more information). | For every push or pull_request, if a file changed inside `narps_open/`, or a file related to test execution. | On Empenn runners. | Outputs (logs of pytest) are stored as downloadable artifacts for 15 days after the push. |

### Cache

7 changes: 5 additions & 2 deletions docs/pipelines.md
@@ -126,6 +126,9 @@ As explained before, all pipeline inherit from the `narps_open.pipelines.Pipelin

## Test your pipeline

First have a look at the [testing topic of the documentation](/docs/testing.md). It explains how testing works for inside project and how you should write the tests related to your pipeline.
First have a look at the [testing page of the documentation](/docs/testing.md). It explains how testing works for the project and how you should write the tests related to your pipeline.

Feel free to have a look at [tests/pipelines/test_team_2T6S.py](/tests/pipelines/test_team_2T6S.py), which is the file containing all the automatic tests for the 2T6S pipeline : it gives a good example.
All tests must be contained in a single file named `tests/pipelines/test_team_<team_id>.py`. You can start by copy-pasting the template file [tests/pipelines/templates/test_team_XXXX.py](/tests/pipelines/templates/test_team_XXXX.py) into the `tests/pipelines/` directory, replacing `XXXX` with the team id (a minimal sketch of this step is given after the note below). Then, follow the tips inside the template and don't forget to replace `XXXX` with the actual team id inside the file as well.

> [!NOTE]
> Feel free to have a look at [tests/pipelines/test_team_2T6S.py](/tests/pipelines/test_team_2T6S.py), which contains all the automatic tests for the 2T6S pipeline: it gives an example.
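As an illustration, here is a minimal sketch of the copy-and-rename step above (not part of the repository; it assumes you run it from the repository root, and `1234` stands for a hypothetical team id):

```python
# Minimal sketch: copy the test template and substitute a (hypothetical) team id.
from pathlib import Path

TEAM_ID = '1234'  # hypothetical id, replace with the actual team id

template = Path('tests/pipelines/templates/test_team_XXXX.py')
target = Path(f'tests/pipelines/test_team_{TEAM_ID}.py')
target.write_text(template.read_text().replace('XXXX', TEAM_ID))
```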
101 changes: 101 additions & 0 deletions tests/pipelines/templates/test_team_XXXX.py
@@ -0,0 +1,101 @@
#!/usr/bin/python
# coding: utf-8

""" This template can be use to test a pipeline.
- Replace all occurrences of XXXX by the actual id of the team.
- All lines starting with [INFO], are meant to help you during the reproduction, these can be removed
eventually.
- Also remove lines starting with [TODO], once you did what they suggested.
- Remove this docstring once you are done with coding the tests.
"""

""" Tests of the 'narps_open.pipelines.team_XXXX' module.
Launch this test with PyTest
Usage:
======
pytest -q test_team_XXXX.py
pytest -q test_team_XXXX.py -k <selected_test>
"""

# [INFO] About these imports:
# [INFO] - pytest.helpers allows you to use the helpers registered in tests/conftest.py
# [INFO] - pytest.mark allows you to categorize tests as unit or pipeline tests
from pytest import helpers, mark

from nipype import Workflow

# [INFO] Of course, import the class you want to test, here the Pipeline class for the team XXXX
from narps_open.pipelines.team_XXXX import PipelineTeamXXXX

# [INFO] All tests should be contained in the following class, in order to sort them.
class TestPipelinesTeamXXXX:
""" A class that contains all the unit tests for the PipelineTeamXXXX class."""

# [TODO] Write one or several unit_test (and mark them as such)
# [TODO] ideally for each method of the class you test.

# [INFO] Here is one example for the __init__() method
@staticmethod
@mark.unit_test
def test_create():
""" Test the creation of a PipelineTeamXXXX object """

pipeline = PipelineTeamXXXX()
assert pipeline.fwhm == 8.0
assert pipeline.team_id == 'XXXX'

    # [INFO] Here is one example for the methods returning workflows
    @staticmethod
    @mark.unit_test
    def test_workflows():
        """ Test the workflows of a PipelineTeamXXXX object """

        pipeline = PipelineTeamXXXX()
        assert pipeline.get_preprocessing() is None
        assert pipeline.get_run_level_analysis() is None
        assert isinstance(pipeline.get_subject_level_analysis(), Workflow)
        group_level = pipeline.get_group_level_analysis()

        assert len(group_level) == 3
        for sub_workflow in group_level:
            assert isinstance(sub_workflow, Workflow)

    # [INFO] Here is one example for the methods returning outputs
    @staticmethod
    @mark.unit_test
    def test_outputs():
        """ Test the expected outputs of a PipelineTeamXXXX object """
        pipeline = PipelineTeamXXXX()

        # 1 - 1 subject outputs
        pipeline.subject_list = ['001']
        assert len(pipeline.get_preprocessing_outputs()) == 0
        assert len(pipeline.get_run_level_outputs()) == 0
        assert len(pipeline.get_subject_level_outputs()) == 7
        assert len(pipeline.get_group_level_outputs()) == 63
        assert len(pipeline.get_hypotheses_outputs()) == 18

        # 2 - 4 subjects outputs
        pipeline.subject_list = ['001', '002', '003', '004']
        assert len(pipeline.get_preprocessing_outputs()) == 0
        assert len(pipeline.get_run_level_outputs()) == 0
        assert len(pipeline.get_subject_level_outputs()) == 28
        assert len(pipeline.get_group_level_outputs()) == 63
        assert len(pipeline.get_hypotheses_outputs()) == 18

    # [TODO] Feel free to add other methods, e.g. to test the custom node functions of the pipeline

    # [TODO] Write one pipeline_test (and mark it as such)

    # [INFO] The pipeline_test will most likely be written exactly this way:
    @staticmethod
    @mark.pipeline_test
    def test_execution():
        """ Test the execution of a PipelineTeamXXXX and compare results """

        # [INFO] We use the `test_pipeline_evaluation` helper which is responsible for running the
        # [INFO] pipeline, iterating over subjects and comparing outputs with the expected results.
        helpers.test_pipeline_evaluation('XXXX')
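For context, `test_pipeline_evaluation` is one of the helpers registered in `tests/conftest.py`. A rough, hypothetical sketch of how such a helper can be registered (assuming the pytest-helpers-namespace plugin; the actual helper in the repository is more complete) looks like this:

```python
# Hypothetical sketch of a helper registration in tests/conftest.py,
# assuming the pytest-helpers-namespace plugin is installed.
import pytest

@pytest.helpers.register
def test_pipeline_evaluation(team_id: str):
    """ Run the pipeline of team `team_id` and compare its outputs with the expected results. """
    # [Placeholder] launch the pipeline for the given team, then check that the
    # produced statistic maps are close enough to the original team results.
    ...
```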
