Reproduce recent changes for 0.5.1 #964

Merged · 7 commits · Oct 11, 2023
2 changes: 1 addition & 1 deletion .github/workflows/lint.yml
@@ -32,6 +32,6 @@ jobs:
run: python -m pip install \
flake8 flake8-absolute-import flake8-black flake8-docstrings \
flake8-isort flake8-pyproject flake8-unused-arguments \
-           flake8-use-fstring flake8-warnings pep8-naming
+           flake8-use-fstring pep8-naming
- name: Check xcp_d
run: python -m flake8 xcp_d
18 changes: 16 additions & 2 deletions docs/workflows.rst
@@ -277,13 +277,16 @@ Denoising
=========
:class:`~xcp_d.interfaces.nilearn.DenoiseNifti`, :class:`~xcp_d.interfaces.nilearn.DenoiseCifti`

- Temporal censoring
- ------------------
+ Temporal censoring [OPTIONAL]
+ -----------------------------

Prior to confound regression, high-motion volumes will be removed from the BOLD data.
These volumes will also be removed from the nuisance regressors.
Please refer to :footcite:t:`power2012spurious` for more information.

+ .. tip::
+     Censoring can be disabled by setting ``--fd-thresh 0``.
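For intuition, censoring is just row-wise masking. Below is a minimal sketch under assumed array shapes (volumes on the first axis); the ``censor`` helper is illustrative, not XCP-D's actual implementation, but the temporal mask format (a ``framewise_displacement`` column in which 1 marks an outlier) matches XCP-D's outputs.

.. code-block:: python

    import pandas as pd

    def censor(bold, confounds, temporal_mask_file):
        """Drop high-motion volumes from the data and the regressors alike."""
        censoring_df = pd.read_table(temporal_mask_file)
        # 1 marks a high-motion outlier; 0 marks a retained volume.
        keep = censoring_df["framewise_displacement"].values == 0
        # The same rows are dropped from the BOLD data (volumes x voxels)
        # and from the nuisance regressors (volumes x regressors).
        return bold[keep, :], confounds[keep, :]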


Confound regression
-------------------
@@ -341,6 +344,17 @@
An interpolated version of the ``denoised BOLD`` is then created by filling in the
outlier volumes with cubic spline interpolated data, as implemented in ``Nilearn``.
The resulting ``interpolated, denoised BOLD`` is primarily used for bandpass filtering.

+ .. warning::
+     In versions 0.4.0rc2 - 0.5.0, XCP-D used cubic spline interpolation,
+     followed by bandpass filtering.
+
+     However, cubic spline interpolation can introduce large spikes and drops in the signal
+     when censored volumes fall at the beginning or end of the run,
+     which are then propagated to the filtered data.
+
+     To address this, as of 0.5.1, XCP-D replaces interpolated volumes at the edges of the
+     run with the closest non-outlier volume's data.
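To make the edge problem and the fix concrete, here is a minimal sketch using ``scipy``; it illustrates the described behavior under assumed array shapes and is not XCP-D's actual implementation (which relies on ``Nilearn``).

.. code-block:: python

    import numpy as np
    from scipy.interpolate import CubicSpline

    def interpolate_censored(data, tmask):
        """Spline-interpolate outlier volumes, then patch the run's edges.

        data : (n_volumes, n_voxels) array. tmask : boolean array, True = outlier.
        """
        t = np.arange(data.shape[0])
        good = ~tmask
        interp = data.copy()
        # Fit cubic splines through the retained volumes and fill in the outliers.
        interp[tmask] = CubicSpline(t[good], data[good], axis=0)(t[tmask])

        # Splines can swing wildly beyond the first/last retained volumes, so
        # edge outliers instead take the value of the closest retained volume.
        first_good, last_good = np.flatnonzero(good)[[0, -1]]
        interp[:first_good] = data[first_good]
        interp[last_good + 1:] = data[last_good]
        return interp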


Bandpass filtering [OPTIONAL]
-----------------------------
3 changes: 1 addition & 2 deletions pyproject.toml
@@ -24,7 +24,7 @@ dependencies = [
"matplotlib ~= 3.4.2",
"networkx ~= 2.8.8", # nipype needs networkx, but 3+ isn't compatible with nipype 1.8.5
"nibabel >= 3.2.1",
"nilearn ~= 0.10.0",
"nilearn == 0.10.1", # 0.10.2 raises error with compcor from fMRIPrep 23
"nipype ~= 1.8.5",
"niworkflows == 1.7.3",
"num2words", # for boilerplates
@@ -74,7 +74,6 @@ tests = [
"flake8-pyproject",
"flake8-unused-arguments",
"flake8-use-fstring",
"flake8-warnings",
"pep8-naming",
"pytest",
"pytest-cov",
@@ -138,16 +138,15 @@
#}
{% include "anatomical_registration_plots.html.jinja" %}

- {# Carpet/line plot for pre- and post-regression, concatenate across runs. #}
- {% include "concatenated_task_static_plots.html.jinja" %}

{#
- Task static plots. One section per run of each task.
- 1. Task in T1
- 2. T1 in Task
- 3. BOLD mean(?) image on the left
- 4. BOLD reference image on the left
- 3/4. Pre and post regression carpet/line plots on right.
+ "Functional Data" section, with BOLD figures.
+ 1. Concatenated resting-state carpet plots.
+ 2. One section per run of each task.
+    1. Task in T1
+    2. T1 in Task
+    3. BOLD mean(?) image on the left
+    4. BOLD reference image on the left
+    3/4. Pre and post regression carpet/line plots on right.
#}
{% include "task_static_plots.html.jinja" %}

@@ -2,8 +2,9 @@
Start the tasks section and put in the column headings for the task-specific data.

Inputs:
- task_files[]{"task"}
- task_files[]{"run"}
- concatenated_rest_files{"preproc_carpet"}
- concatenated_rest_files{"postproc_carpet"}
- task_files[]{"key"}
- task_files[]{"registration_files"}
- task_files[]{"registration_titles"}
- task_files[]{"bold"}
@@ -22,11 +23,14 @@
<div class="w3-container">
<div class="w3-row-padding">
<div class="w3-center"><h2>Functional Data</h2></div>

+ {# Carpet/line plot for pre- and post-regression, concatenate across runs. #}
+ {% include "concatenated_task_static_plots.html.jinja" %}
+
<div>
{% for run_dict in task_files %}

-     {% set task = run_dict["task"] %}
-     {% set run = run_dict["run"] %}
+     {% set key = run_dict["key"] %}
{% set registration_files = run_dict["registration_files"] %}
{% set registration_titles = run_dict["registration_titles"] %}
{% set bold = run_dict["bold"] %}
@@ -38,7 +42,7 @@
Add the task name for the next few rows.
#}
<div class="w3-row"></div>
<div class="w3-left label2">task-{{ task }} run-{{ run }}:</div>
<div class="w3-left label2">{{ key }}:</div>
<div class="w3-row"></div>

{# Full rows for registration files #}
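To illustrate the new ``key`` variable, here is how this template fragment renders; the ``run_dict`` value below is invented, and the real string is assembled in ``xcp_d/interfaces/execsummary.py`` (shown next):

    from jinja2 import Template

    run_dict = {"key": "task-nback run-1"}  # hypothetical entry from task_files
    template = Template('<div class="w3-left label2">{{ key }}:</div>')
    print(template.render(**run_dict))
    # <div class="w3-left label2">task-nback run-1:</div>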
76 changes: 53 additions & 23 deletions xcp_d/interfaces/execsummary.py
@@ -4,6 +4,8 @@
import re
from pathlib import Path

+ import numpy as np
+ import pandas as pd
from bids.layout import BIDSLayout, Query
from bs4 import BeautifulSoup
from jinja2 import Environment, FileSystemLoader, Markup
@@ -144,6 +146,24 @@

self.structural_files_ = structural_files

+         # Collect figures for concatenated resting-state data (if any)
+         concatenated_rest_files = {}
+
+         query = {
+             "subject": self.subject_id,
+             "task": "rest",
+             "run": Query.NONE,
+             "desc": "preprocESQC",
+             "suffix": "bold",
+             "extension": ".svg",
+         }
+         concatenated_rest_files["preproc_carpet"] = self._get_bids_file(query)
+
+         query["desc"] = "postprocESQC"
+         concatenated_rest_files["postproc_carpet"] = self._get_bids_file(query)
+
+         self.concatenated_rest_files_ = concatenated_rest_files
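For context, ``Query.NONE`` matches files that lack the entity entirely, which is what distinguishes the concatenated carpet plots from per-run ones. A minimal standalone sketch of the same query, with a hypothetical dataset path and a direct ``layout.get`` call in place of the ``_get_bids_file`` helper:

    from bids.layout import BIDSLayout, Query

    layout = BIDSLayout("/data/xcp_d/figures", validate=False)  # hypothetical path
    svgs = layout.get(
        subject="01",
        task="rest",
        run=Query.NONE,  # only files WITHOUT a run entity, i.e. concatenated plots
        desc="preprocESQC",
        suffix="bold",
        extension=".svg",
    )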

# Determine the unique entity-sets for the task data.
postproc_files = self.layout.get(
subject=self.subject_id,
@@ -166,34 +186,42 @@
task_entity_sets = []
for entity_set in unique_entity_sets:
for entity in ORDERING:
-                 entity_set[entity] = entity_set.get(entity, Query.NONE)
+                 entity_set[entity] = entity_set.get(entity, np.nan)

task_entity_sets.append(entity_set)

-         concatenated_rest_files = {}
+         # Now sort the entity sets by each entity
+         task_entity_sets = pd.DataFrame(task_entity_sets)
+         task_entity_sets = task_entity_sets.sort_values(by=task_entity_sets.columns.tolist())
+         task_entity_sets = task_entity_sets.fillna(Query.NONE)

-         query = {
-             "subject": self.subject_id,
-             "task": "rest",
-             "run": Query.NONE,
-             "desc": "preprocESQC",
-             "suffix": "bold",
-             "extension": ".svg",
-         }
-         concatenated_rest_files["preproc_carpet"] = self._get_bids_file(query)
-
-         query["desc"] = "postcarpetplot"
-         concatenated_rest_files["postproc_carpet"] = self._get_bids_file(query)
+         # Extract entities with variability
+         # This lets us name the sections based on multiple entities (not just task and run)
+         nunique = task_entity_sets.nunique()
+         nunique.loc["task"] = 2  # ensure we keep task
+         nunique.loc["run"] = 2  # ensure we keep run
+         cols_to_drop = nunique[nunique == 1].index
+         task_entity_namer = task_entity_sets.drop(cols_to_drop, axis=1)

-         self.concatenated_rest_files_ = concatenated_rest_files
+         # Convert back to dictionary
+         task_entity_sets = task_entity_sets.to_dict(orient="records")
+         task_entity_namer = task_entity_namer.to_dict(orient="records")

task_files = []

-         for task_entity_set in task_entity_sets:
-             task_file_figures = task_entity_set.copy()
-             task_file_figures[
-                 "key"
-             ] = f"task-{task_entity_set['task']}_run-{task_entity_set.get('run', 0)}"
+         for i_set, task_entity_set in enumerate(task_entity_sets):
+             task_file_figures = {}
+
+             # Convert any floats in the name to ints
+             temp_dict = {}
+             for k, v in task_entity_namer[i_set].items():
+                 try:
+                     temp_dict[k] = int(v)
+                 except (ValueError, TypeError):
+                     temp_dict[k] = v
+
+             # String used for subsection headers
+             task_file_figures["key"] = " ".join([f"{k}-{v}" for k, v in temp_dict.items()])

query = {
"subject": self.subject_id,
@@ -222,10 +250,12 @@

task_file_figures["registration_files"].append(found_file)

-             task_files.append(task_file_figures)
+             # If there is no mean BOLD figure, the "run" was made by the concatenation
+             # workflow.
+             # Skip the concatenated resting-state scan, since it has its own section.
+             if query["task"] == "rest" and not task_file_figures["bold"]:
+                 continue

(Codecov / codecov/patch annotation: added line xcp_d/interfaces/execsummary.py#L256 was not covered by tests.)

-         # Sort the files by the desired key
-         task_files = sorted(task_files, key=lambda d: d["key"])
+             task_files.append(task_file_figures)

self.task_files_ = task_files
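A self-contained sketch of the sorting/naming logic above, with invented entity sets; entities that never vary are dropped from the section names, while ``task`` and ``run`` are always kept:

    import pandas as pd

    entity_sets = [
        {"task": "rest", "acq": "mb", "run": 2},
        {"task": "rest", "acq": "mb", "run": 1},
    ]
    df = pd.DataFrame(entity_sets)
    df = df.sort_values(by=df.columns.tolist())

    nunique = df.nunique()
    nunique.loc[["task", "run"]] = 2  # always keep task and run
    namer = df.drop(nunique[nunique == 1].index, axis=1)  # "acq" is constant, so dropped

    print(namer.to_dict(orient="records"))
    # [{'task': 'rest', 'run': 1}, {'task': 'rest', 'run': 2}]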

71 changes: 48 additions & 23 deletions xcp_d/interfaces/plotting.py
@@ -277,7 +277,7 @@ class QCPlots(SimpleInterface):
output_spec = _QCPlotsOutputSpec

def _run_interface(self, runtime):
-         # Load confound matrix and load motion with motion filtering
+         # Load confound matrix and load motion without motion filtering
confounds_df = pd.read_table(self.inputs.fmriprep_confounds_file)
preproc_motion_df = load_motion(
confounds_df.copy(),
@@ -297,6 +297,7 @@ def _run_interface(self, runtime):
censoring_df = pd.read_table(self.inputs.temporal_mask)
tmask_arr = censoring_df["framewise_displacement"].values
num_censored_volumes = int(tmask_arr.sum())
+         num_retained_volumes = int((tmask_arr == 0).sum())

# Apply temporal mask to interpolated/full data
rmsd_censored = rmsd[tmask_arr == 0]
@@ -382,40 +383,57 @@

# Calculate QC measures
mean_fd = np.mean(preproc_fd_timeseries)
-         mean_rms = np.nanmean(rmsd_censored)  # first value can be NaN if no dummy scans
+         mean_fd_post_censoring = np.mean(postproc_fd_timeseries)
+         mean_relative_rms = np.nanmean(rmsd_censored)  # first value can be NaN if no dummy scans
mean_dvars_before_processing = np.mean(dvars_before_processing)
mean_dvars_after_processing = np.mean(dvars_after_processing)
-         motionDVCorrInit = np.corrcoef(preproc_fd_timeseries, dvars_before_processing)[0][1]
-         motionDVCorrFinal = np.corrcoef(postproc_fd_timeseries, dvars_after_processing)[0][1]
+         fd_dvars_correlation_initial = np.corrcoef(preproc_fd_timeseries, dvars_before_processing)[
+             0, 1
+         ]
+         fd_dvars_correlation_final = np.corrcoef(postproc_fd_timeseries, dvars_after_processing)[
+             0, 1
+         ]
rmsd_max_value = np.nanmax(rmsd_censored)

# A summary of all the values
qc_values_dict.update(
{
"meanFD": [mean_fd],
"relMeansRMSMotion": [mean_rms],
"relMaxRMSMotion": [rmsd_max_value],
"meanDVInit": [mean_dvars_before_processing],
"meanDVFinal": [mean_dvars_after_processing],
"mean_fd": [mean_fd],
"mean_fd_post_censoring": [mean_fd_post_censoring],
"mean_relative_rms": [mean_relative_rms],
"max_relative_rms": [rmsd_max_value],
"mean_dvars_initial": [mean_dvars_before_processing],
"mean_dvars_final": [mean_dvars_after_processing],
"num_dummy_volumes": [dummy_scans],
"num_censored_volumes": [num_censored_volumes],
"nVolsRemoved": [dummy_scans],
"motionDVCorrInit": [motionDVCorrInit],
"motionDVCorrFinal": [motionDVCorrFinal],
"num_retained_volumes": [num_retained_volumes],
"fd_dvars_correlation_initial": [fd_dvars_correlation_initial],
"fd_dvars_correlation_final": [fd_dvars_correlation_final],
}
)

qc_metadata = {
"meanFD": {
"mean_fd": {
"LongName": "Mean Framewise Displacement",
"Description": (
"Average framewise displacement without any motion parameter filtering. "
"This value includes high-motion outliers, but not dummy volumes. "
"FD is calculated according to the Power definition."
),
"Units": "mm",
"Units": "mm / volume",
"Term URL": "https://doi.org/10.1016/j.neuroimage.2011.10.018",
},
"mean_fd_post_censoring": {
"LongName": "Mean Framewise Displacement After Censoring",
"Description": (
"Average framewise displacement without any motion parameter filtering. "
"This value does not include high-motion outliers or dummy volumes. "
"FD is calculated according to the Power definition."
),
"Units": "mm / volume",
"Term URL": "https://doi.org/10.1016/j.neuroimage.2011.10.018",
},
"relMeansRMSMotion": {
"mean_relative_rms": {
"LongName": "Mean Relative Root Mean Squared",
"Description": (
"Average relative root mean squared calculated from motion parameters, "
Expand All @@ -424,7 +442,7 @@ def _run_interface(self, runtime):
),
"Units": "arbitrary",
},
"relMaxRMSMotion": {
"max_relative_rms": {
"LongName": "Maximum Relative Root Mean Squared",
"Description": (
"Maximum relative root mean squared calculated from motion parameters, "
Expand All @@ -433,44 +451,51 @@ def _run_interface(self, runtime):
),
"Units": "arbitrary",
},
"meanDVInit": {
"mean_dvars_initial": {
"LongName": "Mean DVARS Before Postprocessing",
"Description": (
"Average DVARS (temporal derivative of root mean squared variance over "
"voxels) calculated from the preprocessed BOLD file, after dummy scan removal."
),
"TermURL": "https://doi.org/10.1016/j.neuroimage.2011.02.073",
},
"meanDVFinal": {
"mean_dvars_final": {
"LongName": "Mean DVARS After Postprocessing",
"Description": (
"Average DVARS (temporal derivative of root mean squared variance over "
"voxels) calculated from the denoised BOLD file."
),
"TermURL": "https://doi.org/10.1016/j.neuroimage.2011.02.073",
},
"num_dummy_volumes": {
"LongName": "Number of Dummy Volumes",
"Description": (
"The number of non-steady state volumes removed from the time series by XCP-D."
),
},
"num_censored_volumes": {
"LongName": "Number of Censored Volumes",
"Description": (
"The number of high-motion outlier volumes censored by XCP-D. "
"This does not include dummy volumes."
),
},
"nVolsRemoved": {
"LongName": "Number of Dummy Volumes",
"num_retained_volumes": {
"LongName": "Number of Retained Volumes",
"Description": (
"The number of non-steady state volumes removed from the time series by XCP-D."
"The number of volumes retained in the denoised dataset. "
"This does not include dummy volumes or high-motion outliers."
),
},
"motionDVCorrInit": {
"fd_dvars_correlation_initial": {
"LongName": "FD-DVARS Correlation Before Postprocessing",
"Description": (
"The Pearson correlation coefficient between framewise displacement and DVARS "
"(temporal derivative of root mean squared variance over voxels), "
"after removal of dummy volumes, but before removal of high-motion outliers."
),
},
"motionDVCorrFinal": {
"fd_dvars_correlation_final": {
"LongName": "FD-DVARS Correlation After Postprocessing",
"Description": (
"The Pearson correlation coefficient between framewise displacement and DVARS "
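The renamed values and their metadata pair naturally as a TSV file with a JSON sidecar in the BIDS style. A minimal sketch of that pairing, using an excerpt of the dictionaries above and hypothetical file names:

    import json

    import pandas as pd

    qc_values_dict = {"mean_fd": [0.12], "num_censored_volumes": [7]}  # excerpt
    qc_metadata = {  # excerpt
        "mean_fd": {"LongName": "Mean Framewise Displacement", "Units": "mm / volume"},
    }

    pd.DataFrame(qc_values_dict).to_csv("desc-linc_qc.tsv", sep="\t", index=False)
    with open("desc-linc_qc.json", "w") as fobj:
        json.dump(qc_metadata, fobj, sort_keys=True, indent=4)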