From 9fa3b8239801099175845fc3875aa0802e0fccca Mon Sep 17 00:00:00 2001 From: smoia Date: Mon, 8 Jan 2024 11:57:44 +0100 Subject: [PATCH] More to the switch to AMA --- main.tex | 4 ++-- summaries/VASOMOSAIC.tex | 4 ++-- summaries/ahead-project.tex | 4 ++-- summaries/brainhack-cloud.tex | 6 +++--- summaries/datalad-catalog.tex | 4 ++-- summaries/datalad-dataverse.tex | 2 +- summaries/exploding_brains.tex | 2 +- summaries/flux.tex | 6 +++--- summaries/hyppomriqc.tex | 10 +++++----- summaries/metadata-community.tex | 2 +- summaries/narps-open-pipelines.tex | 2 +- summaries/neurocausal.tex | 4 ++-- summaries/physiopy-documentation.tex | 4 ++-- summaries/rba.tex | 12 ++++++------ 14 files changed, 33 insertions(+), 33 deletions(-) diff --git a/main.tex b/main.tex index 31f539e..a2da02f 100644 --- a/main.tex +++ b/main.tex @@ -291,7 +291,7 @@ \section*{Introduction} The Organisation of Human Brain Mapping BrainHack (shortened to OHBM Brainhack in the article) is a yearly satellite event of the main OHBM meeting, organised by the Open Science Special Interest Group following -the model of Brainhack hackathons \citep{Gau2021}. +the model of Brainhack hackathons\citep{Gau2021}. Where other hackathons set up a competitive environment based on outperforming other participants' projects, Brainhacks fosters a collaborative environment in which participants can freely collaborate @@ -503,7 +503,7 @@ \section{Conclusion and future directions} The organisation managed to provide a positive onsite environment, aiming to allow participants to self-organise in the spirit of the -Brainhack \citep{Gau2021}, with plenty of moral - and physical - support. +Brainhack\citep{Gau2021}, with plenty of moral - and physical - support. 
The technical setup, based on heavy automatisation flow to allow project submission to be streamlined, was a fundamental help to the organisation diff --git a/summaries/VASOMOSAIC.tex b/summaries/VASOMOSAIC.tex index 071db4b..55e6f1e 100644 --- a/summaries/VASOMOSAIC.tex +++ b/summaries/VASOMOSAIC.tex @@ -11,7 +11,7 @@ \subsection{MOSAIC for VASO fMRI} \"Omer Faruk G\"ulban, % Benedikt A. Poser} -Vascular Space Occupancy (VASO) is a functional magnetic resonance imaging (fMRI) method that is used for high-resolution cortical layer-specific imaging \citep{Huber2021a}. Currently, the most popular sequence for VASO at modern SIEMENS scanners is the one by \textcite{Stirnberg2021a} from the DZNE in Bonn, which is employed at more than 30 research labs worldwide. This sequence concomitantly acquires fMRI BOLD and blood volume signals. In the SIEMENS' reconstruction pipeline, these two complementary fMRI contrasts are mixed together within the same time series, making the outputs counter-intuitive for users. Specifically: +Vascular Space Occupancy (VASO) is a functional magnetic resonance imaging (fMRI) method that is used for high-resolution cortical layer-specific imaging\citep{Huber2021a}. Currently, the most popular sequence for VASO on modern SIEMENS scanners is the one by \textcite{Stirnberg2021a} from the DZNE in Bonn, which is employed at more than 30 research labs worldwide. This sequence concomitantly acquires fMRI BOLD and blood volume signals. In the SIEMENS reconstruction pipeline, these two complementary fMRI contrasts are mixed together within the same time series, making the outputs counter-intuitive for users. Specifically: \begin{itemize} \item The 'raw' NIfTI converted time-series are not BIDS compatible (see \href{https://github.com/bids-standard/bids-specification/issues/1001}{https://github.com/bids-standard/bids-specification/issues/1001}). 
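The contrast mixing described above can be illustrated with a toy sketch. This is not the ICE functor's actual logic (which runs inside the SIEMENS reconstruction); it only assumes, for illustration, that VASO and BOLD volumes alternate along the time axis, and the function name `split_vaso_bold` is hypothetical.

```python
import numpy as np

def split_vaso_bold(timeseries):
    """Split a 4D series (x, y, z, t) whose volumes alternate between the
    two contrasts into separate VASO and BOLD series. Which contrast comes
    first is an assumption of this sketch, not a property of the sequence."""
    vaso = timeseries[..., 0::2]  # even-indexed volumes
    bold = timeseries[..., 1::2]  # odd-indexed volumes
    return vaso, bold

# Toy example: 10 mixed volumes -> 5 VASO + 5 BOLD
mixed = np.random.rand(4, 4, 4, 10)
vaso, bold = split_vaso_bold(mixed)
print(vaso.shape, bold.shape)  # (4, 4, 4, 5) (4, 4, 4, 5)
```

The point of the functor is to perform this kind of sorting inside the scanner reconstruction, so users never have to de-interleave the series themselves.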
@@ -21,7 +21,7 @@ \subsection{MOSAIC for VASO fMRI} Workarounds with 3D distortion correction, results in interpolation artifacts. Alternative workarounds without MOSAIC decorators result in unnecessarily large data sizes. -In the previous Brainhack \citep{Gau2021}, we extended the existing 3D-MOSAIC functor that was previously developed by Benedikt Poser and Philipp Ehses. This functor had been previously used to sort volumes of images by dimensions of echo-times, by RF-channels, and by magnitude and phase signals. In this Brainhack, we successfully extended and validated this functor to also support the dimensionality of SETs (that is representing BOLD and VASO contrast). +In the previous Brainhack\citep{Gau2021}, we extended the 3D-MOSAIC functor originally developed by Benedikt Poser and Philipp Ehses. This functor had previously been used to sort volumes of images by echo time, by RF channel, and by magnitude and phase signals. In this Brainhack, we successfully extended and validated this functor to also support the dimensionality of SETs (that is, representing the BOLD and VASO contrasts). We are happy to share the compiled SIEMENS ICE (Image Calculation Environment) functor that does this sorting. Current VASO users, who want to upgrade their reconstruction pipeline to get the MOSAIC sorting feature too, can reach out to Renzo Huber (RenzoHuber@gmail.com) or R\"udiger Stirnberg (Ruediger.Stirnberg@dzne.de). diff --git a/summaries/ahead-project.tex b/summaries/ahead-project.tex index 0cc6844..8bd432a 100644 --- a/summaries/ahead-project.tex +++ b/summaries/ahead-project.tex @@ -17,11 +17,11 @@ \subsubsection{Introduction} \subsubsection{Results} -Visualization and annotation of large neuroimaging data sets can be challenging, in particular for collaborative data exploration. 
Here we tested two different infrastructures: BrainBox \url{https://brainbox.pasteur.fr/}, a web-based visualization and annotation tool for collaborative manual delineation of brain MRI data, see e.g. \citep{heuer_evolution_2019}, and Dandi Archive \url{https://dandiarchive.org/}, an online repository of microscopy data with links to Neuroglancer \url{https://github.com/google/neuroglancer}. While Brainbox could not handle the high resolution data well, Neuroglancer visualization was successful after conversion to the Zarr microscopy format (\Cref{fig:ahead}A). +Visualization and annotation of large neuroimaging data sets can be challenging, in particular for collaborative data exploration. Here we tested two different infrastructures: BrainBox \url{https://brainbox.pasteur.fr/}, a web-based visualization and annotation tool for collaborative manual delineation of brain MRI data, see e.g.\citep{heuer_evolution_2019}, and Dandi Archive \url{https://dandiarchive.org/}, an online repository of microscopy data with links to Neuroglancer \url{https://github.com/google/neuroglancer}. While BrainBox could not handle the high-resolution data well, Neuroglancer visualization was successful after conversion to the Zarr microscopy format (\Cref{fig:ahead}A). To help users explore the original high-resolution microscopy sections, we also built a python notebook to automatically query the stains around a given MNI coordinate using the Nighres toolbox~\citep{huntenburg_nighres_2018} (\Cref{fig:ahead}B). -For the cortical profile analysis we restricted our analysis on S1 (BA3b) as a part of the somato-motor area from one hemisphere of an individual human brain. S1 is rather thin (\(\sim\)2mm) and it has a highly myelinated layer 4 (see arrow \Cref{fig:ahead}C). In a future step, we are aiming to characterize differences between S1 (BA3b) and M1 (BA4). 
For now, we used the MRI-quantitative-R1 contrast to define, segment the region of interest and compute cortical depth measurement. In ITK-SNAP \citep{Yushkevich2006} we defined the somato-motor area by creating a spherical mask (radius 16.35mm) around the ‘hand knob’ in M1. To improve the intensity homogeneity of the qMRI-R1 images, we ran a bias field correction (N4BiasFieldCorrection, \citep{Cox1996}). Tissue segmentation was restricted to S1 and was obtained by combining four approaches: (i) fsl-fast \citep{Smith2004} for initial tissues probability map, (ii) semi-automatic histogram fitting in ITK-SNAP, (iii) Segmentator \citep{Gulban2018}, and (iv) manual editing. We used the LN2\_LAYERS program from LAYNII open source software \citep{Huber2021} to compute the equi-volume cortical depth measurements for the gray matter. Finally, we evaluated cortical depth profiles for three quantitative MRI contrasts (R1, R2, proton density) and three microscopy contrasts (thionin, bieloschowsky, parvalbumin) by computing a voxel-wise 2D histogram of image intensity (\Cref{fig:ahead}C). Some challenges are indicated by arrows 2 and 3 in the lower part of \Cref{fig:ahead}C. +For the cortical profile analysis we restricted our analysis to S1 (BA3b) as a part of the somato-motor area from one hemisphere of an individual human brain. S1 is rather thin (\(\sim\)2mm) and it has a highly myelinated layer 4 (see arrow \Cref{fig:ahead}C). In a future step, we are aiming to characterize differences between S1 (BA3b) and M1 (BA4). For now, we used the MRI-quantitative-R1 contrast to define and segment the region of interest and to compute cortical depth measurements. In ITK-SNAP\citep{Yushkevich2006} we defined the somato-motor area by creating a spherical mask (radius 16.35mm) around the ‘hand knob’ in M1. To improve the intensity homogeneity of the qMRI-R1 images, we ran a bias field correction (N4BiasFieldCorrection,\citep{Cox1996}). 
Tissue segmentation was restricted to S1 and was obtained by combining four approaches: (i) fsl-fast\citep{Smith2004} for initial tissue probability maps, (ii) semi-automatic histogram fitting in ITK-SNAP, (iii) Segmentator\citep{Gulban2018}, and (iv) manual editing. We used the LN2\_LAYERS program from LAYNII open source software\citep{Huber2021} to compute the equi-volume cortical depth measurements for the gray matter. Finally, we evaluated cortical depth profiles for three quantitative MRI contrasts (R1, R2, proton density) and three microscopy contrasts (thionin, bieloschowsky, parvalbumin) by computing a voxel-wise 2D histogram of image intensity (\Cref{fig:ahead}C). Some challenges are indicated by arrows 2 and 3 in the lower part of \Cref{fig:ahead}C. From this Brainhack project, we conclude that the richness of the data set must be exploited from multiple points of view, from enhancing the integration of MRI with microscopy data in visualization software to providing optimized multi-contrast and multi-modality data analysis pipeline for high-resolution brain regions. diff --git a/summaries/brainhack-cloud.tex b/summaries/brainhack-cloud.tex index 25371cf..fa56496 100644 --- a/summaries/brainhack-cloud.tex +++ b/summaries/brainhack-cloud.tex @@ -11,11 +11,11 @@ \subsection{Brainhack Cloud} Samuel Guay, % Johanna Bayer} -Today’s neuroscientific research deals with vast amounts of electrophysiological, neuroimaging and behavioural data. The progress in the field is enabled by the widespread availability of powerful computing and storage resources. Cloud computing in particular offers the opportunity to flexibly scale resources and it enables global collaboration across institutions. However, cloud computing is currently not widely used in the neuroscience field, although it could provide important scientific, economical, and environmental gains considering its effect in collaboration and sustainability \citep{apon2015, OracleSustainabilty}. 
One problem is the availability of cloud resources for researchers, because Universities commonly only provide on-premise high performance computing resources. The second problem is that many researchers lack the knowledge on how to efficiently use cloud resources. This project aims to address both problems by providing free access to cloud resources for the brain imaging community and by providing targeted training and support. +Today’s neuroscientific research deals with vast amounts of electrophysiological, neuroimaging and behavioural data. The progress in the field is enabled by the widespread availability of powerful computing and storage resources. Cloud computing in particular offers the opportunity to flexibly scale resources and it enables global collaboration across institutions. However, cloud computing is currently not widely used in the neuroscience field, although it could provide important scientific, economic, and environmental gains considering its effects on collaboration and sustainability\citep{apon2015, OracleSustainabilty}. One problem is the availability of cloud resources for researchers, because universities commonly only provide on-premise high-performance computing resources. The second problem is that many researchers lack the knowledge of how to use cloud resources efficiently. This project aims to address both problems by providing free access to cloud resources for the brain imaging community and by providing targeted training and support. -A team of brainhack volunteers (https://brainhack.org/brainhack\_cloud/admins/team/) applied for Oracle Cloud Credits to support open-source projects in and around brainhack with cloud resources. The project was generously funded by Oracle Cloud for Research \citep{OracleResearch} with \$230,000.00 AUD from the 29th of January 2022 until the 28th of January 2024. 
To facilitate the uptake of cloud computing in the field, the team built several resources (https://brainhack.org/brainhack\_cloud/tutorials/) to lower the entry barriers for members of the Brainhack community. +A team of brainhack volunteers (https://brainhack.org/brainhack\_cloud/admins/team/) applied for Oracle Cloud Credits to support open-source projects in and around brainhack with cloud resources. The project was generously funded by Oracle Cloud for Research\citep{OracleResearch} with \$230,000.00 AUD from the 29th of January 2022 until the 28th of January 2024. To facilitate the uptake of cloud computing in the field, the team built several resources (https://brainhack.org/brainhack\_cloud/tutorials/) to lower the entry barriers for members of the Brainhack community. -During the 2022 Brainhack, the team gave a presentation to share the capabilities that cloud computing offers to the Brainhack community, how they can place their resource requests and where they can get help. In total 11 projects were onboarded to the cloud and supported in their specific use cases: One team utilised the latest GPU architecture to take part in the Anatomical Tracings of Lesions After Stroke Grand Challenge. Others developed continuous integration tests for their tools using for example a full Slurm HPC cluster in the cloud to test how their tool behaves in such an environment. Another group deployed the Neurodesk.org \citep{NeuroDesk} project on a Kubernetes cluster to make it available for a student cohort to learn about neuroimage processing and to get access to all neuroimaging tools via the browser. All projects will have access to these cloud resources until 2024 and we are continuously onboarding new projects onto the cloud (https://brainhack.org/brainhack\_cloud/docs/request/). 
+During the 2022 Brainhack, the team gave a presentation to share the capabilities that cloud computing offers to the Brainhack community, how they can place their resource requests and where they can get help. In total 11 projects were onboarded to the cloud and supported in their specific use cases: One team utilised the latest GPU architecture to take part in the Anatomical Tracings of Lesions After Stroke Grand Challenge. Others developed continuous integration tests for their tools using for example a full Slurm HPC cluster in the cloud to test how their tool behaves in such an environment. Another group deployed the Neurodesk.org\citep{NeuroDesk} project on a Kubernetes cluster to make it available for a student cohort to learn about neuroimage processing and to get access to all neuroimaging tools via the browser. All projects will have access to these cloud resources until 2024 and we are continuously onboarding new projects onto the cloud (https://brainhack.org/brainhack\_cloud/docs/request/). The Brainhack Cloud team plans to run a series of training modules in various Brainhack events throughout the year to reach researchers from various backgrounds and increase their familiarity with the resources provided for the community while providing free and fair access to the computational resources. The training modules will cover how to use and access computing and storage resources (e.g., generating SSH keys), to more advanced levels covering the use of cloud native technology like software containers (e.g., Docker/Singularity), container orchestration (e.g., Kubernetes), object storage (e.g, S3), and infrastructure as code (e.g., Terraform). diff --git a/summaries/datalad-catalog.tex b/summaries/datalad-catalog.tex index 55775ee..9e01e3f 100644 --- a/summaries/datalad-catalog.tex +++ b/summaries/datalad-catalog.tex @@ -17,11 +17,11 @@ \subsection{DataLad Catalog} Remi Gau, % Yaroslav O. 
Halchenko} -The importance and benefits of making research data Findable, Accessible, Interoperable, and Reusable are clear \citep{Wilkinson2016}. But of equal importance is our ethical and legal obligations to protect the personal data privacy of research participants. So we are struck with this apparent contradiction: how can we share our data openly…yet keep it secure and protected? +The importance and benefits of making research data Findable, Accessible, Interoperable, and Reusable are clear\citep{Wilkinson2016}. But of equal importance are our ethical and legal obligations to protect the personal data privacy of research participants. So we are faced with an apparent contradiction: how can we share our data openly…yet keep it secure and protected? To address this challenge: structured, linked, and machine-readable metadata presents a powerful opportunity. Metadata provides not only high-level information about our research data (such as study and data acquisition parameters) but also the descriptive aspects of each file in the dataset: such as file paths, sizes, and formats. With this metadata, we can create an abstract representation of the full dataset that is separate from the actual data content. This means that the content can be stored securely, while we openly share the metadata to make our work more FAIR. -In practice, the distributed data management system DataLad \citep{Halchenko2021} and its extensions for metadata handling and catalog generation are capable of delivering such solutions. \texttt{datalad} (github.com/datalad/datalad) can be used for decentralised management of data as lightweight, portable and extensible representations. \texttt{datalad-metalad} (github.com/datalad/datalad-metalad) can extract structured high- and low-level metadata and associate it with these datasets or with individual files. 
And at the end of the workflow, \texttt{datalad-catalog} (\url{github.com/datalad/datalad-catalog}) can turn the structured metadata into a user-friendly data browser. +In practice, the distributed data management system DataLad\citep{Halchenko2021} and its extensions for metadata handling and catalog generation are capable of delivering such solutions. \texttt{datalad} (github.com/datalad/datalad) can be used for decentralised management of data as lightweight, portable and extensible representations. \texttt{datalad-metalad} (github.com/datalad/datalad-metalad) can extract structured high- and low-level metadata and associate it with these datasets or with individual files. And at the end of the workflow, \texttt{datalad-catalog} (\url{github.com/datalad/datalad-catalog}) can turn the structured metadata into a user-friendly data browser. This hackathon project focused on the first round of user testing of the alpha version of \texttt{datalad-catalog}, by creating the first ever user-generated catalog (\url{https://jkosciessa.github.io/datalad_cat_test}). Further results included a string of new issues focusing on improving user experience, detailed notes on how to generate a catalog from scratch, and code additions to allow the loading of local web-assets so that any generated catalog can also be viewed offline. diff --git a/summaries/datalad-dataverse.tex b/summaries/datalad-dataverse.tex index 0b4335f..0c3c7d9 100644 --- a/summaries/datalad-dataverse.tex +++ b/summaries/datalad-dataverse.tex @@ -19,7 +19,7 @@ \subsection{DataLad-Dataverse integration} Michael Hanke, % Nadine Spychala} -The FAIR principles \citep{Wilkinson2016} advocate to ensure and increase the Findability, Accessibility, Interoperability, and Reusability of research data in order to maximize their impact. Many open source software tools and services facilitate this aim. Among them is the Dataverse project \citep{King2007}. 
Dataverse is open source software for storing and sharing research data, providing technical means for public distribution and archival of digital research data, and their annotation with structured metadata. It is employed by dozens of private or public institutions worldwide for research data management and data publication. DataLad \citep{Halchenko2021}, similarly, is an open source tool for data management and data publication. It provides Git- and git-annex based data versioning, provenance tracking, and decentral data distribution as its core features. One of its central development drivers is to provide streamlined interoperability with popular data hosting services to both simplify and robustify data publication and data consumption in a decentralized research data management system \citep{Hanke2021}. Past developments include integrations with the open science framework \citep{Hanke2020} or webdav-based services such as sciebo, nextcloud, or the European Open Science Cloud \citep{Halchenko2022}. +The FAIR principles\citep{Wilkinson2016} advocate ensuring and increasing the Findability, Accessibility, Interoperability, and Reusability of research data in order to maximize their impact. Many open source software tools and services facilitate this aim. Among them is the Dataverse project\citep{King2007}. Dataverse is open source software for storing and sharing research data, providing technical means for public distribution and archival of digital research data, and their annotation with structured metadata. It is employed by dozens of private or public institutions worldwide for research data management and data publication. DataLad\citep{Halchenko2021}, similarly, is an open source tool for data management and data publication. It provides Git- and git-annex based data versioning, provenance tracking, and decentralized data distribution as its core features. 
One of its central development drivers is to provide streamlined interoperability with popular data hosting services to both simplify and robustify data publication and data consumption in a decentralized research data management system\citep{Hanke2021}. Past developments include integrations with the Open Science Framework\citep{Hanke2020} or WebDAV-based services such as sciebo, nextcloud, or the European Open Science Cloud\citep{Halchenko2022}. In this hackathon project, we created a proof-of-principle integration of DataLad with Dataverse in the form of the Python package \texttt{datalad-dataverse} (\url{github.com/datalad/datalad-dataverse}). From a technical perspective, main achievements include the implementation of a git-annex special remote protocol for communicating with Dataverse instances, a new \texttt{create-sibling-dataverse} command that is added to the DataLad command-line and Python API by the \texttt{datalad-dataverse} extension, and standard research software engineering aspects of scientific software such as unit tests, continuous integration, and documentation. diff --git a/summaries/exploding_brains.tex b/summaries/exploding_brains.tex index f2af8d9..7e7ae77 100644 --- a/summaries/exploding_brains.tex +++ b/summaries/exploding_brains.tex @@ -7,7 +7,7 @@ \subsection{Exploding brains in Julia} \authors{\"Omer Faruk G\"ulban, % Leonardo Muller-Rodriguez} -Particle simulations are used to generate visual effects (in movies, games, etc.). In this project, we explore how we can use magnetic resonance imaging (MRI) data to generate interesting visual effects by using (2D) particle simulations. We highlight that, historically, we were first inspired by a detailed blog post (\texttt{\url{https://nialltl.neocities.org/articles/mpm_guide.html}}) on the material point method \citep{Jiang1965, Love2006, Stomakhin2013a}. Our aim in Brainhack 2022 is to convert our previous progress in Python programming language to Julia. 
The reason why we have moved to Julia language is because it has convenient parallelization methods that are easy to implement while giving immediately speeding-up the particle simulations. +Particle simulations are used to generate visual effects (in movies, games, etc.). In this project, we explore how we can use magnetic resonance imaging (MRI) data to generate interesting visual effects by using (2D) particle simulations. We highlight that, historically, we were first inspired by a detailed blog post (\texttt{\url{https://nialltl.neocities.org/articles/mpm_guide.html}}) on the material point method\citep{Jiang1965, Love2006, Stomakhin2013a}. Our aim at Brainhack 2022 was to convert our previous progress in the Python programming language to Julia. We moved to Julia because it offers convenient, easy-to-implement parallelization methods that immediately speed up the particle simulations. ----------------------------------- diff --git a/summaries/flux.tex b/summaries/flux.tex index 4d77523..35c01ab 100644 --- a/summaries/flux.tex +++ b/summaries/flux.tex @@ -8,13 +8,13 @@ \subsection{FLUX: A pipeline for MEG analysis and beyond} Tara Ghafari, % Ole Jensen} -FLUX \citep{Ferrante2022} is an open-source pipeline for analysing magnetoencephalography (MEG) data. There are several toolboxes developed by the community to analyse MEG data. While these toolboxes provide a wealth of options for analyses, the many degrees of freedom pose a challenge for reproducible research. The aim of FLUX id to make the analyses steps and setting explicit. For instance, FLUX includes the state-of-the-art suggestions for noise cancellation as well as source modelling including pre-whitening and handling of rank-deficient data. +FLUX\citep{Ferrante2022} is an open-source pipeline for analysing magnetoencephalography (MEG) data. There are several toolboxes developed by the community to analyse MEG data. 
While these toolboxes provide a wealth of options for analyses, the many degrees of freedom pose a challenge for reproducible research. The aim of FLUX is to make the analysis steps and settings explicit. For instance, FLUX includes state-of-the-art suggestions for noise cancellation as well as source modelling including pre-whitening and handling of rank-deficient data. -So far, the FLUX pipeline has been developed for MNE-Python \citep{Gramfort2014} and FieldTrip \citep{Oostenveld2011} with a focus on the MEGIN/Elekta system and it includes the associated documents as well as codes. +So far, the FLUX pipeline has been developed for MNE-Python\citep{Gramfort2014} and FieldTrip\citep{Oostenveld2011} with a focus on the MEGIN/Elekta system, and it includes the associated documentation as well as code. The long-term plan for this pipeline is to make it more flexible and versatile to use. One key motivation for this is to facilitate open science with the larger aim of fostering the replicability of MEG research. These goals can be achieved in mid-term objectives, such as making the FLUX pipeline fully BIDS compatible and more automated. Another mid-term goal is to containerize the FLUX pipeline and the associated dependencies making it easier to use. Moreover, expanding the applications of this pipeline to other systems like MEG CTF, Optically Pumped Magnetometer (OPM) and EEG will be another crucial step in making FLUX a more generalized neurophysiological data analysis pipeline. -During the 2022 Brainhack, the team focused on incorporating the BIDS standard into the analysis pipeline using MNE_BIDS \citep{Appelhoff2019}. Consequently, an updated version of FLUX was released after the Brainhack meeting. +During the 2022 Brainhack, the team focused on incorporating the BIDS standard into the analysis pipeline using MNE\_BIDS\citep{Appelhoff2019}. Consequently, an updated version of FLUX was released after the Brainhack meeting. 
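As a minimal illustration of what BIDS compatibility means at the file level, the sketch below assembles a BIDS-style MEG file name from its entities. In practice MNE-BIDS handles naming (and much more) through its own API; the helper below is a hypothetical illustration of the convention only.

```python
def bids_meg_basename(subject, session, task, run, extension=".fif"):
    """Build a BIDS-style MEG file name from key entities.
    Hypothetical helper for illustration; real pipelines should rely
    on MNE-BIDS rather than hand-rolled string formatting."""
    entities = [
        f"sub-{subject}",
        f"ses-{session}",
        f"task-{task}",
        f"run-{run:02d}",  # BIDS indices are commonly zero-padded
    ]
    return "_".join(entities) + "_meg" + extension

print(bids_meg_basename("01", "meg01", "rest", 1))
# sub-01_ses-meg01_task-rest_run-01_meg.fif
```

Encoding these entities consistently is what makes the pipeline's outputs discoverable by any BIDS-aware tool.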
\end{document} diff --git a/summaries/hyppomriqc.tex b/summaries/hyppomriqc.tex index 75a3d1b..af43397 100644 --- a/summaries/hyppomriqc.tex +++ b/summaries/hyppomriqc.tex @@ -10,13 +10,13 @@ \subsection{Evaluating discrepancies in hippocampal segmentation protocols using \subsubsection{Introduction} -Neuroimaging study results can vary significantly depending on the processing pipelines utilized by researchers to run their analyses, contributing to reproducibility issues. Researchers in the field are often faced with multiple choices of pipelines featuring similar capabilities, which may yield different results when applied to the same data \citep{carp2012plurality, kennedy2019everything}. While these reproducibility issues are increasingly well-documented in the literature, there is little existing research explaining why this inter-pipeline variability occurs or the factors contributing to it. In this project, we set out to understand what data-related factors impact the discrepancy between popular neuroimaging processing pipelines. +Neuroimaging study results can vary significantly depending on the processing pipelines utilized by researchers to run their analyses, contributing to reproducibility issues. Researchers in the field are often faced with multiple choices of pipelines featuring similar capabilities, which may yield different results when applied to the same data\citep{carp2012plurality, kennedy2019everything}. While these reproducibility issues are increasingly well-documented in the literature, there is little existing research explaining why this inter-pipeline variability occurs or the factors contributing to it. In this project, we set out to understand what data-related factors impact the discrepancy between popular neuroimaging processing pipelines. 
\subsubsection{Method} -The hippocampus is a structure commonly associated with memory function and dementia, and the left hippocampus is proposed to have higher discriminative power for identifying the progression of Alzheimer’s disease than the right hippocampus in multiple studies \citep{schuff2009mri}. We obtained left hippocampal volumes using three widely-used neuroimaging pipelines: FSL 5.0.9 \citep{patenaude2011bayesian}, FreeSurfer 6.0.0 \citep{fischl2012freesurfer}, and ASHS 2.0.0 PMC‐T1 atlas \citep{xie2019automated}. -We ran the three pipelines on T1 images from 15 subjects from the Prevent-AD Alzheimer’s dataset \citep{tremblay2021open}, composed of cognitively healthy participants between the ages of 55-88 years old that are at risk of developing Alzheimer's Disease. -We ran MRIQC \citep{esteban2017mriqc} - a tool for performing automatic quality control and extracting quality measures from MRI scans - on the 15 T1 scans and obtained Image Quality Metrics (IQMs) from them. We then found the correlations between the IQMs and the pairwise inter-pipeline discrepancy of the left hippocampal volumes for each T1 scan. +The hippocampus is a structure commonly associated with memory function and dementia, and the left hippocampus is proposed to have higher discriminative power for identifying the progression of Alzheimer’s disease than the right hippocampus in multiple studies\citep{schuff2009mri}. We obtained left hippocampal volumes using three widely-used neuroimaging pipelines: FSL 5.0.9\citep{patenaude2011bayesian}, FreeSurfer 6.0.0\citep{fischl2012freesurfer}, and ASHS 2.0.0 PMC‐T1 atlas\citep{xie2019automated}. +We ran the three pipelines on T1 images from 15 subjects from the Prevent-AD Alzheimer’s dataset\citep{tremblay2021open}, composed of cognitively healthy participants between the ages of 55 and 88 who are at risk of developing Alzheimer's disease. 
+We ran MRIQC\citep{esteban2017mriqc} - a tool for performing automatic quality control and extracting quality measures from MRI scans - on the 15 T1 scans and obtained Image Quality Metrics (IQMs) from them. We then found the correlations between the IQMs and the pairwise inter-pipeline discrepancy of the left hippocampal volumes for each T1 scan. \begin{figure}[!h] \centering @@ -28,7 +28,7 @@ \subsubsection{Method} \subsubsection{Results} -We found that for The FSL-FreeSurfer and FSL-ASHs discrepancies, MRIQC’s EFC measure produced the highest correlation, of 0.69 and 0.64, respectively. The EFC “uses the Shannon entropy of voxel intensities as an indication of ghosting and blurring induced by head motion” \citep{MRIQCdoc}. No such correlations were found for the ASHS-FreeSurfer discrepancies. \Cref{fig:MRIQC-fig} shows a scatter plot of the discrepancies in left hippocampal volume and EFC IQM for each pipeline pairing. The preliminary results suggest that FSL’s hippocampal segmentation may be sensitive to head motion in T1 scans, leading to larger result discrepancies, but we require larger sample sizes to make meaningful conclusions. The code for our project can be found on GitHub at \href{https://github.com/jacobsanz97/Pipeline-Discrepancy-Exploration}{this link}. +We found that for the FSL-FreeSurfer and FSL-ASHS discrepancies, MRIQC’s EFC measure produced the highest correlations of 0.69 and 0.64, respectively. The EFC “uses the Shannon entropy of voxel intensities as an indication of ghosting and blurring induced by head motion”\citep{MRIQCdoc}. No such correlations were found for the ASHS-FreeSurfer discrepancies. \Cref{fig:MRIQC-fig} shows a scatter plot of the discrepancies in left hippocampal volume and EFC IQM for each pipeline pairing.
The preliminary results suggest that FSL’s hippocampal segmentation may be sensitive to head motion in T1 scans, leading to larger result discrepancies, but we require larger sample sizes to make meaningful conclusions. The code for our project can be found on GitHub at \href{https://github.com/jacobsanz97/Pipeline-Discrepancy-Exploration}{this link}. \subsubsection{Conclusion and Next Steps} diff --git a/summaries/metadata-community.tex b/summaries/metadata-community.tex index 5c4267c..db4c5a7 100644 --- a/summaries/metadata-community.tex +++ b/summaries/metadata-community.tex @@ -8,7 +8,7 @@ \subsection{Accelerating adoption of metadata standards for dataset descriptors} Felix Hoffstaedter, % Sebastian Urchs} -Thanks to efforts of the neuroimaging community, not least the brainhack community \citep{Gau2021}, datasets are increasingly shared on open data repositories like OpenNeuro \citep{Markiewicz2021-bf} using standards like BIDS \citep{Gorgolewski2016} for interoperability. As the amount of datasets and data repositories increases, we need to find better ways to search across them for samples that fit our research questions. In the same way that the wide adoption of BIDS makes data sharing and tool development easier, the wide adoption of consistent vocabulary for demographic, clinical and other sample metadata would make data search and integration easier. We imagine a future platform that allows cross dataset search and the pooling of data across studies. Efforts to establish such metadata standards have had some success in other communities \citep{Field2008-kw, Stang2010-nl}, but adoption in the neuroscience community so far has been slow. We have used the space of the brainhack to discuss which challenges are hindering wide adoption of metadata standards in the neuroimaging community and what could be done to accelerate it. 
+Thanks to efforts of the neuroimaging community, not least the brainhack community\citep{Gau2021}, datasets are increasingly shared on open data repositories like OpenNeuro\citep{Markiewicz2021-bf} using standards like BIDS\citep{Gorgolewski2016} for interoperability. As the amount of datasets and data repositories increases, we need to find better ways to search across them for samples that fit our research questions. In the same way that the wide adoption of BIDS makes data sharing and tool development easier, the wide adoption of consistent vocabulary for demographic, clinical and other sample metadata would make data search and integration easier. We imagine a future platform that allows cross dataset search and the pooling of data across studies. Efforts to establish such metadata standards have had some success in other communities\citep{Field2008-kw, Stang2010-nl}, but adoption in the neuroscience community so far has been slow. We have used the space of the brainhack to discuss which challenges are hindering wide adoption of metadata standards in the neuroimaging community and what could be done to accelerate it. We believe that an important social challenge for the wider adoption of metadata standards is that it is hard to demonstrate their value without a practical use case. We therefore think that rather than focusing on building better standards, in the short term we need to prioritize small, but functional demonstrations that help convey the value of these standards and focus on usability and ease of adoption. Having consistent names and format for even a few metadata variables like age, sex, and diagnosis already allows for interoperability and search across datasets. Selecting a single vocabulary that must be used for annotating e.g. diagnosis necessarily lacks some precision but avoids the need to align slightly different versions of the same terms. 
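The minimal harmonization discussed above - consistent names and terms for a handful of variables like age, sex, and diagnosis - can be illustrated with a toy mapping; the vocabulary, field names, and `harmonize` helper below are invented for illustration and are not an endorsed standard:

```python
# Hypothetical mapping from site-specific diagnosis labels to one
# controlled term; invented for illustration only.
DIAGNOSIS_VOCAB = {
    "ad": "Alzheimer's disease",
    "alzheimer": "Alzheimer's disease",
    "alzheimer's disease": "Alzheimer's disease",
    "hc": "healthy control",
    "control": "healthy control",
    "healthy": "healthy control",
}

def harmonize(record: dict) -> dict:
    """Normalize a participant record to shared variable names and terms."""
    return {
        "age": float(record.get("Age") or record.get("age_years")),
        "sex": record.get("Sex", record.get("sex", "")).upper()[:1],  # 'M'/'F'
        "diagnosis": DIAGNOSIS_VOCAB[record.get("Dx", "").strip().lower()],
    }

# Two records with slightly different conventions map to the same schema,
# which is what enables search across datasets.
print(harmonize({"Age": "71", "Sex": "female", "Dx": "AD"}))
print(harmonize({"age_years": "66", "sex": "M", "Dx": "control"}))
```

As the text notes, forcing slightly different terms onto one vocabulary loses some precision, but it is exactly what makes the two records above comparable.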
Accessible tools can be built to facilitate the annotation process of such a basic metadata standard. The best standard will be poorly adopted if there are no easy-to-use tools that implement it. Efforts like the neurobagel project (neurobagel.org/) are trying to implement this approach to demonstrate a simple working use case for cross dataset integration and search. Our goal is to use such simpler demonstrations to build awareness and create a community around the goal of consistent metadata adoption. diff --git a/summaries/narps-open-pipelines.tex b/summaries/narps-open-pipelines.tex index 1183f59..861f50d 100644 --- a/summaries/narps-open-pipelines.tex +++ b/summaries/narps-open-pipelines.tex @@ -13,7 +13,7 @@ \subsection{The NARPS Open Pipelines Project} The goal of the NARPS Open Pipelines Project is to provide a public codebase that reproduces the 70 pipelines chosen by the 70 teams of the NARPS study~\citep{botviniknezer2020}. The project is public and the code hosted on GitHub at~\url{https://github.com/Inria-Empenn/narps_open_pipelines}. -This project initially emerged from the idea of creating an open repository of fMRI data analysis pipelines (as used by researchers in the field) with the broader goal to study and better understand the impact of analytical variability. NARPS -- a many-analyst study in which 70 research teams were asked to analyze the same fMRI dataset with their favorite pipeline -- was identified as an ideal usecase as it provides a large array of pipelines created by different labs. In addition, all teams in NARPS provided extensive (textual) description of their pipelines using the COBIDAS~\citep{nichols2017} guidelines. All resulting statistic maps were shared on NeuroVault \citep{gorgolewski2015} and can be used to assess the success of the reproductions.
+This project initially emerged from the idea of creating an open repository of fMRI data analysis pipelines (as used by researchers in the field) with the broader goal to study and better understand the impact of analytical variability. NARPS -- a many-analyst study in which 70 research teams were asked to analyze the same fMRI dataset with their favorite pipeline -- was identified as an ideal use case as it provides a large array of pipelines created by different labs. In addition, all teams in NARPS provided extensive (textual) description of their pipelines using the COBIDAS~\citep{nichols2017} guidelines. All resulting statistic maps were shared on NeuroVault\citep{gorgolewski2015} and can be used to assess the success of the reproductions. At the OHBM Brainhack 2022, our goal was to improve the accessibility and reusability of the database, to facilitate new contributions and to reproduce more pipelines. We focused our efforts on the first two goals. By trying to install the computing environment of the database, contributors provided feedback on the instructions and on specific issues they faced during the installation. Two major improvements were made for the download of the necessary data: the original fMRI dataset and the original results (statistic maps stored in NeuroVault) were added as submodules to the GitHub repository. Finally, propositions were made to facilitate contributions: the possibility to use of the Giraffe toolbox~\citep{vanMourik2016} for contributors that are not familiar with NiPype~\citep{gorgolewski2017} and the creation of a standard template to reproduce a new pipeline.
diff --git a/summaries/neurocausal.tex b/summaries/neurocausal.tex index 3d6201c..5396ae5 100644 --- a/summaries/neurocausal.tex +++ b/summaries/neurocausal.tex @@ -13,11 +13,11 @@ \subsection{NeuroCausal: Development of an Open Source Platform for the Storage, Pedro Pinheiro-Chagas, % Valentina Borghesani} -Cognitive neuroscience has witnessed great progress since modern neuroimaging embraced an open science framework, with the adoption of shared principles \citep{Wilkinson2016}, standards \citep{Gorgolewski2016}, and ontologies \citep{poldrack_cognitive_2011}, as well as practices of meta-analysis\citep{dockes_neuroquery_2020, yarkoni_large-scale_2011} and data sharing \citep{gorgolewski2015}. However, while functional neuroimaging data provide correlational maps between cognitive functions and activated brain regions, its usefulness in determining causal link between specific brain regions and given behaviors or functions is disputed \citep{weber_functional_2010, siddiqi_causal_2022}. On the contrary, neuropsychological data enable causal inference, highlighting critical neural substrates and opening a unique window into the inner workings of the brain \citep{price_evolution_2018}. Unfortunately, the adoption of Open Science practices in clinical settings is hampered by several ethical, technical, economic, and political barriers, and as a result, open platforms enabling access to and sharing clinical (meta)data are scarce \citep{lariviere_enigma_2021}. +Cognitive neuroscience has witnessed great progress since modern neuroimaging embraced an open science framework, with the adoption of shared principles\citep{Wilkinson2016}, standards\citep{Gorgolewski2016}, and ontologies\citep{poldrack_cognitive_2011}, as well as practices of meta-analysis\citep{dockes_neuroquery_2020, yarkoni_large-scale_2011} and data sharing\citep{gorgolewski2015}. 
However, while functional neuroimaging data provide correlational maps between cognitive functions and activated brain regions, their usefulness in determining causal links between specific brain regions and given behaviors or functions is disputed\citep{weber_functional_2010, siddiqi_causal_2022}. On the contrary, neuropsychological data enable causal inference, highlighting critical neural substrates and opening a unique window into the inner workings of the brain\citep{price_evolution_2018}. Unfortunately, the adoption of Open Science practices in clinical settings is hampered by several ethical, technical, economic, and political barriers, and as a result, open platforms enabling access to and sharing clinical (meta)data are scarce\citep{lariviere_enigma_2021}. With our project, NeuroCausal (https://neurocausal.github.io/), we aim to build an online platform and community that allows open sharing, storage, and synthesis of clinical (meta) data crucial for the development of modern, transdiagnostic, accessible, and replicable (i.e., FAIR: Findability, Accessibility, Interoperability, and Reusability) neuropsychology. The project is organized into two infrastructural stages: first, published peer-reviewed papers will be scraped to collect already available (meta)data; second, our platform will allow direct uploading of clinical (de-identified) brain maps and their corresponding metadata. -The meta-analysis pipeline developed for the first stage of the project is inspired by and built upon the functionalities of NeuroQuery \citep{dockes_neuroquery_2020}, a successful large-scale neuroimaging meta-analytic platform.
The first stage is the development of the code base allowing (1) downloading and filtering of neuropsychological papers, (2) extraction of reported brain lesion locations and their conversion into a common reference space (3) extraction of clinical and behavioral symptoms and their translation into a common annotation scheme, (4) learning the causal mapping between the neural and neuropsychological information gathered. +The meta-analysis pipeline developed for the first stage of the project is inspired by and built upon the functionalities of NeuroQuery\citep{dockes_neuroquery_2020}, a successful large-scale neuroimaging meta-analytic platform. The first stage is the development of the code base allowing (1) downloading and filtering of neuropsychological papers, (2) extraction of reported brain lesion locations and their conversion into a common reference space, (3) extraction of clinical and behavioral symptoms and their translation into a common annotation scheme, and (4) learning the causal mapping between the neural and neuropsychological information gathered. The second stage of the study aims at creating an online platform that allows for the direct uploading of clinical brain maps and their corresponding metadata. The platform will provide a basic automated preprocessing and a data-quality check pipeline, ensuring that all the ethical norms regarding patient privacy are met. The platform will automatically extract and synthesize key data to ultimately create probabilistic maps synthesizing transdiagnostic information on symptom-structure mapping, which will be dynamically updated as more data are gathered.
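Step (2) above - converting reported lesion locations into a common reference space - reduces, in the simplest case, to applying an affine transform to the reported coordinates. A minimal sketch with a made-up affine (a real pipeline would estimate the transform by registering the patient scan to a template such as MNI152):

```python
import numpy as np

# Hypothetical 4x4 affine mapping native-space voxel indices (i, j, k)
# to common-space millimetre coordinates: 2 mm isotropic voxels with the
# origin offset so that voxel (45, 63, 36) lands at (0, 0, 0) mm.
affine = np.array([
    [2.0, 0.0, 0.0, -90.0],
    [0.0, 2.0, 0.0, -126.0],
    [0.0, 0.0, 2.0, -72.0],
    [0.0, 0.0, 0.0, 1.0],
])

def voxel_to_mm(ijk, affine):
    """Apply an affine to a voxel index triple, returning mm coordinates."""
    i, j, k = ijk
    x, y, z, _ = affine @ np.array([i, j, k, 1.0])
    return (x, y, z)

print(voxel_to_mm((45, 63, 36), affine))  # template origin in this toy affine
```

Once every reported location is expressed in the same millimetre space, lesions from different papers can be aggregated into the probabilistic maps described below.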
diff --git a/summaries/physiopy-documentation.tex b/summaries/physiopy-documentation.tex index f21011f..ce02afa 100644 --- a/summaries/physiopy-documentation.tex +++ b/summaries/physiopy-documentation.tex @@ -20,9 +20,9 @@ \subsection{Physiopy - Documentation of Physiological Signal Best Practices} \label{fig:physiopy_beforeafter} \end{figure*} -Physiological data provides a representation of a subject’s internal state with respect to peripheral measures (i.e., heart rate, respiratory rate, etc.). Recording physiological measures is key to gain understanding of sources of signal variance in neuroimaging data that arise from outside of the brain \citep{chen2020}. This has been particularly useful for functional magnetic resonance imaging (fMRI) research, improving fMRI time-series model accuracy, while also improving real-time methods to monitor subjects during scanning \citep{bulte2017, caballero-gaudes2017}. +Physiological data provides a representation of a subject’s internal state with respect to peripheral measures (i.e., heart rate, respiratory rate, etc.). Recording physiological measures is key to gain understanding of sources of signal variance in neuroimaging data that arise from outside of the brain\citep{chen2020}. This has been particularly useful for functional magnetic resonance imaging (fMRI) research, improving fMRI time-series model accuracy, while also improving real-time methods to monitor subjects during scanning\citep{bulte2017, caballero-gaudes2017}. -Physiopy (\url{https://github.com/physiopy}) is an open and collaborative community, formed around the promotion of physiological data collection and incorporation in neuroimaging studies. Physiopy is focused on two main objectives. The first is the community-based development of tools for fMRI-based physiological processing. 
At the moment, there are three toolboxes: \textit{phys2bids} (physiological data storage and conversion to BIDS format \citep{phys2bids}, \textit{peakdet} (physiological data processing), and \textit{phys2denoise} (fMRI denoising). The second objective is advancing the general knowledge of physiological data collection in fMRI by hosting open sessions to discuss best practices of physiological data acquisition, preprocessing, and analysis, and promoting community involvement. Physiopy maintains documentation with best practices guidelines stemming from these joint discussions and recent literature. +Physiopy (\url{https://github.com/physiopy}) is an open and collaborative community, formed around the promotion of physiological data collection and incorporation in neuroimaging studies. Physiopy is focused on two main objectives. The first is the community-based development of tools for fMRI-based physiological processing. At the moment, there are three toolboxes: \textit{phys2bids} (physiological data storage and conversion to BIDS format\citep{phys2bids}), \textit{peakdet} (physiological data processing), and \textit{phys2denoise} (fMRI denoising). The second objective is advancing the general knowledge of physiological data collection in fMRI by hosting open sessions to discuss best practices of physiological data acquisition, preprocessing, and analysis, and promoting community involvement. Physiopy maintains documentation with best practices guidelines stemming from these joint discussions and recent literature. At the OHBM 2022 Brainhack, we aimed to improve our community documentation by expanding on best practices documentation, and gathering libraries of complementary open source software. This provides new users resources for learning about the process of physiological collection as well as links to already available resources. The short-term goal for the Brainhack was to prepare a common platform (and home) for our documentation and repositories.
We prioritised fundamental upkeep and content expansion, adopting Markdown documents and GitHub hosting to minimise barriers for new contributors. Over the course of the Brainhack, and with the joint effort within three hubs (Glasgow, EMEA and Americas), we were able to improve the current community website by rethinking its structure and adding fundamental content relative to who we are, contributions, and updated best practices, such as creating home pages, easy to find and navigate contribution tabs, adding new information from community best practices discussions as well as links to relevant software and datasets. Additionally, we aggregated the information scattered across different repositories, allowing important information for both the community and new collaborators to be accessible in a single location. diff --git a/summaries/rba.tex b/summaries/rba.tex index 2aa014d..a1941b4 100644 --- a/summaries/rba.tex +++ b/summaries/rba.tex @@ -15,7 +15,7 @@ \subsubsection{Introduction} Human brain imaging data is massively multidimensional, yet current approaches to modelling functional brain responses entail the application of univariate inferences to each voxel separately. This leads to the multiple testing problem and unrealistic assumptions about the data such as artificial dichotomization (statistically significant or not) in result reporting. The traditional approach of massively univariate analysis assumes that no information is shared across the brain, effectively making a strong prior assumption of a uniform distribution of effect sizes, which is unrealistic given the connectivity of the human brain. The consequent requirement for multiple testing adjustments results in the \textit{calibration of statistical evidence} without considering the estimation of effect, leading to substantial information loss and an unnecessarily heavy penalty. 
-A more efficient approach to handling multiplicity focuses on the \textit{calibration of effect estimation} under a Bayesian multilevel modeling framework with a prior assumption of, for example, normality across space \citep{chenHandlingMultiplicityNeuroimaging2019}. The methodology has previously been implemented at the region level into the \texttt{AFNI} program \texttt{RBA} \citep{chen_sources_2022} using \texttt{Stan} through the \texttt{R} package \texttt{brms} \citep{burknerBrmsPackageBayesian2017}. We intend to achieve two goals in this project: +A more efficient approach to handling multiplicity focuses on the \textit{calibration of effect estimation} under a Bayesian multilevel modeling framework with a prior assumption of, for example, normality across space\citep{chenHandlingMultiplicityNeuroimaging2019}. The methodology has previously been implemented at the region level into the \texttt{AFNI} program \texttt{RBA}\citep{chen_sources_2022} using \texttt{Stan} through the \texttt{R} package \texttt{brms}\citep{burknerBrmsPackageBayesian2017}. We intend to achieve two goals in this project: \begin{enumerate}[label=(\roman*),nolistsep] \item To re-implement the methodology using PyMC to improve the performance and flexibility of the modeling approach. \item To explore the possibility of analyzing voxel-level data using the multilevel modeling approach. @@ -42,7 +42,7 @@ \subsubsection{Introduction} \label{fig:vox} \end{subfigure} -\caption{Validation of implementation using PyMC. (A) Posterior distributions of region-level behavior effects using \texttt{brms}. (B) Posterior distributions of region-level behavior effects using PyMC. (C) Posterior probabilities of the voxel-level effects being positive or negative, obtained using PyMC (plotted using Nilearn and overlaid in green with the NeuroQuery \citep{dockes_neuroquery_2020} map for the term ``emotional faces'').} +\caption{Validation of implementation using PyMC.
(A) Posterior distributions of region-level behavior effects using \texttt{brms}. (B) Posterior distributions of region-level behavior effects using PyMC. (C) Posterior probabilities of the voxel-level effects being positive or negative, obtained using PyMC (plotted using Nilearn and overlaid in green with the NeuroQuery\citep{dockes_neuroquery_2020} map for the term ``emotional faces'').} \label{fig:rba} \end{figure*} \subsubsection{Implementation using PyMC} @@ -65,15 +65,15 @@ \subsubsection{Implementation using PyMC} \noindent In the model, $\mu_{rs}$ and $\sigma$ are the mean effect and standard deviation of the $s$th subject at the $r$th region, $\alpha_0$ and $\alpha_1$ are the overall mean and slope effect across all regions and subjects, $\theta_{0r}$ and $\theta_{1r}$ are the mean and slope effect at the $r$th region, $\eta_s$ is the mean effect of the $s$th subject, $\boldsymbol{S}_{2\times 2}$ is the variance-covariance of the mean and slope effect at the $r$th region, and $\tau$ is the standard deviation of the $s$th subject's effect $\eta_s$.% and the likelihood. -We implemented this model using the PyMC probabilistic programming framework \citep{Salvatier2016}, and the BAyesian Model-Building Interface (BAMBI) \citep{capretto2020}. The latter is a high-level interface that allows for specification of multilevel models using the formula notation that is also adopted by \texttt{brms}. A notebook describing the implementation is available \href{https://github.com/crnolan/pyrba}{here}. Our PyMC implementation was successfully validated: as shown in \Cref{fig:sub1} and \Cref{fig:sub2}, the posterior distributions from the PyMC implementation matched very well with their counterparts from the \texttt{brms} output.% are plotted in \Cref{fig:rba}. The results show +We implemented this model using the PyMC probabilistic programming framework\citep{Salvatier2016}, and the BAyesian Model-Building Interface (BAMBI)\citep{capretto2020}.
The latter is a high-level interface that allows for specification of multilevel models using the formula notation that is also adopted by \texttt{brms}. A notebook describing the implementation is available \href{https://github.com/crnolan/pyrba}{here}. Our PyMC implementation was successfully validated: as shown in \Cref{fig:sub1} and \Cref{fig:sub2}, the posterior distributions from the PyMC implementation matched very well with their counterparts from the \texttt{brms} output.% are plotted in \Cref{fig:rba}. The results show \subsubsection{Extension of Bayesian multilevel modeling to voxel-level analysis} -After exploring the model on the region level, we wanted to see if recent computational and algorithmic advances allow us to employ the multilevel modeling framework on the voxel level as well. We obtained the OpenNeuro dataset \texttt{ds000117} \citep{wakeman_multi-subject_2015} from an experiment based on a face processing paradigm. Using \texttt{HALFpipe} \citep{waller_enigma_2022}, which is based on \texttt{fMRIPrep} \citep{esteban_fmriprep_2019}, the functional images were preprocessed with default settings and \emph{z}-statistic images were calculated for the contrast ``famous faces + unfamiliar faces versus 2 $\cdot$ scrambled faces''. +After exploring the model on the region level, we wanted to see if recent computational and algorithmic advances allow us to employ the multilevel modeling framework on the voxel level as well. We obtained the OpenNeuro dataset \texttt{ds000117}\citep{wakeman_multi-subject_2015} from an experiment based on a face processing paradigm. Using \texttt{HALFpipe}\citep{waller_enigma_2022}, which is based on \texttt{fMRIPrep}\citep{esteban_fmriprep_2019}, the functional images were preprocessed with default settings and \emph{z}-statistic images were calculated for the contrast ``famous faces + unfamiliar faces versus 2 $\cdot$ scrambled faces''. 
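The contrast named above combines per-condition effect estimates linearly. As a minimal illustration of how such a contrast is formed (the beta values below are synthetic, not estimates from ds000117):

```python
import numpy as np

# Synthetic per-condition effect estimates (betas) for one voxel.
conditions = ["famous", "unfamiliar", "scrambled"]
betas = np.array([1.2, 1.0, 0.4])

# "famous faces + unfamiliar faces versus 2 * scrambled faces":
# weights sum to zero, so the contrast compares faces against scrambled
# stimuli rather than testing overall activation.
contrast = np.array([1.0, 1.0, -2.0])
effect = contrast @ betas
print(f"contrast estimate: {effect:.2f}")
```

The resulting per-subject contrast values (here, \emph{z}-statistic images) are what enter the group-level multilevel model described next.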
-We applied the same modeling framework and PyMC code as for region-based analysis, but without the explanatory varaiable $x$ in the model (\Cref{eq:bml}). To reduce computational and memory complexity, the \emph{z}-statistic images were downsampled to an isotropic resolution of 5mm. Using the GPU-based \texttt{nuts\char`_numpyro} sampler \citep{phan_composable_2019} with default settings, we were able to draw 2,000 posterior samples of the mean effect parameter for each of the 14,752 voxels. Sampling four chains took 23 minutes on four Nvidia Tesla V100 GPUs. +We applied the same modeling framework and PyMC code as for region-based analysis, but without the explanatory variable $x$ in the model (\Cref{eq:bml}). To reduce computational and memory complexity, the \emph{z}-statistic images were downsampled to an isotropic resolution of 5mm. Using the GPU-based \texttt{nuts\char`_numpyro} sampler\citep{phan_composable_2019} with default settings, we were able to draw 2,000 posterior samples of the mean effect parameter for each of the 14,752 voxels. Sampling four chains took 23 minutes on four Nvidia Tesla V100 GPUs. -The resulting posterior probabilities are shown in Figure~\Cref{fig:vox} overlaid with the meta-analytic map for the term ``emotional faces'' obtained from NeuroQuery \citep{dockes_neuroquery_2020}. The posterior probability map is consistent with meta-analytic results, showing strong statistical evidence in visual cortex and amygdala voxels. +The resulting posterior probabilities are shown in \Cref{fig:vox} overlaid with the meta-analytic map for the term ``emotional faces'' obtained from NeuroQuery\citep{dockes_neuroquery_2020}. The posterior probability map is consistent with meta-analytic results, showing strong statistical evidence in visual cortex and amygdala voxels.
The posterior probability maps also reveal numerous other clusters of strong statistical evidence for both positive and negative effects. This implementation extension shows that large multilevel models are approaching feasibility, suggesting an exciting new avenue for statistical analysis of neuroimaging data. Next steps will be to investigate how to interpret and report these posterior maps, and to try more complex models that include additional model terms.
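Based on the parameter description in the rba summary, the region-level model (labelled eq:bml in the paper, but not reproduced in this patch) plausibly takes a form along these lines; this is a reconstruction from the prose, not the authors' exact specification:

```latex
\begin{align*}
y_{rs} &\sim \mathcal{N}(\mu_{rs}, \sigma^2), \\
\mu_{rs} &= (\alpha_0 + \theta_{0r}) + (\alpha_1 + \theta_{1r})\, x_s + \eta_s, \\
(\theta_{0r}, \theta_{1r})^\top &\sim \mathcal{N}\!\left(\boldsymbol{0}, \boldsymbol{S}_{2\times 2}\right), \qquad
\eta_s \sim \mathcal{N}(0, \tau^2),
\end{align*}
```

where $y_{rs}$ is the effect estimate of subject $s$ in region $r$ and $x_s$ is the subject-level explanatory variable - the term dropped in the voxel-level analysis described above. The symbols follow the parameter glossary in the patched text: $\alpha_0$, $\alpha_1$ are the overall mean and slope, $\theta_{0r}$, $\theta_{1r}$ the region-level deviations with covariance $\boldsymbol{S}_{2\times 2}$, and $\eta_s$ the subject effect with standard deviation $\tau$.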