diff --git a/main.tex b/main.tex index a2da02f..d397cc5 100644 --- a/main.tex +++ b/main.tex @@ -289,11 +289,11 @@ \section*{Introduction} The Organisation of Human Brain Mapping BrainHack (shortened to OHBM -Brainhack in the article) is a yearly satellite event of the main OHBM +Brainhack herein) is a yearly satellite event of the main OHBM meeting, organised by the Open Science Special Interest Group following the model of Brainhack hackathons\citep{Gau2021}. Where other hackathons set up a competitive environment based on -outperforming other participants' projects, Brainhacks fosters a +outperforming other participants' projects, Brainhacks foster a collaborative environment in which participants can freely collaborate and exchange ideas within and between projects. @@ -389,7 +389,7 @@ \section{Traintrack} replaced tutorial lectures in the previous editions with curated online educational contents, released them prior to the main event, and attempted to integrate them with the hacktrack projects. This format -also provides more time (i.e. schedule) and space (i.e. minimising large +also provides more time (i.e.\ schedule) and space (i.e.\ minimising large space not used for hacking) for attendees to self-organise. Participants were encouraged to form study groups on five suggested topics: 1) setting up your system for analysis 2) python for data analysis, 3) @@ -471,8 +471,13 @@ \section{Platforms, website, and IT} \section{Project Reports} -In total, 23 projects were presented at the Brainhack, of which 14 submitted a written -report. +The peculiar nature of a Brainhack\citep{Gau2021} is reflected in the projects developed during the event, which can span very different types of tasks. +While most projects feature typical `hackathon-style' software development, in the form of improving software integration (\Cref{sec:DLDI}), API refactoring (\Cref{sec:Neuroscout}), or the creation of new toolboxes and platforms (\Cref{sec:NeuroCausal,sec:NARPS,sec:pymc}), the inclusion of newcomers and participants with less software development experience can foster projects oriented towards user testing (\Cref{sec:DLC,sec:NARPS}) or documentation compilation (\Cref{sec:physiopy}). +The scientific scope of Brainhacks was reflected in projects revolving around data exploration (\Cref{sec:AHEAD,sec:HyppoMRIQC}) or model development (\Cref{sec:pymc}), or adding aspects of open science practices (namely, the Brain Imaging Data Structure) to toolboxes (\Cref{sec:FLUX,sec:vasomosaic}). +Finally, fostering a collaborative environment and avoiding pitting projects against each other not only opens up the possibility for participants to move fluidly between groups, but also allows for projects whose sole aim is supporting other projects (\Cref{sec:BHC}), learning new skills while having fun (\Cref{sec:explodingbrains}), or fostering discussions among participants to improve the adoption of open science practices (\Cref{sec:metadata}). + + +What follows are the 14 submitted reports from the 23 projects presented at the OHBM Brainhack.
\subfile{summaries/ahead-project.tex} \subfile{summaries/brainhack-cloud.tex} diff --git a/summaries/VASOMOSAIC.tex b/summaries/VASOMOSAIC.tex index 55e6f1e..4318452 100644 --- a/summaries/VASOMOSAIC.tex +++ b/summaries/VASOMOSAIC.tex @@ -2,7 +2,7 @@ \begin{document} -\subsection{MOSAIC for VASO fMRI} +\subsection{MOSAIC for VASO fMRI}\label{sec:vasomosaic} \authors{Renzo (Laurentius) Huber, % Remi Gau, % @@ -14,7 +14,7 @@ \subsection{MOSAIC for VASO fMRI} Vascular Space Occupancy (VASO) is a functional magnetic resonance imaging (fMRI) method that is used for high-resolution cortical layer-specific imaging\citep{Huber2021a}. Currently, the most popular sequence for VASO at modern SIEMENS scanners is the one by \textcite{Stirnberg2021a} from the DZNE in Bonn, which is employed at more than 30 research labs worldwide. This sequence concomitantly acquires fMRI BOLD and blood volume signals. In the SIEMENS' reconstruction pipeline, these two complementary fMRI contrasts are mixed together within the same time series, making the outputs counter-intuitive for users. Specifically: \begin{itemize} - \item The 'raw' NIfTI converted time-series are not BIDS compatible (see \href{https://github.com/bids-standard/bids-specification/issues/1001}{https://github.com/bids-standard/bids-specification/issues/1001}). + \item The `raw' NIfTI-converted time series are not BIDS compatible (see \url{https://github.com/bids-standard/bids-specification/issues/1001}). \item The order of odd and even BOLD and VASO image TRs is unprincipled, making the ordering dependent on the specific implementation of NIfTI converters. \end{itemize} @@ -34,8 +34,8 @@ \subsection{MOSAIC for VASO fMRI} \label{fig:VASOMOSAIC} \end{figure} -Furthermore, Remi Gau, generated a template dataset that exemplifies how one could to store layer-fMRI VASO data. This includes all the meta data for ‘raw’ and ‘derivatives’. Link to this VASO fMRI BIDS demo: \href{https://gin.g-node.org/RemiGau/ds003216/src/bids_demo}{https://gin.g-node.org/RemiGau/ds003216/src/bids\textunderscore demo}. +Furthermore, Remi Gau generated a template dataset that exemplifies how one could store layer-fMRI VASO data. This includes all the metadata for `raw' and `derivatives'. Link to this VASO fMRI BIDS demo: \url{https://gin.g-node.org/RemiGau/ds003216/src/bids_demo}. -Acknowledgements: We thank Chris Rodgers for instructions on how to overwrite existing reconstruction binaries on the SIEMENS scanner without rebooting. We thank David Feinberg, Alex Beckett and Samantha Ma for helping in testing the new reconstruction binaries at the Feinbergatron scanner in Berkeley via remote scanning. We thank Maastricht University Faculty of Psychology and Neuroscience for supporting this project with 2.5 hours of 'development scan time'. +Acknowledgements: We thank Chris Rodgers for instructions on how to overwrite existing reconstruction binaries on the SIEMENS scanner without rebooting. We thank David Feinberg, Alex Beckett and Samantha Ma for helping in testing the new reconstruction binaries at the Feinbergatron scanner in Berkeley via remote scanning. We thank Maastricht University Faculty of Psychology and Neuroscience for supporting this project with 2.5 hours of `development scan time'.
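+To illustrate the practical consequence of the interleaved ordering described above, a minimal nibabel sketch of how one might split such a combined time series into its two contrasts is shown below. This is an illustration only, not part of the MOSAIC reconstruction: the input file name and the assignment of even/odd volumes to BOLD or VASO are assumptions, since (as noted above) the actual ordering depends on the NIfTI converter.
+\begin{verbatim}
# Illustrative sketch only (not part of the MOSAIC reconstruction):
# splitting an interleaved BOLD/VASO series into two 4D NIfTI files.
# The input file name and the parity-to-contrast assignment are
# assumptions; check your converter's ordering before relying on it.
import nibabel as nib

img = nib.load("combined_bold_vaso.nii.gz")  # hypothetical combined series
data = img.get_fdata()

# Slice every second volume along the time axis.
even = nib.Nifti1Image(data[..., 0::2], img.affine, img.header)
odd = nib.Nifti1Image(data[..., 1::2], img.affine, img.header)

# Assumed mapping: even volumes = BOLD, odd volumes = VASO.
nib.save(even, "sub-01_bold.nii.gz")
nib.save(odd, "sub-01_vaso.nii.gz")

# Note: each output now has twice the effective volume TR of the
# combined series, so the timing metadata must be updated accordingly.
\end{verbatim}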
\end{document} diff --git a/summaries/ahead-project.tex b/summaries/ahead-project.tex index 8bd432a..4128d58 100644 --- a/summaries/ahead-project.tex +++ b/summaries/ahead-project.tex @@ -2,7 +2,7 @@ \begin{document} -\subsection{Exploring the AHEAD brains together} +\subsection{Exploring the AHEAD brains together}\label{sec:AHEAD} \authors{Alessandra Pizzuti, % Sebastian Dresbach, % diff --git a/summaries/brainhack-cloud.tex b/summaries/brainhack-cloud.tex index fa56496..00921c3 100644 --- a/summaries/brainhack-cloud.tex +++ b/summaries/brainhack-cloud.tex @@ -2,7 +2,7 @@ \begin{document} -\subsection{Brainhack Cloud} +\subsection{Brainhack Cloud}\label{sec:BHC} \authors{Steffen Bollmann, % Isil Poyraz Bilgin, % @@ -13,16 +13,16 @@ \subsection{Brainhack Cloud} Today’s neuroscientific research deals with vast amounts of electrophysiological, neuroimaging and behavioural data. The progress in the field is enabled by the widespread availability of powerful computing and storage resources. Cloud computing in particular offers the opportunity to flexibly scale resources and it enables global collaboration across institutions. However, cloud computing is currently not widely used in the neuroscience field, although it could provide important scientific, economical, and environmental gains considering its effect in collaboration and sustainability\citep{apon2015, OracleSustainabilty}. One problem is the availability of cloud resources for researchers, because Universities commonly only provide on-premise high performance computing resources. The second problem is that many researchers lack the knowledge on how to efficiently use cloud resources. This project aims to address both problems by providing free access to cloud resources for the brain imaging community and by providing targeted training and support. -A team of brainhack volunteers (https://brainhack.org/brainhack\_cloud/admins/team/) applied for Oracle Cloud Credits to support open-source projects in and around brainhack with cloud resources. The project was generously funded by Oracle Cloud for Research\citep{OracleResearch} with \$230,000.00 AUD from the 29th of January 2022 until the 28th of January 2024. To facilitate the uptake of cloud computing in the field, the team built several resources (https://brainhack.org/brainhack\_cloud/tutorials/) to lower the entry barriers for members of the Brainhack community. +A team of brainhack volunteers (\url{https://brainhack.org/brainhack_cloud/admins/team/}) applied for Oracle Cloud Credits to support open-source projects in and around brainhack with cloud resources. The project was generously funded by Oracle Cloud for Research\citep{OracleResearch} with \$230,000.00 AUD from the 29th of January 2022 until the 28th of January 2024. To facilitate the uptake of cloud computing in the field, the team built several resources (\url{https://brainhack.org/brainhack_cloud/tutorials/}) to lower the entry barriers for members of the Brainhack community. -During the 2022 Brainhack, the team gave a presentation to share the capabilities that cloud computing offers to the Brainhack community, how they can place their resource requests and where they can get help. In total 11 projects were onboarded to the cloud and supported in their specific use cases: One team utilised the latest GPU architecture to take part in the Anatomical Tracings of Lesions After Stroke Grand Challenge. 
Others developed continuous integration tests for their tools using for example a full Slurm HPC cluster in the cloud to test how their tool behaves in such an environment. Another group deployed the Neurodesk.org\citep{NeuroDesk} project on a Kubernetes cluster to make it available for a student cohort to learn about neuroimage processing and to get access to all neuroimaging tools via the browser. All projects will have access to these cloud resources until 2024 and we are continuously onboarding new projects onto the cloud (https://brainhack.org/brainhack\_cloud/docs/request/). +During the OHBM 2022 Brainhack, the team gave a presentation to share the capabilities that cloud computing offers to the Brainhack community, how participants can place their resource requests, and where they can get help. In total, 11 projects were onboarded to the cloud and supported in their specific use cases: one team utilised the latest GPU architecture to take part in the Anatomical Tracings of Lesions After Stroke Grand Challenge. Others developed continuous integration tests for their tools using, for example, a full Slurm HPC cluster in the cloud to test how their tool behaves in such an environment. Another group deployed the Neurodesk.org\citep{NeuroDesk} project on a Kubernetes cluster to make it available for a student cohort to learn about neuroimage processing and to get access to all neuroimaging tools via the browser. All projects will have access to these cloud resources until 2024 and we are continuously onboarding new projects onto the cloud (\url{https://brainhack.org/brainhack_cloud/docs/request/}). The Brainhack Cloud team plans to run a series of training modules in various Brainhack events throughout the year to reach researchers from various backgrounds and increase their familiarity with the resources provided for the community while providing free and fair access to the computational resources. The training modules will cover how to use and access computing and storage resources (e.g., generating SSH keys), to more advanced levels covering the use of cloud native technology like software containers (e.g., Docker/Singularity), container orchestration (e.g., Kubernetes), object storage (e.g, S3), and infrastructure as code (e.g., Terraform). \begin{figure} \centering \includegraphics[width=0.5\textwidth]{brainhack_cloud.png} - \caption{A team of brainhack volunteers, applied for Oracle Cloud Credits to support open source projects in and around brainhack with powerful cloud resources on the Oracle Cloud: https://brainhack.org/brainhack\_cloud/ + \caption{A team of brainhack volunteers applied for Oracle Cloud Credits to support open source projects in and around brainhack with powerful cloud resources on the Oracle Cloud: \url{https://brainhack.org/brainhack_cloud/} } \label{fig:cloud} \end{figure} diff --git a/summaries/datalad-catalog.tex b/summaries/datalad-catalog.tex index 9e01e3f..0df4c74 100644 --- a/summaries/datalad-catalog.tex +++ b/summaries/datalad-catalog.tex @@ -2,7 +2,7 @@ \begin{document} -\subsection{DataLad Catalog} +\subsection{DataLad Catalog}\label{sec:DLC} \authors{Stephan Heunis, % Adina S.
Wagner, % diff --git a/summaries/datalad-dataverse.tex b/summaries/datalad-dataverse.tex index 0c3c7d9..82d0252 100644 --- a/summaries/datalad-dataverse.tex +++ b/summaries/datalad-dataverse.tex @@ -2,7 +2,7 @@ \begin{document} -\subsection{DataLad-Dataverse integration} +\subsection{DataLad-Dataverse integration}\label{sec:DLDI} \authors{Benjamin Poldrack, % Jianxiao Wu, % diff --git a/summaries/exploding_brains.tex b/summaries/exploding_brains.tex index 7e7ae77..9fb17ff 100644 --- a/summaries/exploding_brains.tex +++ b/summaries/exploding_brains.tex @@ -2,7 +2,7 @@ \begin{document} -\subsection{Exploding brains in Julia} +\subsection{Exploding brains in Julia}\label{sec:explodingbrains} \authors{\"Omer Faruk G\"ulban, % Leonardo Muller-Rodriguez} diff --git a/summaries/flux.tex b/summaries/flux.tex index 35c01ab..2d2f920 100644 --- a/summaries/flux.tex +++ b/summaries/flux.tex @@ -2,13 +2,13 @@ \begin{document} -\subsection{FLUX: A pipeline for MEG analysis and beyond} +\subsection{FLUX: A pipeline for MEG analysis and beyond}\label{sec:FLUX} \authors{Oscar Ferrante, % Tara Ghafari, % Ole Jensen} -FLUX\citep{Ferrante2022} is an open-source pipeline for analysing magnetoencephalography (MEG) data. There are several toolboxes developed by the community to analyse MEG data. While these toolboxes provide a wealth of options for analyses, the many degrees of freedom pose a challenge for reproducible research. The aim of FLUX id to make the analyses steps and setting explicit. For instance, FLUX includes the state-of-the-art suggestions for noise cancellation as well as source modelling including pre-whitening and handling of rank-deficient data. +FLUX\citep{Ferrante2022} is an open-source pipeline for analysing magnetoencephalography (MEG) data. There are several toolboxes developed by the community to analyse MEG data. While these toolboxes provide a wealth of options for analyses, the many degrees of freedom pose a challenge for reproducible research. The aim of FLUX is to make the analysis steps and settings explicit. For instance, FLUX includes state-of-the-art suggestions for noise cancellation as well as source modelling, including pre-whitening and handling of rank-deficient data. So far, the FLUX pipeline has been developed for MNE-Python\citep{Gramfort2014} and FieldTrip\citep{Oostenveld2011} with a focus on the MEGIN/Elekta system and it includes the associated documents as well as codes. The long-term plan for this pipeline is to make it more flexible and versatile to use. One key motivation for this is to facilitate open science with the larger aim of fostering the replicability of MEG research.
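+As a concrete illustration of the kind of analysis steps FLUX makes explicit, the MNE-Python sketch below covers noise cancellation (Maxwell filtering for MEGIN/Elekta data), epoching, and a noise-covariance estimate with explicit rank handling, which is what later pre-whitens the data during source modelling. This is not the FLUX pipeline itself: the file name, trigger channel, event codes, and parameter values are placeholders, and FLUX documents its own recommended settings.
+\begin{verbatim}
# Minimal MNE-Python sketch of steps a MEGIN/Elekta pipeline standardises.
# NOT the FLUX pipeline: file name, triggers, and parameters are placeholders.
import mne

raw = mne.io.read_raw_fif("sub-01_task-example_meg.fif", preload=True)

# Noise cancellation: Maxwell filtering (tSSS) for MEGIN/Elekta data.
raw_sss = mne.preprocessing.maxwell_filter(raw, st_duration=10.0)

# Epoching around (assumed) stimulus triggers on the standard MEGIN channel.
events = mne.find_events(raw_sss, stim_channel="STI101")
epochs = mne.Epochs(raw_sss, events, event_id={"stim": 1},
                    tmin=-0.5, tmax=1.0, baseline=(None, 0), preload=True)

# Noise covariance with explicit rank handling: tSSS leaves the data
# rank-deficient, and this covariance is used to pre-whiten the data
# in subsequent source modelling.
noise_cov = mne.compute_covariance(epochs, tmax=0.0,
                                   method="shrunk", rank="info")
\end{verbatim}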
diff --git a/summaries/hyppomriqc.tex b/summaries/hyppomriqc.tex index af43397..57bcbeb 100644 --- a/summaries/hyppomriqc.tex +++ b/summaries/hyppomriqc.tex @@ -2,7 +2,7 @@ \begin{document} -\subsection{Evaluating discrepancies in hippocampal segmentation protocols using automatic prediction of MRI quality (MRIQC)} +\subsection{Evaluating discrepancies in hippocampal segmentation protocols using automatic prediction of MRI quality (MRIQC)}\label{sec:HyppoMRIQC} \authors{Jacob Sanz-Robinson, % Mohammad Torabi, % diff --git a/summaries/metadata-community.tex b/summaries/metadata-community.tex index db4c5a7..d5e21fe 100644 --- a/summaries/metadata-community.tex +++ b/summaries/metadata-community.tex @@ -2,7 +2,7 @@ \begin{document} -\subsection{Accelerating adoption of metadata standards for dataset descriptors} +\subsection{Accelerating adoption of metadata standards for dataset descriptors}\label{sec:metadata} \authors{Cassandra Gould van Praag, % Felix Hoffstaedter, % @@ -10,7 +10,7 @@ \subsection{Accelerating adoption of metadata standards for dataset descriptors} Thanks to efforts of the neuroimaging community, not least the brainhack community\citep{Gau2021}, datasets are increasingly shared on open data repositories like OpenNeuro\citep{Markiewicz2021-bf} using standards like BIDS\citep{Gorgolewski2016} for interoperability. As the amount of datasets and data repositories increases, we need to find better ways to search across them for samples that fit our research questions. In the same way that the wide adoption of BIDS makes data sharing and tool development easier, the wide adoption of consistent vocabulary for demographic, clinical and other sample metadata would make data search and integration easier. We imagine a future platform that allows cross dataset search and the pooling of data across studies. Efforts to establish such metadata standards have had some success in other communities\citep{Field2008-kw, Stang2010-nl}, but adoption in the neuroscience community so far has been slow. We have used the space of the brainhack to discuss which challenges are hindering wide adoption of metadata standards in the neuroimaging community and what could be done to accelerate it. -We believe that an important social challenge for the wider adoption of metadata standards is that it is hard to demonstrate their value without a practical use case. We therefore think that rather than focusing on building better standards, in the short term we need to prioritize small, but functional demonstrations that help convey the value of these standards and focus on usability and ease of adoption. Having consistent names and format for even a few metadata variables like age, sex, and diagnosis already allows for interoperability and search across datasets. Selecting a single vocabulary that must be used for annotating e.g. diagnosis necessarily lacks some precision but avoids the need to align slightly different versions of the same terms. Accessible tools can be built to facilitate the annotation process of such a basic metadata standard. The best standard will be poorly adopted if there are no easy to use tools that implement it. Efforts like the neurobagel project (neurobagel.org/) are trying to implement this approach to demonstrate a simple working use case for cross dataset integration and search. Our goal is to use such simpler demonstrations to build awareness and create a community around the goal of consistent metadata adoption. 
+We believe that an important social challenge for the wider adoption of metadata standards is that it is hard to demonstrate their value without a practical use case. We therefore think that rather than focusing on building better standards, in the short term we need to prioritize small but functional demonstrations that help convey the value of these standards and focus on usability and ease of adoption. Having consistent names and formats for even a few metadata variables like age, sex, and diagnosis already allows for interoperability and search across datasets. Selecting a single vocabulary that must be used for annotating e.g.\ diagnosis necessarily lacks some precision but avoids the need to align slightly different versions of the same terms. Accessible tools can be built to facilitate the annotation process of such a basic metadata standard. The best standard will be poorly adopted if there are no easy-to-use tools that implement it. Efforts like the neurobagel project (\url{https://neurobagel.org/}) are trying to implement this approach to demonstrate a simple working use case for cross-dataset integration and search. Our goal is to use such simpler demonstrations to build awareness and create a community around the goal of consistent metadata adoption. Our long term goal is to use the awareness of the value of shared metadata standards to build a community to curate the vocabularies used for annotation. The initially small number of metadata variables will have to be iteratively extended through a community driven process to determine what fields should be standardized to serve concrete use cases. Rather than creating new vocabularies the goal should be to curate a list of existing ones that can be contributed to where terms are inaccurate or missing. The overall goal of such a community should be to build consensus on and maintain shared standards for the annotation of neuroimaging metadata that support search and integration of data for an ever more reproducible and generalizable neuroscience. diff --git a/summaries/narps-open-pipelines.tex b/summaries/narps-open-pipelines.tex index 861f50d..f3f2693 100644 --- a/summaries/narps-open-pipelines.tex +++ b/summaries/narps-open-pipelines.tex @@ -2,7 +2,7 @@ \begin{document} -\subsection{The NARPS Open Pipelines Project} +\subsection{The NARPS Open Pipelines Project}\label{sec:NARPS} \authors{Elodie Germani, % Arshitha Basavaraj, % diff --git a/summaries/neurocausal.tex b/summaries/neurocausal.tex index 5396ae5..3f6dbf6 100644 --- a/summaries/neurocausal.tex +++ b/summaries/neurocausal.tex @@ -2,7 +2,7 @@ \begin{document} -\subsection{NeuroCausal: Development of an Open Source Platform for the Storage, Sharing, Synthesis, and Meta-Analysis of Neuropsychological Data} +\subsection{NeuroCausal: Development of an Open Source Platform for the Storage, Sharing, Synthesis, and Meta-Analysis of Neuropsychological Data}\label{sec:NeuroCausal} \authors{Isil Poyraz Bilgin, % Francois Paugam, % @@ -15,7 +15,7 @@ \subsection{NeuroCausal: Development of an Open Source Platform for the Storage, Cognitive neuroscience has witnessed great progress since modern neuroimaging embraced an open science framework, with the adoption of shared principles\citep{Wilkinson2016}, standards\citep{Gorgolewski2016}, and ontologies\citep{poldrack_cognitive_2011}, as well as practices of meta-analysis\citep{dockes_neuroquery_2020, yarkoni_large-scale_2011} and data sharing\citep{gorgolewski2015}.
However, while functional neuroimaging data provide correlational maps between cognitive functions and activated brain regions, its usefulness in determining causal link between specific brain regions and given behaviors or functions is disputed\citep{weber_functional_2010, siddiqi_causal_2022}. On the contrary, neuropsychological data enable causal inference, highlighting critical neural substrates and opening a unique window into the inner workings of the brain\citep{price_evolution_2018}. Unfortunately, the adoption of Open Science practices in clinical settings is hampered by several ethical, technical, economic, and political barriers, and as a result, open platforms enabling access to and sharing clinical (meta)data are scarce\citep{lariviere_enigma_2021}. -With our project, NeuroCausal (https://neurocausal.github.io/), we aim to build an online platform and community that allows open sharing, storage, and synthesis of clinical (meta) data crucial for the development of modern, transdiagnostic, accessible, and replicable (i.e., FAIR: Findability, Accessibility, Interoperability, and Reusability) neuropsychology. The project is organized into two infrastructural stages: first, published peer-reviewed papers will be scrapped to collect already available (meta)data; second, our platform will allow direct uploading of clinical (de-identified) brain maps and their corresponding metadata. +With our project, NeuroCausal (\url{https://neurocausal.github.io/}), we aim to build an online platform and community that allows open sharing, storage, and synthesis of clinical (meta)data crucial for the development of modern, transdiagnostic, accessible, and replicable (i.e., FAIR: Findability, Accessibility, Interoperability, and Reusability) neuropsychology. The project is organized into two infrastructural stages: first, published peer-reviewed papers will be scraped to collect already available (meta)data; second, our platform will allow direct uploading of clinical (de-identified) brain maps and their corresponding metadata. The meta-analysis pipeline developed for the first stage of the project is inspired by and built upon the functionalities of NeuroQuery\citep{dockes_neuroquery_2020}, a successful large-scale neuroimaging meta-analytic platform. The first stage is the development of the code base allowing (1) downloading and filtering of neuropsychological papers, (2) extraction of reported brain lesion locations and their conversion into a common reference space (3) extraction of clinical and behavioral symptoms and their translation into a common annotation scheme, (4) learning the causal mapping between the neural and neuropsychological information gathered. diff --git a/summaries/neuroscout.tex b/summaries/neuroscout.tex index 2d92726..049163f 100644 --- a/summaries/neuroscout.tex +++ b/summaries/neuroscout.tex @@ -2,7 +2,7 @@ \begin{document} -\subsection{Neuroscout: A platform for fast and flexible re-analysis of (naturalistic) fMRI studies} +\subsection{Neuroscout: A platform for fast and flexible re-analysis of (naturalistic) fMRI studies}\label{sec:Neuroscout} \authors{Alejandro De La Vega, % Roberta Rocca, % @@ -17,7 +17,7 @@ \subsection{Neuroscout: A platform for fast and flexible re-analysis of (natural Neuroscout is an end-to-end platform for analysis of naturalistic fMRI data designed to facilitate the adoption of robust and generalizable research practices.
Neuroscout’s goal is to make it easy to analyze complex naturalistic fMRI datasets by providing an integrated platform for model specification and automated statistical modeling, reducing technical barriers. Importantly, Neuroscout is at its core a platform for reproducible analysis of fMRI data in general, and builds upon a set of open standards and specifications to ensure analyses are Findable, Accessible, Interoperable, and Reusable (FAIR). -In the OHBM Hackathon, we iterated on several important projects that substantially improved the general usability of the Neuroscout platform. First, we launched a revamped and unified documentation which links together all of the subcomponents of the Neuroscout platform (https://neuroscout.github.io/neuroscout-docs/). Second, we facilitated access to Neuroscout’s data sources by simplifying the design of Python API, and providing high-level utility functions for easy programmatic data queries. Third, we updated a list of candidate naturalistic and non-naturalistic datasets amenable for indexing by the Neuroscout platform, ensuring the platform stays up to date with the latest public datasets. +In the OHBM Hackathon, we iterated on several important projects that substantially improved the general usability of the Neuroscout platform. First, we launched revamped and unified documentation that links together all of the subcomponents of the Neuroscout platform (\url{https://neuroscout.github.io/neuroscout-docs/}). Second, we facilitated access to Neuroscout’s data sources by simplifying the design of the Python API and providing high-level utility functions for easy programmatic data queries. Third, we updated a list of candidate naturalistic and non-naturalistic datasets amenable to indexing by the Neuroscout platform, ensuring the platform stays up to date with the latest public datasets. In addition, important work was done to expand the types of analyses that can be performed with naturalistic data in the Neuroscout platform. Notably, progress was made in integrating Neuroscout with Himalaya, a library for efficient voxel wide encoding modeling with support for banded penalized regression. In addition, a custom image-on-scalar analysis was prototyped on naturalistic stimuli via the publicly available naturalistic features available in the Neuroscout API. Finally, we also worked to improve documentation and validation for BIDS StatsModels, a specification for neuroimaging statistical models which underlies Neuroscout’s automated model fitting pipeline. diff --git a/summaries/physiopy-documentation.tex b/summaries/physiopy-documentation.tex index ce02afa..2161794 100644 --- a/summaries/physiopy-documentation.tex +++ b/summaries/physiopy-documentation.tex @@ -2,7 +2,7 @@ \begin{document} -\subsection{Physiopy - Documentation of Physiological Signal Best Practices} +\subsection{Physiopy - Documentation of Physiological Signal Best Practices}\label{sec:physiopy} \authors{Sarah E. Goodale, % Ines Esteves, % diff --git a/summaries/rba.tex b/summaries/rba.tex index a1941b4..0f61fb5 100644 --- a/summaries/rba.tex +++ b/summaries/rba.tex @@ -2,7 +2,7 @@ \begin{document} -\subsection{Handling multiple testing problem through effect calibration: implementation using PyMC} +\subsection{Handling multiple testing problem through effect calibration: implementation using PyMC}\label{sec:pymc} \authors{% Lea Waller, %