More to the switch to AMA
smoia committed Jan 8, 2024
1 parent bfceba7 commit 9fa3b82
Showing 14 changed files with 33 additions and 33 deletions.
4 changes: 2 additions & 2 deletions main.tex
@@ -291,7 +291,7 @@ \section*{Introduction}
The Organisation of Human Brain Mapping BrainHack (shortened to OHBM
Brainhack in the article) is a yearly satellite event of the main OHBM
meeting, organised by the Open Science Special Interest Group following
-the model of Brainhack hackathons \citep{Gau2021}.
+the model of Brainhack hackathons\citep{Gau2021}.
Where other hackathons set up a competitive environment based on
outperforming other participants' projects, Brainhacks fosters a
collaborative environment in which participants can freely collaborate
@@ -503,7 +503,7 @@ \section{Conclusion and future directions}

The organisation managed to provide a positive onsite environment,
aiming to allow participants to self-organise in the spirit of the
-Brainhack \citep{Gau2021}, with plenty of moral - and physical - support.
+Brainhack\citep{Gau2021}, with plenty of moral - and physical - support.

The technical setup, based on heavy automatisation flow to allow project
submission to be streamlined, was a fundamental help to the organisation
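The pattern applied across all 14 files in this commit is mechanical: AMA style renders citations as superscript numbers, so the space before each `\citep` command is removed to keep the superscript number attached to the word it annotates. A minimal sketch of the effect, assuming the document's preamble enables superscript numeric citations via natbib's `super` option (the preamble is not part of this diff):

```latex
% Assumption: the preamble (not shown in this diff) enables
% AMA-style superscript numeric citations, e.g. with natbib:
\usepackage[super,comma,sort&compress]{natbib}

% Before: renders as "hackathons 1" -- the space detaches
% the superscript number from the word.
the model of Brainhack hackathons \citep{Gau2021}.

% After: renders as "hackathons" with the superscript 1
% directly attached, as AMA style requires.
the model of Brainhack hackathons\citep{Gau2021}.
```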
4 changes: 2 additions & 2 deletions summaries/VASOMOSAIC.tex
@@ -11,7 +11,7 @@ \subsection{MOSAIC for VASO fMRI}
\"Omer Faruk G\"ulban, %
Benedikt A. Poser}

-Vascular Space Occupancy (VASO) is a functional magnetic resonance imaging (fMRI) method that is used for high-resolution cortical layer-specific imaging \citep{Huber2021a}. Currently, the most popular sequence for VASO at modern SIEMENS scanners is the one by \textcite{Stirnberg2021a} from the DZNE in Bonn, which is employed at more than 30 research labs worldwide. This sequence concomitantly acquires fMRI BOLD and blood volume signals. In the SIEMENS' reconstruction pipeline, these two complementary fMRI contrasts are mixed together within the same time series, making the outputs counter-intuitive for users. Specifically:
+Vascular Space Occupancy (VASO) is a functional magnetic resonance imaging (fMRI) method that is used for high-resolution cortical layer-specific imaging\citep{Huber2021a}. Currently, the most popular sequence for VASO at modern SIEMENS scanners is the one by \textcite{Stirnberg2021a} from the DZNE in Bonn, which is employed at more than 30 research labs worldwide. This sequence concomitantly acquires fMRI BOLD and blood volume signals. In the SIEMENS' reconstruction pipeline, these two complementary fMRI contrasts are mixed together within the same time series, making the outputs counter-intuitive for users. Specifically:

\begin{itemize}
\item The 'raw' NIfTI converted time-series are not BIDS compatible (see \href{https://github.com/bids-standard/bids-specification/issues/1001}{https://github.com/bids-standard/bids-specification/issues/1001}).
@@ -21,7 +21,7 @@ \subsection{MOSAIC for VASO fMRI}

Workarounds with 3D distortion correction, results in interpolation artifacts. Alternative workarounds without MOSAIC decorators result in unnecessarily large data sizes.

-In the previous Brainhack \citep{Gau2021}, we extended the existing 3D-MOSAIC functor that was previously developed by Benedikt Poser and Philipp Ehses. This functor had been previously used to sort volumes of images by dimensions of echo-times, by RF-channels, and by magnitude and phase signals. In this Brainhack, we successfully extended and validated this functor to also support the dimensionality of SETs (that is representing BOLD and VASO contrast).
+In the previous Brainhack\citep{Gau2021}, we extended the existing 3D-MOSAIC functor that was previously developed by Benedikt Poser and Philipp Ehses. This functor had been previously used to sort volumes of images by dimensions of echo-times, by RF-channels, and by magnitude and phase signals. In this Brainhack, we successfully extended and validated this functor to also support the dimensionality of SETs (that is representing BOLD and VASO contrast).

We are happy to share the compiled SIEMENS ICE (Image Calculation Environment) functor that does this sorting. Current VASO users, who want to upgrade their reconstruction pipeline to get the MOSAIC sorting feature too, can reach out to Renzo Huber ([email protected]) or R\"udiger Stirnberg ([email protected]).

4 changes: 2 additions & 2 deletions summaries/ahead-project.tex
@@ -17,11 +17,11 @@ \subsubsection{Introduction}

\subsubsection{Results}

-Visualization and annotation of large neuroimaging data sets can be challenging, in particular for collaborative data exploration. Here we tested two different infrastructures: BrainBox \url{https://brainbox.pasteur.fr/}, a web-based visualization and annotation tool for collaborative manual delineation of brain MRI data, see e.g. \citep{heuer_evolution_2019}, and Dandi Archive \url{https://dandiarchive.org/}, an online repository of microscopy data with links to Neuroglancer \url{https://github.com/google/neuroglancer}. While Brainbox could not handle the high resolution data well, Neuroglancer visualization was successful after conversion to the Zarr microscopy format (\Cref{fig:ahead}A).
+Visualization and annotation of large neuroimaging data sets can be challenging, in particular for collaborative data exploration. Here we tested two different infrastructures: BrainBox \url{https://brainbox.pasteur.fr/}, a web-based visualization and annotation tool for collaborative manual delineation of brain MRI data, see e.g.\citep{heuer_evolution_2019}, and Dandi Archive \url{https://dandiarchive.org/}, an online repository of microscopy data with links to Neuroglancer \url{https://github.com/google/neuroglancer}. While Brainbox could not handle the high resolution data well, Neuroglancer visualization was successful after conversion to the Zarr microscopy format (\Cref{fig:ahead}A).

To help users explore the original high-resolution microscopy sections, we also built a python notebook to automatically query the stains around a given MNI coordinate using the Nighres toolbox~\citep{huntenburg_nighres_2018} (\Cref{fig:ahead}B).

-For the cortical profile analysis we restricted our analysis on S1 (BA3b) as a part of the somato-motor area from one hemisphere of an individual human brain. S1 is rather thin (\(\sim\)2mm) and it has a highly myelinated layer 4 (see arrow \Cref{fig:ahead}C). In a future step, we are aiming to characterize differences between S1 (BA3b) and M1 (BA4). For now, we used the MRI-quantitative-R1 contrast to define, segment the region of interest and compute cortical depth measurement. In ITK-SNAP \citep{Yushkevich2006} we defined the somato-motor area by creating a spherical mask (radius 16.35mm) around the ‘hand knob’ in M1. To improve the intensity homogeneity of the qMRI-R1 images, we ran a bias field correction (N4BiasFieldCorrection, \citep{Cox1996}). Tissue segmentation was restricted to S1 and was obtained by combining four approaches: (i) fsl-fast \citep{Smith2004} for initial tissues probability map, (ii) semi-automatic histogram fitting in ITK-SNAP, (iii) Segmentator \citep{Gulban2018}, and (iv) manual editing. We used the LN2\_LAYERS program from LAYNII open source software \citep{Huber2021} to compute the equi-volume cortical depth measurements for the gray matter. Finally, we evaluated cortical depth profiles for three quantitative MRI contrasts (R1, R2, proton density) and three microscopy contrasts (thionin, bieloschowsky, parvalbumin) by computing a voxel-wise 2D histogram of image intensity (\Cref{fig:ahead}C). Some challenges are indicated by arrows 2 and 3 in the lower part of \Cref{fig:ahead}C.
+For the cortical profile analysis we restricted our analysis on S1 (BA3b) as a part of the somato-motor area from one hemisphere of an individual human brain. S1 is rather thin (\(\sim\)2mm) and it has a highly myelinated layer 4 (see arrow \Cref{fig:ahead}C). In a future step, we are aiming to characterize differences between S1 (BA3b) and M1 (BA4). For now, we used the MRI-quantitative-R1 contrast to define, segment the region of interest and compute cortical depth measurement. In ITK-SNAP\citep{Yushkevich2006} we defined the somato-motor area by creating a spherical mask (radius 16.35mm) around the ‘hand knob’ in M1. To improve the intensity homogeneity of the qMRI-R1 images, we ran a bias field correction (N4BiasFieldCorrection,\citep{Cox1996}). Tissue segmentation was restricted to S1 and was obtained by combining four approaches: (i) fsl-fast\citep{Smith2004} for initial tissues probability map, (ii) semi-automatic histogram fitting in ITK-SNAP, (iii) Segmentator\citep{Gulban2018}, and (iv) manual editing. We used the LN2\_LAYERS program from LAYNII open source software\citep{Huber2021} to compute the equi-volume cortical depth measurements for the gray matter. Finally, we evaluated cortical depth profiles for three quantitative MRI contrasts (R1, R2, proton density) and three microscopy contrasts (thionin, bieloschowsky, parvalbumin) by computing a voxel-wise 2D histogram of image intensity (\Cref{fig:ahead}C). Some challenges are indicated by arrows 2 and 3 in the lower part of \Cref{fig:ahead}C.

From this Brainhack project, we conclude that the richness of the data set must be exploited from multiple points of view, from enhancing the integration of MRI with microscopy data in visualization software to providing optimized multi-contrast and multi-modality data analysis pipeline for high-resolution brain regions.

6 changes: 3 additions & 3 deletions summaries/brainhack-cloud.tex
@@ -11,11 +11,11 @@ \subsection{Brainhack Cloud}
Samuel Guay, %
Johanna Bayer}

-Today’s neuroscientific research deals with vast amounts of electrophysiological, neuroimaging and behavioural data. The progress in the field is enabled by the widespread availability of powerful computing and storage resources. Cloud computing in particular offers the opportunity to flexibly scale resources and it enables global collaboration across institutions. However, cloud computing is currently not widely used in the neuroscience field, although it could provide important scientific, economical, and environmental gains considering its effect in collaboration and sustainability \citep{apon2015, OracleSustainabilty}. One problem is the availability of cloud resources for researchers, because Universities commonly only provide on-premise high performance computing resources. The second problem is that many researchers lack the knowledge on how to efficiently use cloud resources. This project aims to address both problems by providing free access to cloud resources for the brain imaging community and by providing targeted training and support.
+Today’s neuroscientific research deals with vast amounts of electrophysiological, neuroimaging and behavioural data. The progress in the field is enabled by the widespread availability of powerful computing and storage resources. Cloud computing in particular offers the opportunity to flexibly scale resources and it enables global collaboration across institutions. However, cloud computing is currently not widely used in the neuroscience field, although it could provide important scientific, economical, and environmental gains considering its effect in collaboration and sustainability\citep{apon2015, OracleSustainabilty}. One problem is the availability of cloud resources for researchers, because Universities commonly only provide on-premise high performance computing resources. The second problem is that many researchers lack the knowledge on how to efficiently use cloud resources. This project aims to address both problems by providing free access to cloud resources for the brain imaging community and by providing targeted training and support.

-A team of brainhack volunteers (https://brainhack.org/brainhack\_cloud/admins/team/) applied for Oracle Cloud Credits to support open-source projects in and around brainhack with cloud resources. The project was generously funded by Oracle Cloud for Research \citep{OracleResearch} with \$230,000.00 AUD from the 29th of January 2022 until the 28th of January 2024. To facilitate the uptake of cloud computing in the field, the team built several resources (https://brainhack.org/brainhack\_cloud/tutorials/) to lower the entry barriers for members of the Brainhack community.
+A team of brainhack volunteers (https://brainhack.org/brainhack\_cloud/admins/team/) applied for Oracle Cloud Credits to support open-source projects in and around brainhack with cloud resources. The project was generously funded by Oracle Cloud for Research\citep{OracleResearch} with \$230,000.00 AUD from the 29th of January 2022 until the 28th of January 2024. To facilitate the uptake of cloud computing in the field, the team built several resources (https://brainhack.org/brainhack\_cloud/tutorials/) to lower the entry barriers for members of the Brainhack community.

-During the 2022 Brainhack, the team gave a presentation to share the capabilities that cloud computing offers to the Brainhack community, how they can place their resource requests and where they can get help. In total 11 projects were onboarded to the cloud and supported in their specific use cases: One team utilised the latest GPU architecture to take part in the Anatomical Tracings of Lesions After Stroke Grand Challenge. Others developed continuous integration tests for their tools using for example a full Slurm HPC cluster in the cloud to test how their tool behaves in such an environment. Another group deployed the Neurodesk.org \citep{NeuroDesk} project on a Kubernetes cluster to make it available for a student cohort to learn about neuroimage processing and to get access to all neuroimaging tools via the browser. All projects will have access to these cloud resources until 2024 and we are continuously onboarding new projects onto the cloud (https://brainhack.org/brainhack\_cloud/docs/request/).
+During the 2022 Brainhack, the team gave a presentation to share the capabilities that cloud computing offers to the Brainhack community, how they can place their resource requests and where they can get help. In total 11 projects were onboarded to the cloud and supported in their specific use cases: One team utilised the latest GPU architecture to take part in the Anatomical Tracings of Lesions After Stroke Grand Challenge. Others developed continuous integration tests for their tools using for example a full Slurm HPC cluster in the cloud to test how their tool behaves in such an environment. Another group deployed the Neurodesk.org\citep{NeuroDesk} project on a Kubernetes cluster to make it available for a student cohort to learn about neuroimage processing and to get access to all neuroimaging tools via the browser. All projects will have access to these cloud resources until 2024 and we are continuously onboarding new projects onto the cloud (https://brainhack.org/brainhack\_cloud/docs/request/).

The Brainhack Cloud team plans to run a series of training modules in various Brainhack events throughout the year to reach researchers from various backgrounds and increase their familiarity with the resources provided for the community while providing free and fair access to the computational resources. The training modules will cover how to use and access computing and storage resources (e.g., generating SSH keys), to more advanced levels covering the use of cloud native technology like software containers (e.g., Docker/Singularity), container orchestration (e.g., Kubernetes), object storage (e.g, S3), and infrastructure as code (e.g., Terraform).

4 changes: 2 additions & 2 deletions summaries/datalad-catalog.tex
@@ -17,11 +17,11 @@ \subsection{DataLad Catalog}
Remi Gau, %
Yaroslav O. Halchenko}

-The importance and benefits of making research data Findable, Accessible, Interoperable, and Reusable are clear \citep{Wilkinson2016}. But of equal importance is our ethical and legal obligations to protect the personal data privacy of research participants. So we are struck with this apparent contradiction: how can we share our data openly…yet keep it secure and protected?
+The importance and benefits of making research data Findable, Accessible, Interoperable, and Reusable are clear\citep{Wilkinson2016}. But of equal importance is our ethical and legal obligations to protect the personal data privacy of research participants. So we are struck with this apparent contradiction: how can we share our data openly…yet keep it secure and protected?

To address this challenge: structured, linked, and machine-readable metadata presents a powerful opportunity. Metadata provides not only high-level information about our research data (such as study and data acquisition parameters) but also the descriptive aspects of each file in the dataset: such as file paths, sizes, and formats. With this metadata, we can create an abstract representation of the full dataset that is separate from the actual data content. This means that the content can be stored securely, while we openly share the metadata to make our work more FAIR.

-In practice, the distributed data management system DataLad \citep{Halchenko2021} and its extensions for metadata handling and catalog generation are capable of delivering such solutions. \texttt{datalad} (github.com/datalad/datalad) can be used for decentralised management of data as lightweight, portable and extensible representations. \texttt{datalad-metalad} (github.com/datalad/datalad-metalad) can extract structured high- and low-level metadata and associate it with these datasets or with individual files. And at the end of the workflow, \texttt{datalad-catalog} (\url{github.com/datalad/datalad-catalog}) can turn the structured metadata into a user-friendly data browser.
+In practice, the distributed data management system DataLad\citep{Halchenko2021} and its extensions for metadata handling and catalog generation are capable of delivering such solutions. \texttt{datalad} (github.com/datalad/datalad) can be used for decentralised management of data as lightweight, portable and extensible representations. \texttt{datalad-metalad} (github.com/datalad/datalad-metalad) can extract structured high- and low-level metadata and associate it with these datasets or with individual files. And at the end of the workflow, \texttt{datalad-catalog} (\url{github.com/datalad/datalad-catalog}) can turn the structured metadata into a user-friendly data browser.

This hackathon project focused on the first round of user testing of the alpha version of \texttt{datalad-catalog}, by creating the first ever user-generated catalog (\url{https://jkosciessa.github.io/datalad_cat_test}). Further results included a string of new issues focusing on improving user experience, detailed notes on how to generate a catalog from scratch, and code additions to allow the loading of local web-assets so that any generated catalog can also be viewed offline.
