Merge pull request #167 from FZJ-INM1-BDA/documentation
Add released example datasets
dickscheid authored May 23, 2023
2 parents 0d03ffb + 493ad5f commit f006924
Showing 14 changed files with 116 additions and 15 deletions.
2 changes: 1 addition & 1 deletion mkdocs.yml
@@ -39,7 +39,7 @@ nav:
- 'tutorial.md'
- 'video.md'
- How to:
- NIfTI conversion: 'nifti_conversion.md'
- 'nifti_conversion.md'
- Contact us:
- Support: 'mailto:[email protected]?subject=[voluba]'
- GitHub Issues: 'https://github.com/FZJ-INM1-BDA/voluba/issues'
6 changes: 3 additions & 3 deletions user_docs/alignment.md
@@ -11,7 +11,7 @@ These steps can be performed and repeated in arbitrary order and are supported t

## Initial adjustment of coordinate axes orientations and scaling

The axis orientations and the voxel scaling can be modified using the **Transform Incoming Volume** dialog box, which also allows precise adjustment of position and rotation. The dialog is launched using the button with the brain icon on the left of the user interface:
The axis orientations and the voxel scaling can be modified using the **Transform incoming volume** dialog box, which also allows precise adjustment of position and rotation. The dialog is launched using the button with the brain icon on the left of the user interface:

![snippet](images/transformation.png)

@@ -21,7 +21,7 @@ Whenever you want to secure a transformation parameter, click on the lock. This

## Interactive translation and rotation

As mentioned above you can adjust the incoming volume's position and orientation by entering values into the **Transform Incoming Volume** dialog. Additionally, voluba allows direct manipulation of the input data's position and orientation using the mouse pointer. The position is changed by clicking & dragging the incoming volume in any of the orthogonal views.
As mentioned above you can adjust the incoming volume's position and orientation by entering values into the **Transform incoming volume** dialog. Additionally, voluba allows direct manipulation of the input data's position and orientation using the mouse pointer. The position is changed by clicking & dragging the incoming volume in any of the orthogonal views.
By pressing shift while clicking & dragging, a rotation is applied.

![gif](gifs/transform.gif)
@@ -63,7 +63,7 @@ In overlay mode, you enter two landmarks sequentially in the same window. They a

voluba tracks every step that you execute during the alignment process.
This includes translations, axis flips, rotations, scaling, and application of affine matrices that have been estimated from landmark pairs.
You can inspect and navigate the process using the history browser, which is accessible via the **history browser** button on the left:
You can inspect and navigate the process using the history browser, which is accessible via the **History browser** button on the left:

![snippet](images/history.png)

10 changes: 9 additions & 1 deletion user_docs/examples.md
@@ -1,3 +1,11 @@
# Datasets anchored with voluba

Axer, M., Poupon, C., & Costantini, I. (2020). Fiber structures of a human hippocampus based on joint DMRI, 3D-PLI, and TPFM acquisitions [Data set]. Human Brain Project Neuroinformatics Platform. [https://doi.org/10.25493/JQ30-E08](https://doi.org/10.25493/JQ30-E08)
Axer, M., Poupon, C., & Costantini, I. (2020). Fiber structures of a human hippocampus based on joint DMRI, 3D-PLI, and TPFM acquisitions [Data set]. Human Brain Project Neuroinformatics Platform. [https://doi.org/10.25493/JQ30-E08](https://doi.org/10.25493/JQ30-E08)
**Explore the dataset in** [![icon](images/EBRAINS_logo.png)](https://search.kg.ebrains.eu/instances/Dataset/b08a7dbc-7c75-4ce7-905b-690b2b1e8957)

Huysegoms, M., Bludau, S., Oliveira, S., Upschulte, E., Dickscheid, T., & Amunts, K. (2022). Cellular level 3D reconstructed volumes at 1µm resolution within the BigBrain occipital cortex (v1.0) [Data set]. EBRAINS. [https://doi.org/10.25493/K8Q7-CG9](https://doi.org/10.25493/K8Q7-CG9)
**Explore the dataset in** [![icon](images/EBRAINS_logo.png)](https://search.kg.ebrains.eu/instances/d71d369a-c401-4d7e-b97a-3fb78eed06c5)


Eckermann, M., & Salditt, T. (2022). 3d virtual histology of the human hippocampus based on phase-contrast computed-tomography [Data set]. Zenodo. [https://doi.org/10.5281/ZENODO.5658994](https://doi.org/10.5281/ZENODO.5658994)
**Explore the dataset in** [![icon](images/EBRAINS_logo.png)](https://search.kg.ebrains.eu/instances/7e065c31-aff6-4211-b777-dcb5050b4617)
2 changes: 1 addition & 1 deletion user_docs/extra.css
@@ -32,7 +32,7 @@ img[alt="logo"] {
}

img[alt="icon"] {
height: 25px;
height: 20px;
margin-right: 2px;
margin-left: 2px;
}
Binary file added user_docs/images/export.png
Binary file added user_docs/images/load.png
Binary file added user_docs/images/publish.png
Binary file modified user_docs/images/results.png
Binary file added user_docs/images/save.png
Binary file added user_docs/images/share.png
Binary file added user_docs/images/view.png
4 changes: 2 additions & 2 deletions user_docs/index.md
@@ -10,15 +10,15 @@ The main idea behind voluba is to allow interactive alignment to microscopic res
Instead, voluba allows you to upload your own volume of interest - which is typically significantly smaller - to a private space on the server, perform the interactive image alignment in your web browser, and retrieve the resulting parameters of the spatial alignment. The dataset will be linked to your ORCID iD and not be shared or exposed to anybody else.

voluba offers a highly interactive workflow.
First, you log in with their ORCID or EBRAINS account to upload a dataset into your private working space for the anchoring process.
First, you log in with your ORCID or EBRAINS account to upload a dataset into your private working space for the anchoring process.
You can choose from three different reference volumes: the microscopic-resolution human brain model ["BigBrain"](https://search.kg.ebrains.eu/instances/Dataset/d07f9305-1e75-4548-a348-b155fb323d31), the Waxholm space template of the Sprague Dawley rat, and the Allen mouse brain.
The input volume is presented as a graphical overlay in a 3D view with orthogonal cross sections, and you can optimize the visualization by customizing contrast, brightness, colormaps, and intensity thresholds.
You can then directly manipulate the relative position and orientation of the input volume with your mouse pointer, and adjust voxel scaling and axis orientations to obtain a rigid transformation.
Then, you can use voluba's 3D landmark editor to refine the transformation by specifying pairs of corresponding points between the volumes, further facilitated by an optional side-by-side view.
The landmarks enable a recalculation of the linear transformation matrix with additional degrees of freedom, including shearing.
Alignment actions can be performed and repeated in arbitrary order, supported through a history browser which allows you to undo individual anchoring steps.

You can download the resulting transformation parameters in json format, open the aligned image can be in the atlas viewer [siibra-explorer](https://atlases.ebrains.eu/viewer/go/bigbrain) to see it in the anatomical context of the [EBRAINS human brain atlas](https//ebrains.eu/services/atlases), and also obtain a private URL to your anchored image that you can share with colleagues.
You can download the resulting transformation parameters in JSON format or export your uploaded image data to NIfTI format with an updated affine. Furthermore, the aligned image can be opened in the atlas viewer [siibra-explorer](https://atlases.ebrains.eu/viewer/go/bigbrain) to see it in the anatomical context of the [EBRAINS human brain atlas](https://ebrains.eu/services/atlases). To share your anchored image with colleagues, you can also obtain a private URL. If you would like to publish the transformation parameters of the anchored result to EBRAINS, voluba offers an automatic workflow for submission.

voluba uses [Vue](https://vuejs.org) for the reactive UI layer, [Vuex](https://vuex.vuejs.org/) for state management, and [Bootstrap 4](https://getbootstrap.com/docs/4.0) for layout.

101 changes: 97 additions & 4 deletions user_docs/results.md
@@ -6,10 +6,103 @@ The matrix describes how a point in the coordinate system of the original input
!!! Info
voluba has used the transformation to display the image volume in superimposition with the reference volume - it did not actually modify the input dataset.

The result can be used in several ways, which are all accessible via the **Share/Save transformation results** button on the left side of the user interface:
The result can be used in several ways, which are all accessible via the **Use result** button on the left side of the user interface:

![snippet](images/results.png)

- Export the affine transformation parameters in a simple, plaintext JSON file for sharing and reuse. The stored transformation file can be re-imported in voluba and be utilized in other tools and workflows.
- Load your input dataset as a semi-transparent overlay in the interactive atlas viewer siibra-explorer, to see it in the comprehensive context of the brain atlas in reference space. This way, you can investigate the aligned image volume relative to all the data that is registered with the corresponding reference atlas.
- Generate a private URL to view and share the anchored volume as a remote dataset in neuroglancer-based image viewers.
* ![]()![icon](images/export.png): If you uploaded your own image data, download the image data with an updated NIfTI affine that includes the applied transformations.
* ![]()![icon](images/save.png)/![icon](images/load.png): Export the affine transformation parameters in a simple, plaintext JSON file for sharing and reuse. The stored transformation file can be re-imported in voluba and be utilized in other tools and workflows.
* ![]()![icon](images/view.png): Load your input dataset as a semi-transparent overlay in the interactive atlas viewer siibra-explorer, to see it in the comprehensive context of the brain atlas in reference space. This way, you can investigate the aligned image volume relative to all the data that is registered with the corresponding reference atlas.
* ![]()![icon](images/publish.png): Publish the transformation parameters of your anchoring result to EBRAINS and optionally connect them to the corresponding EBRAINS dataset.
* ![]()![icon](images/share.png): Generate a private URL to view and share the anchored volume as a remote dataset in neuroglancer-based image viewers.

## Specification of the voluba transformation matrix

In this section you can find a detailed description of the transformation matrix produced by voluba and how you can convert it.
An example transformMatrix.json from voluba looks like this:

```json
{
    "incomingVolume": "Hippocampus",
    "referenceVolume": "BigBrain (2015)",
    "version": 1,
    "@type": "https://voluba.apps.hbp.eu/@types/transform",
    "transformMatrixInNm": [
        [
            0.03409423679113388,
            0,
            0,
            11798058
        ],
        [
            0,
            0.007783324457705021,
            -0.04088926315307617,
            5169337.5
        ],
        [
            0,
            0.0331939272582531,
            0.009587729349732399,
            -30914778
        ],
        [
            0,
            0,
            0,
            1
        ]
    ]
}
```

* `incomingVolume`: the anchored (incoming) volume
* `referenceVolume`: the reference space that the incoming volume was anchored to
* `version`: schema version
* `@type`: schema used to validate a transformMatrix.json
* `transformMatrixInNm`: $4\times4$ affine matrix that encodes, in nm, the transformations the user applied to the incoming volume in voluba

!!! Info
In the following section the `transformMatrixInNm` will be called _voluba affine matrix_ or `voluba_affine`.
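
To work with the file programmatically, it can be parsed like any other JSON document. A minimal sketch (the file name `transformMatrix.json` is illustrative) could look like this:

```python
import json

import numpy as np

# Parse a transformMatrix.json downloaded from voluba (illustrative file name).
with open("transformMatrix.json") as f:
    spec = json.load(f)

# 4x4 voluba affine matrix in nanometers.
voluba_affine = np.array(spec["transformMatrixInNm"], dtype=float)

print(spec["incomingVolume"], "->", spec["referenceVolume"])
print(voluba_affine.shape)  # (4, 4)
```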

### Relation between the updated NIfTI affine and voluba affine matrix

voluba allows you to download your uploaded volume again with an updated NIfTI affine matrix. This updated NIfTI affine includes the original NIfTI affine (voxel to physical) as well as the transformations applied in voluba (incoming to reference volume). To explain how the updated NIfTI affine is calculated, we need to consider the following:

voluba shows the reference and incoming volume superimposed in physical space. Therefore, both volumes are automatically transformed to physical space using their respective NIfTI affine matrices when you start a new anchoring workflow. The transformations that you interactively apply afterwards are encoded in the voluba affine matrix. Keep in mind that this matrix does **not** include the initially applied NIfTI affine. Also, the voluba affine matrix uses nm as its spatial unit. To convert the voluba affine into a NIfTI-compatible affine, we need to apply the following conversion:

```python
# voluba affine to an affine of a different format
for n in range(3):
    converted_voluba_affine[n][n] = voluba_affine[n][n] * voxelSize[n]
    converted_voluba_affine[n][3] = voluba_affine[n][3] / spatialUnits.toNanometers() + (voxelSize[n] / 2)
```

* `voxelSize`: voxel size (i.e. image spacing or resolution) of the incoming volume
* `spatialUnits.toNanometers()`: conversion factor from the spatial units of an affine in the desired format (the units of `voxelSize`) to nm. For example, in NIfTI format the spatial unit is usually mm, so the factor equals $10^6$.

Concatenating the converted voluba affine matrix with the original NIfTI affine matrix of the uploaded volume gives us the updated NIfTI affine. Thus, the relation is given as follows:

```python
# matrix product (e.g. the "@" operator on numpy arrays)
updated_NIfTI_affine = converted_voluba_affine @ uploaded_NIfTI_affine
```
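
As a concrete illustration, the following sketch applies this relation with `nibabel` and `numpy`. The file names and variable names are illustrative and not part of voluba; the conversion simply follows the rule given above, leaving the off-diagonal entries untouched.

```python
import json

import nibabel as nib
import numpy as np

# Illustrative inputs: the originally uploaded volume and the
# transformMatrix.json downloaded from voluba.
img = nib.load("uploaded_volume.nii.gz")
with open("transformMatrix.json") as f:
    voluba_affine = np.array(json.load(f)["transformMatrixInNm"], dtype=float)

voxel_size = img.header.get_zooms()[:3]  # spacing of the incoming volume (usually mm)
nm_per_unit = 1e6                        # mm -> nm conversion factor

# Convert the voluba affine (nm) into NIfTI-compatible units (rule from above);
# off-diagonal entries are kept as they are.
converted_voluba_affine = voluba_affine.copy()
for n in range(3):
    converted_voluba_affine[n, n] = voluba_affine[n, n] * voxel_size[n]
    converted_voluba_affine[n, 3] = voluba_affine[n, 3] / nm_per_unit + voxel_size[n] / 2

# Concatenate with the original NIfTI affine to obtain the updated affine.
updated_affine = converted_voluba_affine @ img.affine

# Optionally save a copy of the volume with the updated affine.
nib.save(nib.Nifti1Image(np.asanyarray(img.dataobj), updated_affine), "aligned_volume.nii.gz")
```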

### Convert a NIfTI affine to voluba affine matrix

Sometimes you may have already applied transformations to an image volume with a different tool. For example, you may have a histology image volume as well as an associated segmentation that you aligned to the histology. If you now want to anchor both image volumes to a reference space, you can make use of the segmentation-to-histology transformation. We do **not** recommend aligning each volume independently with voluba, as the anchoring is done interactively and the segmentation will most probably not match the histology afterwards. Instead, you can use voluba to align the histology volume to the reference space and adjust the resulting voluba affine matrix for the segmentation. For this, you will need to convert the segmentation-to-histology NIfTI affine into a voluba matrix:

```python
# NIfTI affine to voluba affine
for n in range(3):
    voluba_affine[n][n] = affine_matrix[n][n] / voxelSize[n]
    voluba_affine[n][3] = (affine_matrix[n][3] - (voxelSize[n] / 2)) * spatialUnits.toNanometers()
```

* `voxelSize`: voxel size (i.e. image spacing or resolution) of the image volume
* `spatialUnits.toNanometers()`: conversion factor from the spatial units of a NIfTI affine (the units of `voxelSize`) to nm. For example, in NIfTI format the spatial unit is usually mm, so the factor equals $10^6$.

After that, you can simply concatenate the voluba affine matrix of the histology volume with the converted segmentation-to-histology affine matrix. The resulting affine matrix can then be written into the transformMatrix.json for the segmentation volume and be used in voluba.

```python
# matrix product (e.g. the "@" operator on numpy arrays)
voluba_seg = voluba_hist @ converted_seg2hist
```
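
Under the same assumptions as above (illustrative file names and voxel sizes, `numpy` for the matrix algebra), the whole adjustment could be sketched like this:

```python
import json

import numpy as np

# voluba affine of the anchored histology volume (from its transformMatrix.json).
with open("transformMatrix_histology.json") as f:
    voluba_hist = np.array(json.load(f)["transformMatrixInNm"], dtype=float)

# Segmentation-to-histology NIfTI affine produced by another tool (4x4, mm units)
# and the voxel size of the segmentation volume in mm -- both illustrative.
seg2hist = np.loadtxt("seg_to_hist_affine.txt")
voxel_size = [0.021, 0.021, 0.021]
nm_per_unit = 1e6  # mm -> nm

# Convert the NIfTI-style affine into voluba form, following the rule above.
converted_seg2hist = seg2hist.copy()
for n in range(3):
    converted_seg2hist[n, n] = seg2hist[n, n] / voxel_size[n]
    converted_seg2hist[n, 3] = (seg2hist[n, 3] - voxel_size[n] / 2) * nm_per_unit

# Concatenate and write a transformMatrix.json for the segmentation volume,
# mirroring the structure of the example shown further above.
voluba_seg = voluba_hist @ converted_seg2hist
spec = {
    "incomingVolume": "Segmentation",
    "referenceVolume": "BigBrain (2015)",
    "version": 1,
    "@type": "https://voluba.apps.hbp.eu/@types/transform",
    "transformMatrixInNm": voluba_seg.tolist(),
}
with open("transformMatrix_segmentation.json", "w") as f:
    json.dump(spec, f, indent=2)
```
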
6 changes: 3 additions & 3 deletions user_docs/tutorial.md
@@ -25,7 +25,7 @@ In this section, we will transform the Hippocampus volume to a (more or less) an

### Rough positioning

You can either perform all transformations with the **Transform Incoming Volume** dialog (brain icon on the left) or you can use your mouse for translation (Drag & Drop) and rotation (Shift & Drag).
You can either perform all transformations with the **Transform incoming volume** dialog (brain icon on the left) or you can use your mouse for translation (Drag & Drop) and rotation (Shift & Drag).

The first thing that stands out is that the coronal view of the incoming Hippocampus volume is displayed in the axial view of the BigBrain template (bottom left). To transform this plane to the BigBrain coronal view, we need to rotate the volume around the x-axis by 90°.

@@ -39,7 +39,7 @@ Next, we will roughly move the volume to the BigBrain Hippocampus by translating

The current anchoring result already looks reasonable. But we can make it even more precise. To refine the transformation matrix, we now enter pairs of landmarks. For selecting landmarks, the two-pane mode is very useful. You can activate it by clicking the **mode** button on the top right. Now you will see the reference template on the left and the incoming volume on the right side.

Open the **Edit Landmarks** dialog by clicking on the pin icon on the left. To add a landmark select the plus icon. You are now asked to add a landmark to the reference volume first. After that you can position a pin on the corresponding location in the incoming volume on the right. The more landmarks you add, the more accurate the recalculated transformation matrix will be. To start the calculation, select a transformation type and click the calculator icon. We choose `Affine` here.
Open the **Edit landmarks** dialog by clicking on the pin icon on the left. To add a landmark, select the plus icon. You are now asked to add a landmark to the reference volume first. After that, you can position a pin on the corresponding location in the incoming volume on the right. The more landmarks you add, the more accurate the recalculated transformation matrix will be. To start the calculation, select a transformation type and click the calculator icon. We choose `Affine` here.

![gif](gifs/tutorial_landmarks.gif)

@@ -49,7 +49,7 @@ By switching back to overlay mode, you can inspect the resulting alignment. If y

## Using the result

You can now for example download and reuse the parameters of the affine transformation matrix or view the anchoring result in the interactive atlas viewer siibra-explorer. Click on the **Save/Share Transformation Results** and select the brain icon to open siibra-explorer.
You can now for example download and reuse the parameters of the affine transformation matrix or view the anchoring result in the interactive atlas viewer siibra-explorer. Click on the **Use result** button and select the brain icon to open siibra-explorer.

![gif](gifs/tutorial_explorer.gif)

