Commit

correct grammar
michal367 committed Apr 2, 2024
1 parent a18c55c commit d19baa7
Showing 17 changed files with 172 additions and 167 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -1,7 +1,7 @@
# Yet Another Gamma Index Tool

-YAGIT is a library written in C++ for performing efficient comparison of two images
-containing radiation dose distributions using 2D, 2.5D and 3D gamma index concept.
+YAGIT is a library written in C++ for performing efficient comparisons of two images
+containing radiation dose distributions using the 2D, 2.5D, and 3D gamma index concepts.

## Documentation

59 changes: 31 additions & 28 deletions docs/source/algorithms.rst
@@ -1,48 +1,50 @@
Algorithms
==========

-YAGIT implements two methods to calculate the gamma index. Wendling method is much faster than Classic method.
+YAGIT implements two methods to calculate the gamma index: the classic method and the Wendling method.
+The latter is significantly faster.


Classic method
--------------

-Classic method was introduced with the paper that introduced gamma index concept by Low et al. [1]_ in 1998.
+The classic method was introduced with the paper that introduced the gamma index concept by Low et al. [1]_ in 1998.
This is a brute-force method, and because of that, it is very slow. It is based on calculating
-gamma function for each pair of points in reference image and evaluated image. This results in a time complexity of
-:math:`O(n^k * n^k) = O(n^{2k})`, where :math:`n` is the size of each axis of the input data and :math:`k` is the
-dimensionality of the data. For example 3D gamma index has :math:`O(n^6)` time complexity.
+the gamma function for each pair of points in the reference image and the evaluated image.
+This results in a time complexity of :math:`O(n^k * n^k) = O(n^{2k})`,
+where :math:`n` is the size of each axis of the input data and :math:`k` is the dimensionality of the data.
+For example, the 3D gamma index has :math:`O(n^6)` time complexity.
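
For reference, both methods minimize the gamma function introduced by Low et al. [1]_; in the usual notation
(assumed here to be consistent with the rest of these docs) it reads

.. math::
    \Gamma(\vec{r_r}, \vec{r_e}) = \sqrt{\frac{r^2(\vec{r_r}, \vec{r_e})}{\Delta d^2} + \frac{\delta^2(\vec{r_r}, \vec{r_e})}{\Delta D^2}}

where :math:`r(\vec{r_r}, \vec{r_e})` is the distance between the two points,
:math:`\delta(\vec{r_r}, \vec{r_e})` is the dose difference between them,
:math:`\Delta d` is the DTA acceptance criterion, and :math:`\Delta D` is the dose difference acceptance criterion.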


The subsequent steps of the algorithm are as follows:

.. rst-class:: list

-#. For each reference point :math:`\vec{r_r}` calculate gamma index :math:`\gamma(\vec{r_r})`:
+#. For each reference point :math:`\vec{r_r}`, calculate the gamma index :math:`\gamma(\vec{r_r})`:

-A. For each evaluated point :math:`\vec{r_e}` calculate gamma function :math:`\Gamma(\vec{r_r}, \vec{r_e})`.
-B. Choose the minimum value of calculated :math:`\Gamma(\vec{r_r}, \vec{r_e})`
+A. For each evaluated point :math:`\vec{r_e}`, calculate the gamma function :math:`\Gamma(\vec{r_r}, \vec{r_e})`.
+B. Choose the minimum value of the calculated :math:`\Gamma(\vec{r_r}, \vec{r_e})`
as the gamma index :math:`\gamma(\vec{r_r})`.

#. In the end, the gamma index image is obtained.
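
A minimal C++ sketch of the brute-force loop described above (2D case) is shown below.
It only illustrates the classic method and is not YAGIT's actual API;
the ``Image2D`` struct and the fixed (global) dose-difference normalization are assumptions made for brevity.

.. code-block:: cpp

    #include <algorithm>
    #include <cmath>
    #include <limits>
    #include <vector>

    // Hypothetical 2D dose image on a regular grid (not YAGIT's data structure).
    struct Image2D {
        std::vector<float> data;   // row-major dose values
        int rows, cols;
        float y0, x0;              // position of the first pixel (mm)
        float dy, dx;              // pixel spacing (mm)
    };

    // Gamma function of Low et al.: spatial term scaled by the DTA criterion dd (mm)
    // plus dose term scaled by the dose difference criterion dD.
    inline float gammaFunction(float yr, float xr, float doseRef,
                               float ye, float xe, float doseEval,
                               float dd, float dD) {
        const float dist2 = (ye - yr) * (ye - yr) + (xe - xr) * (xe - xr);
        const float diff = doseEval - doseRef;
        return std::sqrt(dist2 / (dd * dd) + (diff * diff) / (dD * dD));
    }

    // Brute-force 2D gamma index: the minimum of the gamma function over all
    // evaluated points, for every reference point -- O(n^2 * n^2) for n x n images.
    std::vector<float> classicGamma2D(const Image2D& ref, const Image2D& eval,
                                      float dd, float dD) {
        std::vector<float> gamma(ref.data.size());
        for (int jr = 0; jr < ref.rows; ++jr) {
            for (int ir = 0; ir < ref.cols; ++ir) {
                const float yr = ref.y0 + jr * ref.dy;
                const float xr = ref.x0 + ir * ref.dx;
                const float doseRef = ref.data[jr * ref.cols + ir];
                float minGamma = std::numeric_limits<float>::infinity();
                for (int je = 0; je < eval.rows; ++je) {
                    for (int ie = 0; ie < eval.cols; ++ie) {
                        const float ye = eval.y0 + je * eval.dy;
                        const float xe = eval.x0 + ie * eval.dx;
                        minGamma = std::min(minGamma,
                            gammaFunction(yr, xr, doseRef, ye, xe,
                                          eval.data[je * eval.cols + ie], dd, dD));
                    }
                }
                gamma[jr * ref.cols + ir] = minGamma;
            }
        }
        return gamma;
    }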


-To speed up these calculations for 3D images, a 2.5D method was introduced, which involves calculating the gamma index
-in 2D version for each slice of the 3D image separately. This results in a time complexity of
+To speed up these calculations for 3D images, a 2.5D version was introduced, which involves calculating the gamma index
+in a 2D version for each slice of the 3D image separately. This results in a time complexity of
:math:`O(n^3 * n^2) = O(n^5)`. Unfortunately, it returns less accurate results.

-A commonly used step in classic method to obtain more accurate results is the interpolation of evaluated image
+A commonly used step in the classic method to obtain more accurate results is the interpolation of the evaluated image
before initiating gamma index calculations. This results in larger input data, further increasing the computation time.
-Low and Dempsey propose that spacing in the evaluated image should be less than or equal to :math:`\frac{1}{3}`
+Low and Dempsey propose that the spacing in the evaluated image should be less than or equal to :math:`\frac{1}{3}`
of the DTA acceptance criterion [2]_.


Wendling method
---------------

In 2007, a new fast method was introduced by Wendling et al. [3]_.
-It is based on limiting the searched area in the evaluated image to a circle (in the case of 2D and 2.5D version)
-or a sphere (in the case of 3D version).
+It is based on limiting the searched area in the evaluated image to a circle (in the case of the 2D and 2.5D versions)
+or a sphere (in the case of the 3D version).
There is a higher probability of finding the minimum value of the gamma function in the vicinity of the reference point
than in some distant place.

@@ -52,23 +54,24 @@ than in some distant place.
:align: center
:scale: 133%

-Limited area (circle) on a 2D evaluated image. Gamma function is calculated only for black points.
+Limited area (circle) on a 2D evaluated image. The gamma function is calculated only for black points.
Values at those points are interpolated on-the-fly.

-Such a limited area has a maximum search radius and step size, which are parameters of this method.
+Such a limited area has a maximum search radius and a step size, which are parameters of this method.
Evaluated points, which are checked, are located on a grid with intervals equal to the step size,
and their distance from the center does not exceed the search radius.
-Wendling et al. propose that the step size should be around :math:`\frac{1}{10}` of the DTA acceptance criterion
+Wendling et al. propose that the step size should be :math:`\frac{1}{10}` of the DTA acceptance criterion
for precise results.
The values at points that do not exist in the evaluated image are determined on-the-fly using interpolation
-(e.g. bilinear for 2D and trilinear for 3D).
+(e.g., bilinear for 2D and trilinear for 3D).
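
As an illustration of this on-the-fly interpolation, a bilinear sketch for the 2D case is given below.
The ``Grid2D`` struct is a stand-in for the evaluated image, not YAGIT's actual data structure,
and points outside the grid are left unhandled for brevity.

.. code-block:: cpp

    #include <cmath>
    #include <vector>

    // Stand-in for the evaluated image: row-major dose values on a regular 2D grid.
    struct Grid2D {
        std::vector<float> data;
        int rows, cols;
        float y0, x0;   // position of the first pixel (mm)
        float dy, dx;   // pixel spacing (mm)
    };

    // Bilinear interpolation of the dose at an arbitrary in-grid point (y, x).
    float bilinearDose(const Grid2D& img, float y, float x) {
        const float fy = (y - img.y0) / img.dy;
        const float fx = (x - img.x0) / img.dx;
        const int j = static_cast<int>(std::floor(fy));
        const int i = static_cast<int>(std::floor(fx));
        const float ty = fy - j;   // fractional offsets within the cell
        const float tx = fx - i;
        auto at = [&](int jj, int ii) { return img.data[jj * img.cols + ii]; };
        // Weighted average of the four surrounding grid values.
        return (1 - ty) * ((1 - tx) * at(j, i)     + tx * at(j, i + 1)) +
                     ty * ((1 - tx) * at(j + 1, i) + tx * at(j + 1, i + 1));
    }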

Some optimizations of on-the-fly interpolation can be achieved by resampling the evaluated image onto the grid
of the reference image before initiating gamma index calculations.
However, YAGIT doesn't perform this step due to two reasons that can lead to less accurate results.
-Firstly, this approach involves double interpolation -- first during the resampling process
-and second during on-the-fly interpolation.
-Second interpolation uses interpolated points from first interpolation, which can result in less accurate calculations.
+Firstly, this approach involves double interpolation --
+first during the resampling process and second during on-the-fly interpolation.
+The second interpolation uses interpolated points from the first interpolation,
+which can result in less accurate results.
Another reason is the fact that the evaluated image we start with can have a denser grid than the reference image,
and by resampling it, we would lose a lot of information.

@@ -78,9 +81,9 @@ and by resampling it, we would lose a lot of information.
:align: center
:scale: 133%

-Squared distances from the center in limited area (circle) on a 2D evaluated image.
+Squared distances from the center in the limited area (circle) on a 2D evaluated image.

-An important part of this method is the order of the traversing through points in the limited area.
+An important part of this method is the order of traversing through points in the limited area.
We start from the point at the center of the circle/sphere and traverse through subsequent points that are
increasingly distant from the center.
This order and the squared distances only need to be calculated once -- at the very beginning --
@@ -98,11 +101,11 @@ The subsequent steps of the algorithm are as follows:
#. Resample the evaluated image onto the grid of the reference image (YAGIT doesn't do this).
#. Create a list of points located within the circle/sphere, sorted in ascending order by their distance from the center
of the circle/sphere.
-#. For each reference point :math:`\vec{r_r}` calculate gamma index :math:`\gamma(\vec{r_r})` searching within
+#. For each reference point :math:`\vec{r_r}`, calculate the gamma index :math:`\gamma(\vec{r_r})` searching within
the limited region on the evaluated image:

-A. Start searching in :math:`\vec{r_e} = \vec{r_r}` and move through the previously created list of points,
-which are increasingly further from the initial point:
+A. Start searching at :math:`\vec{r_e} = \vec{r_r}` and move through the previously created list of points,
+which are increasingly farther from the initial point:

a. If the value :math:`r(\vec{r_r}, \vec{r_e}) / \Delta d` is greater than or equal to the current minimum
found value of the gamma function :math:`\Gamma` for :math:`\vec{r_r}`, then stop the search.
@@ -117,8 +120,8 @@

The time complexity of such an algorithm is :math:`O(n^k * m^k)`,
where :math:`n` is the number of voxels along each axis,
-:math:`k` is the dimensionality of the data
-and :math:`m`` is the number of points along the radius of the circle/sphere.
+:math:`k` is the dimensionality of the data,
+and :math:`m` is the number of points along the radius of the circle/sphere.
Typically, the algorithm only traverses through a small portion of points within the circle/sphere,
so the average complexity is better.
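
A condensed C++ sketch of this search (2D case) is given below. It only illustrates the two ideas discussed above --
the offsets precomputed once and sorted by distance, and the early-stopping test on
:math:`r(\vec{r_r}, \vec{r_e}) / \Delta d` -- and is not YAGIT's implementation.
The interpolating dose lookup is passed in as a callable (for example, a bilinear routine such as the one sketched earlier).

.. code-block:: cpp

    #include <algorithm>
    #include <cmath>
    #include <limits>
    #include <vector>

    struct Offset { float dy, dx, dist; };   // offset from the centre and its distance

    // Precompute, once, the grid of offsets inside the search circle,
    // sorted in ascending order by their distance from the centre.
    std::vector<Offset> makeSortedOffsets(float maxSearchDistance, float stepSize) {
        std::vector<Offset> offsets;
        const int m = static_cast<int>(maxSearchDistance / stepSize);
        for (int j = -m; j <= m; ++j) {
            for (int i = -m; i <= m; ++i) {
                const float dy = j * stepSize, dx = i * stepSize;
                const float dist = std::sqrt(dy * dy + dx * dx);
                if (dist <= maxSearchDistance) offsets.push_back({dy, dx, dist});
            }
        }
        std::sort(offsets.begin(), offsets.end(),
                  [](const Offset& a, const Offset& b) { return a.dist < b.dist; });
        return offsets;
    }

    // Gamma index of a single reference point (yr, xr) with dose doseRef.
    // interpolatedDose(y, x) is any callable returning the evaluated dose at (y, x).
    template <typename DoseAt>
    float wendlingGammaAt(float yr, float xr, float doseRef,
                          const std::vector<Offset>& offsets, DoseAt interpolatedDose,
                          float dd, float dD) {
        float minGamma = std::numeric_limits<float>::infinity();
        for (const Offset& o : offsets) {
            if (o.dist / dd >= minGamma)   // the spatial term alone can no longer
                break;                     // improve the minimum -- stop the search
            const float diff = interpolatedDose(yr + o.dy, xr + o.dx) - doseRef;
            const float g = std::sqrt((o.dist * o.dist) / (dd * dd) +
                                      (diff * diff) / (dD * dD));
            minGamma = std::min(minGamma, g);
        }
        return minGamma;
    }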

41 changes: 20 additions & 21 deletions docs/source/data_representation.rst
@@ -4,8 +4,8 @@ Data representation
Coordinate system
-----------------

-DICOM and MetaImage use patient (also known as anatomical) coordinate system that is relative to the patient.
-If patient orientation on the couch is changed, then the patient coordinate system is rotated accordingly.
+DICOM and MetaImage use the patient (also known as anatomical) coordinate system that is relative to the patient.
+If the patient's orientation on the couch is changed, then the patient coordinate system is rotated accordingly.
Other coordinate systems used in radiotherapy are the couch (also known as fixed or room) coordinate system
and the beam (also known as gantry) coordinate system.

@@ -18,8 +18,8 @@

L -- Left, R -- Right, A -- Anterior, P -- Posterior, S -- Superior, I -- Inferior.

-Specifically, DICOM uses LPS (right to Left, anterior to Posterior, inferior to Superior) coordinate system.
-MetaImage uses the same system, but it employs "from" notation instead of "to" notation, and thus denotes it as RAI
+Specifically, DICOM uses the LPS (right to Left, anterior to Posterior, inferior to Superior) coordinate system.
+MetaImage uses the same system, but it employs "from" notation instead of "to" notation, and thus it denotes it as RAI
(Right to left, Anterior to posterior, Inferior to superior).
YAGIT also uses this system.

@@ -33,17 +33,17 @@ It is also worth noting that LPS is a right-handed coordinate system.
Image planes
------------

-The 3D image can be viewed from three perspectives: axial, coronal and sagittal.
+The 3D image can be viewed from three perspectives: axial, coronal, and sagittal.
Axial (also known as transverse or horizontal) separates the superior from the inferior.
Coronal (also known as frontal) separates the anterior from the posterior.
Sagittal (also known as longitudinal) separates the left from the right.

.. figure:: _static/images/image_planes.svg
-:alt: Image planes (axial, coronal and sagittal)
+:alt: Image planes (axial, coronal, and sagittal)
:align: center
:scale: 135%

-3D image planes -- axial, coronal and sagittal -- with coordinates consistent with LPS.
+3D image planes -- axial, coronal, and sagittal -- with coordinates consistent with LPS.
The black filled squares represent the first voxel of an image.


@@ -58,11 +58,10 @@ The ith column, jth row, kth frame map to the coordinates xyz for the respective
A 3D image is stored in a file slice-by-slice using one out of three planes.
The plane used depends on the third letter of the coordinate system:
S or I for axial, A or P for coronal, L or R for sagittal.
-In the LPS coordinate system, slices are stored using the axial plane.
+In the LPS coordinate system, slices are stored in the axial plane.

Note that when calculating the gamma index, YAGIT does not take into account whether the plane of the image was changed
-after reading from a file.
-It treats all images as if they were stored in the axial plane.
+after reading from a file. It treats all images as if they were stored in the axial plane.


Image position and voxel spacing
@@ -79,14 +78,14 @@ but if all three spacings are equal, the image is said to have isotropic voxel s

Additionally, DICOM allows for unevenly distributed spacing along the z-axis (spacing between slices),
as the spacing may be greater in areas that are farther from the main region of interest.
-YAGIT doesn't support this kind of images.
+YAGIT doesn't support this kind of image.

.. figure:: _static/images/position_and_spacing.svg
-:alt: Image position and voxel spacing
+:alt: Image position and pixel spacing
:align: center
:scale: 130%

-Example of image position and pixel spacing for a 2D image.
+An example of image position and pixel spacing for a 2D image.

:math:`T_x, T_y` -- image position, :math:`S_x, S_y` -- pixel spacing.

@@ -100,7 +99,7 @@ and whether they are oriented head-first or feet-first towards the machine.
The most basic orientation is HFS (Head First Supine).
In this case, the patient is positioned head-first towards the machine and lying on their back.

-The image orientation is provided in the form of direction cosines of the first row and the first column,
+The image orientation is provided in the form of the direction cosines of the first row and the first column,
as well as the normal vector to these two directions.

.. math::
@@ -146,15 +145,15 @@ There are 8 basic image orientations:
- FFDL -- Feet First Decubitus Left ``[ 0 1 0; 1 0 0; 0 0 -1]``
- FFDR -- Feet First Decubitus Right ``[ 0 -1 0; -1 0 0; 0 0 -1]``

-YAGIT supports only the HFS image orientation, for now.
+YAGIT supports only the HFS image orientation for now.


Calculating xyz coordinates
---------------------------

To determine the xyz coordinates from the indexes ijk (ith column, jth row, kth frame),
the following formula should be used,
-incorporating rotation (image orientation), scaling (voxel spacing) and translation (image position).
+incorporating rotation (image orientation), scaling (voxel spacing), and translation (image position).

.. math::
\begin{bmatrix}
@@ -193,7 +192,7 @@ incorporating rotation (image orientation), scaling (voxel spacing) and translat
| :math:`T_x, T_y, T_z` -- xyz image positions of the first voxel.
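
A small C++ sketch of this mapping is given below. The ``Geometry`` struct and its field names are
illustrative assumptions rather than YAGIT's API, but the arithmetic follows the
rotation--scaling--translation formula above.

.. code-block:: cpp

    #include <array>

    // Illustrative image geometry: direction cosines of the i (column), j (row),
    // and k (frame) index axes, the voxel spacing along those axes,
    // and the position of the first voxel.
    struct Geometry {
        std::array<float, 3> iDir, jDir, kDir;   // rotation (image orientation)
        std::array<float, 3> spacing;            // scaling (voxel spacing along i, j, k)
        std::array<float, 3> position;           // translation (T_x, T_y, T_z)
    };

    // xyz = R * S * (i, j, k)^T + T
    std::array<float, 3> ijkToXyz(const Geometry& g, int i, int j, int k) {
        std::array<float, 3> xyz;
        for (int a = 0; a < 3; ++a) {
            xyz[a] = g.iDir[a] * g.spacing[0] * i +
                     g.jDir[a] * g.spacing[1] * j +
                     g.kDir[a] * g.spacing[2] * k +
                     g.position[a];
        }
        return xyz;
    }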

-This formula can be alternatively expressed using 4x4 affine matrix.
+This formula can be alternatively expressed using a 4x4 affine matrix.

.. math::
\begin{bmatrix}
@@ -224,7 +223,7 @@ Accessing a single voxel in the image is done using the image coordinate system,
where columns, rows, and frames are numbered using indexes.

In YAGIT, the image indexes are written frame-first, column-last --
-instead of using ijk (ith column, jth row, kth frame) it uses kji (kth frame, jth row, ith column).
+instead of using ijk (ith column, jth row, kth frame), it uses kji (kth frame, jth row, ith column).
It's the same indexing as used in matrices and in most programming languages.

.. figure:: _static/images/3d_index.svg
@@ -236,7 +235,7 @@ It's the same indexing as used in matrices and in most programming languages.


YAGIT stores 2D and 3D images in the form of a linearized one-dimensional array.
-It arranges single elements in memory according to the row-major order (in this case it is frame-major order).
+It arranges single elements in memory according to the row-major order (in this case, it is the frame-major order).
DICOM and MetaImage also use this order.
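
For illustration, the linearized index of a voxel addressed as kji (kth frame, jth row, ith column)
in this row-major (frame-major) layout can be computed as in the sketch below
(an assumed helper, not part of YAGIT's API).

.. code-block:: cpp

    #include <cstddef>

    // Linear index of voxel (k, j, i) in a flattened 3D array stored frame-major:
    // frames are laid out one after another, each frame row by row.
    std::size_t linearIndex(std::size_t k, std::size_t j, std::size_t i,
                            std::size_t rows, std::size_t cols) {
        return (k * rows + j) * cols + i;
    }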

.. figure:: _static/images/1d_index.svg
@@ -250,9 +249,9 @@
Data type
---------

-Image data elements in YAGIT are stored using float (32-bit single precision floating point).
+Image data elements in YAGIT are stored using a float (32-bit single precision floating point).
It provides 6--8 significant decimal digits of precision, which is sufficient for gamma index calculations.
-In comparison, double (64-bit double precision floating point) provides 15--16 significant decimal digits of precision.
+In comparison, a double (64-bit double precision floating point) provides 15--16 significant decimal digits of precision.

Thanks to the fact that a float has a size that is two times smaller than a double,
it has two times less memory usage, can fit twice as many elements in the SIMD registers,