Merge pull request dipy#3297 from jhlegarreta/FixMiscDocFormatting
DOC: Miscellaneous doc formatting fixes
skoudoro authored Aug 1, 2024
2 parents 6d30ad4 + f8f1302 commit 71d9e7f
Showing 31 changed files with 226 additions and 165 deletions.
9 changes: 6 additions & 3 deletions dipy/align/streamlinear.py
@@ -912,20 +912,23 @@ def progressive_slr(
):
"""Progressive SLR.
This is an utility function that allows for example to do affine
This is a utility function that allows for example to do affine
registration using Streamline-based Linear Registration (SLR)
[Garyfallidis15]_ by starting with translation first, then rigid,
then similarity, scaling and finally affine.
Similarly, if for example, you want to perform rigid then you start with
translation first. This progressive strategy can helps with finding the
translation first. This progressive strategy can help with finding the
optimal parameters of the final transformation.
Parameters
----------
static : Streamlines
Static streamlines.
moving : Streamlines
Moving streamlines.
metric : StreamlineDistanceMetric
Distance metric for registration optimization.
x0 : string
Could be any of 'translation', 'rigid', 'similarity', 'scaling',
'affine'
@@ -935,7 +938,7 @@ def progressive_slr(
method : string
L_BFGS_B' or 'Powell' optimizers can be used. Default is 'L_BFGS_B'.
verbose : bool, optional.
If True, log messages. Default:
If True, log messages.
num_threads : int, optional
Number of threads to be used for OpenMP parallelization. If None
(default) the value of OMP_NUM_THREADS environment variable is used
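For context, a minimal usage sketch of progressive_slr as described by the docstring above, assuming `static` and `moving` are already-loaded Streamlines; BundleMinDistanceMetric, set_number_of_points, and the bounds argument are assumptions about the surrounding DIPY API, not part of this diff::

    # Sketch only: parameter names follow the docstring above; `bounds` is assumed
    # to exist in the part of the signature hidden by the collapsed diff context.
    from dipy.align.streamlinear import BundleMinDistanceMetric, progressive_slr
    from dipy.tracking.streamline import set_number_of_points

    # Resample so every streamline has the same number of points before SLR.
    static_rs = set_number_of_points(static, nb_points=20)
    moving_rs = set_number_of_points(moving, nb_points=20)

    # Progressive strategy: translation -> rigid -> similarity -> scaling -> affine.
    slm = progressive_slr(
        static_rs,
        moving_rs,
        metric=BundleMinDistanceMetric(),
        x0="affine",
        bounds=None,
        method="L_BFGS_B",
        verbose=False,
    )
    moved = slm.transform(moving_rs)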
2 changes: 1 addition & 1 deletion dipy/core/optimize.py
@@ -382,7 +382,7 @@ def fit(self, X, y):
class PositiveDefiniteLeastSquares:
@warning_for_keywords()
def __init__(self, m, *, A=None, L=None):
r"""Regularized least squares with linear matrix inequality constraints
r"""Regularized least squares with linear matrix inequality constraints [1]_.
Generate a CVXPY representation of a regularized least squares
optimization problem subject to linear matrix inequality constraints.
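As an illustration of the kind of problem this class represents (not its internals), a self-contained CVXPY sketch of regularized least squares under a linear matrix inequality, i.e. positive semidefinite, constraint; all names and data below are placeholders::

    # Illustrative only: a regularized least-squares fit whose parameters must
    # keep a derived matrix positive semidefinite (a linear matrix inequality).
    import cvxpy as cp
    import numpy as np

    m = 6                                  # number of parameters
    design = np.random.rand(20, m)         # stand-in design matrix
    signal = np.random.rand(20)            # stand-in measured signal

    h = cp.Variable(m)                     # parameter vector to estimate
    M = cp.diag(h)                         # simple symmetric matrix built from h

    objective = cp.Minimize(cp.sum_squares(design @ h - signal))
    constraints = [M >> 0]                 # linear matrix inequality: M must be PSD
    cp.Problem(objective, constraints).solve()
    print(h.value)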
10 changes: 5 additions & 5 deletions dipy/denoise/localpca.py
@@ -220,13 +220,13 @@ def genpca(
Thresholding of PCA eigenvalues is done by nulling out eigenvalues that
are smaller than:
.. math ::
.. math::
\tau = (\tau_{factor} \sigma)^2
\tau_{factor} can be set to a predefined values (e.g. \tau_{factor} =
2.3 [3]_), or automatically calculated using random matrix theory
(in case that \tau_{factor} is set to None).
$\tau_{factor}$ can be set to a predefined values (e.g. $\tau_{factor} =
2.3$ [3]_), or automatically calculated using random matrix theory
(in case that $\tau_{factor}$ is set to None).
return_sigma : bool (optional)
If true, the Standard deviation of the noise will be returned.
out_dtype : str or dtype (optional)
@@ -450,7 +450,7 @@ def localpca(
Thresholding of PCA eigenvalues is done by nulling out eigenvalues that
are smaller than:
.. math ::
.. math::
\tau = (\tau_{factor} \sigma)^2
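A toy NumPy sketch of the thresholding rule quoted above, with a placeholder patch and noise estimate: eigenvalues of the local patch covariance below tau = (tau_factor * sigma)**2 are dropped before reconstructing the patch::

    import numpy as np

    rng = np.random.default_rng(0)
    patch = rng.standard_normal((125, 60))   # e.g. a 5x5x5 neighbourhood, 60 volumes
    sigma = 1.0                               # local noise standard deviation estimate
    tau_factor = 2.3                          # predefined value cited in the docstring

    X = patch - patch.mean(axis=0)
    C = X.T @ X / X.shape[0]                  # sample covariance across volumes
    eigvals, eigvecs = np.linalg.eigh(C)

    tau = (tau_factor * sigma) ** 2
    keep = eigvals >= tau                     # null out eigenvalues smaller than tau
    denoised = X @ eigvecs[:, keep] @ eigvecs[:, keep].T + patch.mean(axis=0)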
2 changes: 1 addition & 1 deletion dipy/denoise/nlmeans.py
@@ -22,7 +22,7 @@ def nlmeans(
rician=True,
num_threads=None,
):
r"""Non-local means for denoising 3D and 4D images
r"""Non-local means for denoising 3D and 4D images [Descoteaux08]_.
Parameters
----------
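A minimal usage sketch of nlmeans on a placeholder volume; the noise level here is set by hand rather than estimated, and only the parameters shown in this diff are assumed::

    import numpy as np
    from dipy.denoise.nlmeans import nlmeans

    data = np.random.rand(40, 40, 20)   # stand-in 3D volume
    sigma = 0.05                         # noise standard deviation (could be estimated)
    denoised = nlmeans(data, sigma=sigma, rician=True, num_threads=None)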
2 changes: 1 addition & 1 deletion dipy/nn/histo_resdnn.py
@@ -45,7 +45,7 @@ class HistoResDNN:
def __init__(self, *, sh_order_max=8, basis_type="tournier07", verbose=False):
r"""
The model was re-trained for usage with a different basis function
('tournier07') like the proposed model in [1, 2].
('tournier07') like the proposed model in [1]_, [2]_.
To obtain the pre-trained model, use::
>>> resdnn_model = HistoResDNN() # skip if not have_tf
8 changes: 5 additions & 3 deletions dipy/reconst/cross_validation.py
@@ -1,4 +1,6 @@
"""Cross-validation analysis of diffusion models."""
"""
Cross-validation analysis of diffusion models.
"""

import numpy as np

@@ -16,7 +18,7 @@ def coeff_of_determination(data, model, axis=-1):
model : ndarray
The predictions of a model for this data. Same shape as the data.
axis: int, optional
The axis along which different samples are laid out (default: -1).
The axis along which different samples are laid out.
Returns
-------
@@ -54,7 +56,7 @@ def coeff_of_determination(data, model, axis=-1):
def kfold_xval(model, data, folds, *model_args, **model_kwargs):
"""Perform k-fold cross-validation.
It generate out-of-sample predictions for each measurement.
It generates out-of-sample predictions for each measurement.
Parameters
----------
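A hedged sketch of the cross-validation workflow around kfold_xval and coeff_of_determination, assuming the Stanford HARDI sample dataset that DIPY can fetch via get_fnames::

    import numpy as np
    from dipy.core.gradients import gradient_table
    from dipy.data import get_fnames
    from dipy.io.gradients import read_bvals_bvecs
    from dipy.io.image import load_nifti
    import dipy.reconst.cross_validation as xval
    import dipy.reconst.dti as dti

    hardi_fname, bval_fname, bvec_fname = get_fnames("stanford_hardi")
    data, _ = load_nifti(hardi_fname)
    bvals, bvecs = read_bvals_bvecs(bval_fname, bvec_fname)
    gtab = gradient_table(bvals, bvecs=bvecs)

    dti_model = dti.TensorModel(gtab)
    block = data[40:44, 40:44, 30:34, :]               # small block to keep it fast
    prediction = xval.kfold_xval(dti_model, block, 2)  # 2-fold out-of-sample predictions
    cod = xval.coeff_of_determination(block, prediction)
    print(np.median(cod))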
8 changes: 4 additions & 4 deletions dipy/reconst/csdeconv.py
@@ -376,8 +376,8 @@ def __init__(
diffusion ODF as the QballModel or the CsaOdfModel. This results in a
sharper angular profile with better angular resolution. The Constrained
SDTModel is similar to the Constrained CSDModel but mathematically it
deconvolves the q-ball ODF as oppposed to the HARDI signal (see [1]_
for a comparison and a through discussion).
deconvolves the q-ball ODF as opposed to the HARDI signal (see [1]_
for a comparison and a thorough discussion).
A sharp fODF is obtained because a single fiber *response* function is
injected as *a priori* knowledge. In the SDTModel, this response is a
@@ -498,7 +498,7 @@ def forward_sdt_deconv_mat(ratio, l_values, r2_term=False):
ratio = $\frac{\lambda_2}{\lambda_1}$ of the single fiber response
function
l_values : ndarray (N,)
The order (l) of spherical harmonic function associated with each row
The order ($l$) of spherical harmonic function associated with each row
of the deconvolution matrix. Only even orders are allowed.
r2_term : bool
True if ODF comes from an ODF computed from a model using the $r^2$
@@ -877,7 +877,7 @@ def odf_sh_to_sharp(
ratio of the smallest vs the largest eigenvalue of the single prolate
tensor response function (:math:`\frac{\lambda_2}{\lambda_1}`)
sh_order_max : int
maximal SH order (l) of the SH representation
maximal SH order ($l$) of the SH representation
lambda_ : float
lambda parameter (see odfdeconv) (default 1.0)
tau : float
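A small worked example of the `ratio` parameter described above, i.e. the ratio of the second to the first eigenvalue of the prolate single-fiber response tensor; the eigenvalues here are typical placeholder values, not from this diff::

    import numpy as np

    response_evals = np.array([1.7e-3, 0.4e-3, 0.4e-3])  # prolate response tensor (mm^2/s)
    ratio = response_evals[1] / response_evals[0]          # lambda_2 / lambda_1 ~= 0.24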
32 changes: 17 additions & 15 deletions dipy/reconst/dki.py
@@ -1,5 +1,7 @@
#!/usr/bin/python
"""Classes and functions for fitting the diffusion kurtosis model"""
"""
Classes and functions for fitting the diffusion kurtosis model.
"""

import warnings

@@ -421,7 +423,7 @@ def _F2m(a, b, c):


def directional_diffusion(dt, V, min_diffusivity=0):
r"""Calculate the apparent diffusion coefficient (adc) in each direction
r"""Calculate the apparent diffusion coefficient (ADC) in each direction
of a sphere for a single voxel [1]_
Parameters
@@ -439,7 +441,7 @@ def directional_diffusion(dt, V, min_diffusivity=0):
Returns
-------
adc : ndarray (g,)
Apparent diffusion coefficient (adc) in all g directions of a sphere
Apparent diffusion coefficient (ADC) in all g directions of a sphere
for a single voxel.
References
@@ -491,7 +493,7 @@ def directional_diffusion_variance(kt, V, min_kurtosis=-3 / 7):
(theoretical kurtosis limit for regions that consist of water confined
to spherical pores [1]_)
adc : ndarray(g,) (optional)
Apparent diffusion coefficient (adc) in all g directions of a sphere
Apparent diffusion coefficient (ADC) in all g directions of a sphere
for a single voxel.
adv : ndarray(g,) (optional)
Apparent diffusion variance coefficient (advc) in all g directions of
@@ -568,7 +570,7 @@ def directional_kurtosis(
(theoretical kurtosis limit for regions that consist of water confined
to spherical pores [3]_)
adc : ndarray(g,) (optional)
Apparent diffusion coefficient (adc) in all g directions of a sphere
Apparent diffusion coefficient (ADC) in all g directions of a sphere
for a single voxel.
adv : ndarray(g,) (optional)
Apparent diffusion variance (advc) in all g directions of a sphere for
@@ -649,15 +651,15 @@ def apparent_kurtosis_coef(
For each sphere direction with coordinates $(n_{1}, n_{2}, n_{3})$, the
calculation of AKC is done using formula [1]_:
.. math ::
.. math::
AKC(n)=\frac{MD^{2}}{ADC(n)^{2}}\sum_{i=1}^{3}\sum_{j=1}^{3}
\sum_{k=1}^{3}\sum_{l=1}^{3}n_{i}n_{j}n_{k}n_{l}W_{ijkl}
where $W_{ijkl}$ are the elements of the kurtosis tensor, MD the mean
diffusivity and ADC the apparent diffusion coefficient computed as:
.. math ::
.. math::
ADC(n)=\sum_{i=1}^{3}\sum_{j=1}^{3}n_{i}n_{j}D_{ij}
@@ -1038,8 +1040,8 @@ def radial_kurtosis(
.. math::
RK \equiv \frac{1}{2\pi} \int d\Omega _\mathbf{\theta} K(\mathbf{\theta})
\delta (\mathbf{\theta}\cdot \mathbf{e}_1)
RK \equiv \frac{1}{2\pi} \int d\Omega _\mathbf{\theta} K(\mathbf{\theta})
\delta (\mathbf{\theta}\cdot \mathbf{e}_1)
This equation can be numerically computed by averaging apparent
directional kurtosis samples for directions perpendicular to e1. [2]_
@@ -1615,7 +1617,7 @@ def kurtosis_fractional_anisotropy(dki_params):
KFA \equiv
\frac{||\mathbf{W} - MKT \mathbf{I}^{(4)}||_F}{||\mathbf{W}||_F}
where $W$ is the kurtosis tensor, MKT the kurtosis tensor mean, $I^(4)$ is
where $W$ is the kurtosis tensor, MKT the kurtosis tensor mean, $I^{(4)}$ is
the fully symmetric rank 2 isotropic tensor and $||...||_F$ is the tensor's
Frobenius norm [1]_.
@@ -2038,15 +2040,15 @@ def akc(self, sphere):
For each sphere direction with coordinates $(n_{1}, n_{2}, n_{3})$, the
calculation of AKC is done using formula:
.. math ::
.. math::
AKC(n)=\frac{MD^{2}}{ADC(n)^{2}}\sum_{i=1}^{3}\sum_{j=1}^{3}
\sum_{k=1}^{3}\sum_{l=1}^{3}n_{i}n_{j}n_{k}n_{l}W_{ijkl}
where $W_{ijkl}$ are the elements of the kurtosis tensor, MD the mean
diffusivity and ADC the apparent diffusion coefficient computed as:
.. math ::
.. math::
ADC(n)=\sum_{i=1}^{3}\sum_{j=1}^{3}n_{i}n_{j}D_{ij}
@@ -2246,8 +2248,8 @@ def rk(self, min_kurtosis=-3.0 / 7, max_kurtosis=10, analytical=True):
.. math::
RK \equiv \frac{1}{2\pi} \int d\Omega _\mathbf{\theta}
K(\mathbf{\theta}) \delta (\mathbf{\theta}\cdot \mathbf{e}_1)
RK \equiv \frac{1}{2\pi} \int d\Omega _\mathbf{\theta}
K(\mathbf{\theta}) \delta (\mathbf{\theta}\cdot \mathbf{e}_1)
This equation can be numerically computed by averaging apparent
directional kurtosis samples for directions perpendicular to e1 [2]_.
@@ -2433,7 +2435,7 @@ def kfa(self):
KFA \equiv
\frac{||\mathbf{W} - MKT \mathbf{I}^{(4)}||_F}{||\mathbf{W}||_F}
where $W$ is the kurtosis tensor, MKT the kurtosis tensor mean, $I^(4)$
where $W$ is the kurtosis tensor, MKT the kurtosis tensor mean, $I^{(4)}$
is the fully symmetric rank 2 isotropic tensor and $||...||_F$ is the
tensor's Frobenius norm [1]_.
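A hedged NumPy sketch of the ADC/AKC formulas quoted in this file's docstrings, applied to placeholder tensors: for unit directions n, ADC(n) contracts the 3x3 diffusion tensor and AKC(n) contracts the 3x3x3x3 kurtosis tensor, scaled by MD^2 / ADC(n)^2::

    import numpy as np

    rng = np.random.default_rng(1)
    n = rng.standard_normal((100, 3))
    n /= np.linalg.norm(n, axis=1, keepdims=True)       # unit sphere directions

    D = np.diag([1.7e-3, 0.4e-3, 0.4e-3])               # toy diffusion tensor
    W = rng.standard_normal((3, 3, 3, 3)) * 1e-2        # toy kurtosis tensor (placeholder)

    md = np.trace(D) / 3.0                               # mean diffusivity
    adc = np.einsum("gi,gj,ij->g", n, n, D)              # ADC(n) = n_i n_j D_ij
    akc = (md ** 2 / adc ** 2) * np.einsum("gi,gj,gk,gl,ijkl->g", n, n, n, n, W)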
7 changes: 5 additions & 2 deletions dipy/reconst/dki_micro.py
@@ -620,13 +620,16 @@ def predict(self, gtab, S0=1.0):
S0 : float or ndarray (optional)
The non diffusion-weighted signal in every voxel, or across all
voxels. Default: 1
voxels.
Notes
-----
The predicted signal is given by:
$S(\theta, b) = S_0 * [f * e^{-b ADC_{r}} + (1-f) * e^{-b ADC_{h}]$,
.. math::
S(\theta, b) = S_0 * [f * e^{-b ADC_{r}} + (1-f) * e^{-b ADC_{h}]
where $ADC_{r}$ and $ADC_{h}$ are the apparent diffusion coefficients
of the diffusion hindered and restricted compartment for a given
direction $\theta$, $b$ is the b value provided in the GradientTable
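A toy sketch of the two-compartment signal equation above for a single direction, using placeholder water fraction and ADC values::

    import numpy as np

    S0 = 1.0
    f = 0.3                      # restricted (axonal) water fraction
    adc_r = 0.2e-3               # restricted-compartment ADC along this direction (mm^2/s)
    adc_h = 1.5e-3               # hindered-compartment ADC along this direction (mm^2/s)
    bvals = np.array([0, 1000, 2000])

    signal = S0 * (f * np.exp(-bvals * adc_r) + (1 - f) * np.exp(-bvals * adc_h))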
8 changes: 4 additions & 4 deletions dipy/reconst/dsi.py
@@ -25,7 +25,7 @@ def __init__(
spin displacements) can be estimated by applying 3D FFT to the signal
values $S(\mathbf{q})$
..math::
.. math::
:nowrap:
\begin{eqnarray}
P(\mathbf{r}) & = & S_{0}^{-1}\int S(\mathbf{q})\exp(-i2\pi\mathbf{q}\cdot\mathbf{r})d\mathbf{r}
@@ -240,7 +240,7 @@ def rtop_pdf(self, normalized=True):
def msd_discrete(self, normalized=True):
r"""Calculates the mean squared displacement on the discrete propagator
..math::
.. math::
:nowrap:
\begin{equation}
MSD:{DSI}=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} P(\hat{\mathbf{r}}) \cdot \hat{\mathbf{r}}^{2} \ dr_x \ dr_y \ dr_z
@@ -284,7 +284,7 @@ def msd_discrete(self, normalized=True):
def odf(self, sphere):
r"""Calculates the real discrete odf for a given discrete sphere
..math::
.. math::
:nowrap:
\begin{equation}
\psi_{DSI}(\hat{\mathbf{u}})=\int_{0}^{\infty}P(r\hat{\mathbf{u}})r^{2}dr
@@ -509,7 +509,7 @@ def __init__(
The idea is to remove the convolution on the DSI propagator that is
caused by the truncation of the q-space in the DSI sampling.
..math::
.. math::
:nowrap:
\begin{eqnarray*}
P_{dsi}(\mathbf{r}) & = & S_{0}^{-1}\iiint\limits_{\| \mathbf{q} \| \le \mathbf{q_{max}}} S(\mathbf{q})\exp(-i2\pi\mathbf{q}\cdot\mathbf{r})d\mathbf{q} \\
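A schematic NumPy sketch of the Fourier relation above: the propagator is obtained, up to normalization by S0, as the 3D FFT of the q-space signal arranged on a Cartesian grid; the signal grid below is a placeholder::

    import numpy as np

    signal_grid = np.zeros((17, 17, 17))   # q-space signal on a Cartesian grid (toy)
    signal_grid[8, 8, 8] = 1.0              # S(0) = S0 at the q-space origin

    S0 = signal_grid[8, 8, 8]
    pdf = np.real(np.fft.fftshift(np.fft.fftn(np.fft.ifftshift(signal_grid)))) / S0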
