Merge pull request dipy#3317 from jhlegarreta/MiscDocImprovements
DOC: Miscellaneous documentation improvements
skoudoro authored Sep 2, 2024
2 parents ee2da04 + 4591927 commit f0f3dd3
Showing 43 changed files with 319 additions and 356 deletions.
27 changes: 12 additions & 15 deletions dipy/align/streamlinear.py
@@ -359,8 +359,7 @@ def __init__(
``x0 = np.array([0, 0, 0, 0, 0, 0, 1., 1., 1, 0, 0, 0])``
method : str,
'L_BFGS_B' or 'Powell' optimizers can be used. Default is
'L_BFGS_B'.
'L_BFGS_B' or 'Powell' optimizers can be used.
bounds : list of tuples or None,
If method == 'L_BFGS_B' then we can use bounded optimization.
@@ -371,14 +370,13 @@ def __init__(
verbose : bool, optional
If True, information about the optimization is shown.
Default: False.
options : None or dict,
Extra options to be used with the selected method.
evolution : boolean
If True save the transformation for each iteration of the
optimizer. Default is False. Supported only with Scipy >= 0.11.
optimizer. Supported only with Scipy >= 0.11.
num_threads : int, optional
Number of threads to be used for OpenMP parallelization. If None
@@ -1057,7 +1055,7 @@ def slr_with_qbx(
Moving streamlines.
x0 : str, optional.
rigid, similarity or affine transformation model (default affine)
rigid, similarity or affine transformation model
rm_small_clusters : int, optional
Remove clusters that have less than `rm_small_clusters`
@@ -1067,10 +1065,9 @@
select_random : int, optional.
If not None, selects a random number of streamlines to apply clustering
Default None.
verbose : bool, optional
If True, logs information about optimization. Default: False
If True, logs information about optimization.
greater_than : int, optional
Keep streamlines that have length greater than this value.
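The ``x0`` option above selects between rigid, similarity and affine models, which amounts to freeing progressively more entries of a 12-parameter vector (3 translations, 3 rotation angles, 3 scales, 3 shears), matching the ``x0`` example shown earlier in this diff. A minimal sketch of composing a 4x4 affine from such a vector; the helper name is hypothetical, and DIPY's own composition conventions (angle units, shear placement) may differ:

```python
import numpy as np

def affine_from_x0(x0):
    # Hypothetical helper: 12-vector -> 4x4 affine. Layout assumed from
    # the docstring example: 3 translations, 3 rotation angles (radians),
    # 3 scales, 3 shears.
    t, angles, scales, shears = x0[:3], x0[3:6], x0[6:9], x0[9:12]
    cx, cy, cz = np.cos(angles)
    sx, sy, sz = np.sin(angles)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    S = np.diag(scales)                      # anisotropic scaling
    H = np.array([[1, shears[0], shears[1]],
                  [0, 1, shears[2]],
                  [0, 0, 1]])                # shear terms
    A = np.eye(4)
    A[:3, :3] = Rz @ Ry @ Rx @ S @ H
    A[:3, 3] = t
    return A

# The identity x0 from the docstring maps to the identity transform.
identity = affine_from_x0(np.array([0, 0, 0, 0, 0, 0, 1., 1., 1., 0, 0, 0]))
```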
@@ -1236,32 +1233,32 @@ def groupwise_slr(
List with streamlines of the bundles to be registered.
x0 : str, optional
rigid, similarity or affine transformation model. Default: affine.
rigid, similarity or affine transformation model.
tol : float, optional
Tolerance value to be used to assume convergence. Default: 0.
Tolerance value to be used to assume convergence.
max_iter : int, optional
Maximum number of iterations. Depending on the number of bundles to be
registered this may need to be larger. Default: 20.
registered this may need to be larger.
qbx_thr : variable int, optional
Thresholds for QuickBundles used for clustering streamlines and reducing
computational time. If None, no clustering is performed. Higher values
cluster streamlines into a smaller number of centroids. Default: [4].
cluster streamlines into a smaller number of centroids.
nb_pts : int, optional
Number of points for discretizing each streamline. Default: 20.
Number of points for discretizing each streamline.
select_random : int, optional
Maximum number of streamlines for each bundle. If None, all the
streamlines are used. Default: 10000.
streamlines are used.
verbose : bool, optional
If True, logs information. Default: False.
If True, logs information.
rng : np.random.Generator
If None, creates random generator in function. Default: None.
If None, creates random generator in function.
References
----------
13 changes: 6 additions & 7 deletions dipy/align/streamwarp.py
@@ -73,24 +73,23 @@ def bundlewarp(
moving : Streamlines
Target bundle that will be moved/registered to match the static bundle
dist : float, optional.
Precomputed distance matrix (default None)
dist : float, optional
Precomputed distance matrix.
alpha : float, optional
Represents the trade-off between regularizing the deformation and
having points match very closely. Lower values of alpha mean larger
deformations (default 0.3)
deformations.
beta : int, optional
Represents the strength of the interaction between points
Gaussian kernel size (default 20)
Gaussian kernel size.
max_iter : int, optional
Maximum number of iterations for deformation process in ml-CPD method
(default 15)
Maximum number of iterations for deformation process in ml-CPD method.
affine : boolean, optional
If False, use rigid registration as starting point (default True)
If False, use rigid registration as starting point.
Returns
-------
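The ``beta`` parameter above is the size of a Gaussian kernel coupling streamline points: larger ``beta`` couples distant points more strongly and therefore yields smoother, more rigid-like deformations. A sketch of such a kernel matrix (a generic CPD-style kernel, not necessarily bundlewarp's exact formulation):

```python
import numpy as np

def gaussian_kernel(points, beta=20.0):
    # G[i, j] = exp(-||p_i - p_j||^2 / (2 * beta^2)); beta is the kernel size.
    diff = points[:, None, :] - points[None, :, :]
    sq_dist = (diff ** 2).sum(axis=-1)
    return np.exp(-sq_dist / (2.0 * beta ** 2))

pts = np.array([[0., 0., 0.], [0., 0., 3.], [0., 0., 30.]])
G = gaussian_kernel(pts, beta=20.0)
```

With a small ``beta`` the off-diagonal entries decay quickly, so each point deforms nearly independently; with a large ``beta`` neighbouring points move together.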
2 changes: 0 additions & 2 deletions dipy/core/gradients.py
@@ -39,7 +39,6 @@ def unique_bvals(bvals, bmag=None, rbvals=False):
rbvals : bool, optional
If True, the function also returns all individual rounded b-values.
Default: False
Returns
-------
@@ -1015,7 +1014,6 @@ def unique_bvals_magnitude(bvals, *, bmag=None, rbvals=False):
rbvals : bool, optional
If True, the function also returns all individual rounded b-values.
Default: False
Returns
-------
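The rounding behind ``unique_bvals`` and ``unique_bvals_magnitude`` collapses b-values that differ only by scanner jitter into shells. A minimal numpy sketch of the idea (the helper name is hypothetical, and the ``bmag`` inference rule, one order of magnitude below the largest b-value, is an assumption here, not a quote of DIPY's code):

```python
import numpy as np

def unique_rounded_bvals(bvals, bmag=None):
    # Hypothetical helper sketching the idea: round b-values at
    # magnitude 10**bmag, then take the unique values (the shells).
    if bmag is None:
        # Assumed inference: one order below the largest b-value's magnitude.
        bmag = int(np.log10(np.max(bvals))) - 1
    scale = 10.0 ** bmag
    return np.unique(np.round(bvals / scale)) * scale

# 995, 1000 and 1005 collapse into a single b=1000 shell.
shells = unique_rounded_bvals(np.array([0, 995, 1000, 1005, 2000]))
```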
19 changes: 9 additions & 10 deletions dipy/core/optimize.py
@@ -251,24 +251,24 @@ def sparse_nnls(
X : ndarray. May be either sparse or dense. Shape (N, M)
The regressors
momentum : float, optional (default: 1).
momentum : float, optional
The persistence of the gradient.
step_size : float, optional (default: 0.01).
step_size : float, optional
The increment of parameter update in each iteration
non_neg : Boolean, optional (default: True)
non_neg : Boolean, optional
Whether to enforce non-negativity of the solution.
check_error_iter : int (default:10)
check_error_iter : int, optional
How many rounds to run between error evaluation for
convergence-checking.
max_error_checks : int (default: 10)
max_error_checks : int, optional
Don't check errors more than this number of times if no improvement in
r-squared is seen.
converge_on_sse : float (default: 0.99)
converge_on_sse : float, optional
A percentage improvement in SSE that is required each time to say
that things are still going well.
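The ``momentum``, ``step_size`` and ``non_neg`` parameters above describe a projected gradient-descent scheme with a persistent gradient term. A minimal sketch of that scheme (not DIPY's implementation; the defaults below are tuned for the toy problem, not DIPY's values):

```python
import numpy as np

def sparse_nnls_sketch(X, y, momentum=0.5, step_size=0.1,
                       non_neg=True, n_iter=300):
    # Gradient descent on ||X h - y||^2 with a momentum-weighted
    # ("persistent") gradient; negative coefficients are clipped to
    # zero after each step when non_neg is True.
    h = np.zeros(X.shape[1])
    grad = np.zeros_like(h)
    for _ in range(n_iter):
        residual = X @ h - y
        grad = momentum * grad + X.T @ residual   # persistent gradient
        h = h - step_size * grad
        if non_neg:
            h = np.clip(h, 0.0, None)             # enforce non-negativity
    return h

# Well-conditioned toy problem: recovers the non-negative solution.
X = np.array([[1., 0.], [0., 2.]])
h = sparse_nnls_sketch(X, X @ np.array([1., 2.]))
```

In the real function, error is only evaluated every ``check_error_iter`` rounds and iteration stops once the relative SSE improvement exceeds ``converge_on_sse``; the fixed ``n_iter`` loop here stands in for that stopping logic.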
@@ -397,9 +397,9 @@ def __init__(self, m, *, A=None, L=None):
----------
m : int
Positive int indicating the number of regressors.
A : array (t = m + k + 1, p, p) (optional)
A : array (t = m + k + 1, p, p), optional
Constraint matrices $A$.
L : array (m, m) (optional)
L : array (m, m), optional
Regularization matrix $L$.
Default: None.
@@ -493,12 +493,11 @@ def solve(self, design_matrix, measurements, *, check=False, **kwargs):
Design matrix.
measurements : array (n)
Measurements.
check : boolean (optional)
check : boolean, optional
If True, check whether the unconstrained optimization solution
already satisfies the constraints, before running the constrained
optimization. This adds overhead, but can avoid unnecessary
constrained optimization calls.
Default: False
kwargs : keyword arguments
Arguments passed to the CVXPY solve method.
8 changes: 4 additions & 4 deletions dipy/data/fetcher.py
@@ -2605,14 +2605,14 @@ def fetch_hcp(
subjects : list
Each item is an integer, identifying one of the HCP subjects
hcp_bucket : string, optional
The name of the HCP S3 bucket. Default: "hcp-openaccess"
The name of the HCP S3 bucket.
profile_name : string, optional
The name of the AWS profile used for access. Default: "hcp"
The name of the AWS profile used for access.
path : string, optional
Path to save files into. Defaults to the value of the ``DIPY_HOME``
environment variable, if set; otherwise, defaults to ``$HOME/.dipy``.
study : string, optional
Which HCP study to grab. Default: 'HCP_1200'
Which HCP study to grab.
aws_access_key_id : string, optional
AWS credentials to HCP AWS S3. Will only be used if `profile_name` is
set to False.
@@ -2838,7 +2838,7 @@ def fetch_hbn(subjects, *, path=None, include_afq=False):
environment variable, if set; otherwise, defaults to ``$HOME/.dipy``.
include_afq : bool, optional
Whether to include pyAFQ derivatives. Default: False
Whether to include pyAFQ derivatives.
Returns
-------
8 changes: 4 additions & 4 deletions dipy/denoise/enhancement_kernel.pyx
@@ -40,14 +40,14 @@ cdef class EnhancementKernel:
Angular diffusion
t : float
Diffusion time
force_recompute : boolean
force_recompute : boolean, optional
Always compute the look-up table even if it is available
in cache. Default is False.
orientations : integer or Sphere object
in cache.
orientations : integer or Sphere object, optional
Specify the number of orientations to be used with
electrostatic repulsion, or provide a Sphere object.
The default sphere is 'repulsion100'.
verbose : boolean
verbose : boolean, optional
Enable verbose mode.
References
3 changes: 0 additions & 3 deletions dipy/denoise/gibbs.py
@@ -250,14 +250,11 @@ def gibbs_removal(vol, *, slice_axis=2, n_points=3, inplace=True, num_processes=
Matrix containing one volume (3D) or multiple (4D) volumes of images.
slice_axis : int (0, 1, or 2)
Data axis corresponding to the number of acquired slices.
Default is set to the third axis.
n_points : int, optional
Number of neighbour points used to assess local TV (see note).
Default is set to 3.
inplace : bool, optional
If True, the input data is replaced with results. Otherwise, returns
a new array.
Default is set to True.
num_processes : int or None, optional
Split the calculation among a pool of child processes. This only
applies to 3D or 4D `data` arrays. Default is 1. If < 0 the maximal
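The ``n_points`` parameter above controls how many neighbours on each side of a voxel enter the local total-variation (TV) measure that guides the Gibbs-ringing correction. A 1D sketch of such a local TV computation (a hypothetical helper, not DIPY's implementation):

```python
import numpy as np

def local_tv(signal, idx, n_points=3):
    # Total variation of `signal` over a window of n_points neighbours
    # on each side of index `idx`: the sum of absolute first differences.
    lo = max(idx - n_points, 0)
    hi = min(idx + n_points + 1, len(signal))
    return np.abs(np.diff(signal[lo:hi])).sum()
```

Ringing oscillations raise the local TV, while a smooth signal keeps it low; the correction picks the sub-voxel shift that minimizes this measure.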
48 changes: 24 additions & 24 deletions dipy/denoise/localpca.py
@@ -198,23 +198,23 @@ def genpca(
Array of data to be denoised. The dimensions are (X, Y, Z, N), where N
are the diffusion gradient directions. The first 3 dimensions must have
size >= 2 * patch_radius + 1 or size = 1.
sigma : float or 3D array (optional)
sigma : float or 3D array, optional
Standard deviation of the noise estimated from the data. If no sigma
is given, this will be estimated based on random matrix theory
:footcite:p:`Veraart2016b`, :footcite:p:`Veraart2016c`.
mask : 3D boolean array (optional)
mask : 3D boolean array, optional
A mask with voxels that are true inside the brain and false outside of
it. The function denoises within the true part and returns zeros
outside of those voxels.
patch_radius : int or 1D array (optional)
patch_radius : int or 1D array, optional
The radius of the local patch to be taken around each voxel (in
voxels). E.g. patch_radius=2 gives 5x5x5 patches.
pca_method : 'eig' or 'svd' (optional)
pca_method : 'eig' or 'svd', optional
Use either eigenvalue decomposition (eig) or singular value
decomposition (svd) for principal component analysis. The default
method is 'eig' which is faster. However, occasionally 'svd' might be
more accurate.
tau_factor : float (optional)
tau_factor : float, optional
Thresholding of PCA eigenvalues is done by nulling out eigenvalues that
are smaller than:
@@ -225,12 +225,12 @@
$\tau_{factor}$ can be set to a predefined value (e.g. $\tau_{factor} =
2.3$ :footcite:p:`Manjon2013`), or automatically calculated using random
matrix theory (in case that $\tau_{factor}$ is set to None).
return_sigma : bool (optional)
return_sigma : bool, optional
If True, the standard deviation of the noise will be returned.
out_dtype : str or dtype (optional)
out_dtype : str or dtype, optional
The dtype for the output array. Default: output has the same dtype as
the input.
suppress_warning : bool (optional)
suppress_warning : bool, optional
If true, suppress warning caused by patch_size < arr.shape[-1].
Returns
@@ -413,30 +413,30 @@ def localpca(
arr : 4D array
Array of data to be denoised. The dimensions are (X, Y, Z, N), where N
are the diffusion gradient directions.
sigma : float or 3D array (optional)
sigma : float or 3D array, optional
Standard deviation of the noise estimated from the data. If not given,
it is calculated using the method in :footcite:t:`Manjon2013`.
mask : 3D boolean array (optional)
mask : 3D boolean array, optional
A mask with voxels that are true inside the brain and false outside of
it. The function denoises within the true part and returns zeros
outside of those voxels.
patch_radius : int or 1D array (optional)
patch_radius : int or 1D array, optional
The radius of the local patch to be taken around each voxel (in
voxels). E.g. patch_radius=2 gives 5x5x5 patches.
gtab : gradient table object (optional if sigma is provided)
Gradient information for the data, which gives the bvals and bvecs of
the diffusion data; these are needed to calculate the noise level if
sigma is not provided.
patch_radius_sigma : int (optional)
patch_radius_sigma : int, optional
The radius of the local patch to be taken around each voxel (in
voxels) for estimating sigma. E.g. patch_radius_sigma=2 gives
5x5x5 patches.
pca_method : 'eig' or 'svd' (optional)
pca_method : 'eig' or 'svd', optional
Use either eigenvalue decomposition (eig) or singular value
decomposition (svd) for principal component analysis. The default
method is 'eig' which is faster. However, occasionally 'svd' might be
more accurate.
tau_factor : float (optional)
tau_factor : float, optional
Thresholding of PCA eigenvalues is done by nulling out eigenvalues that
are smaller than:
@@ -449,16 +449,16 @@
set to None, it will be automatically calculated using the
Marcenko-Pastur distribution :footcite:p:`Veraart2016c`. Default: 2.3
according to :footcite:t:`Manjon2013`.
return_sigma : bool (optional)
return_sigma : bool, optional
If true, a noise standard deviation estimate based on the
Marcenko-Pastur distribution is returned :footcite:p:`Veraart2016c`.
correct_bias : bool (optional)
correct_bias : bool, optional
Whether to correct for bias due to Rician noise. This is an
implementation of equation 8 in :footcite:p:`Manjon2013`.
out_dtype : str or dtype (optional)
out_dtype : str or dtype, optional
The dtype for the output array. Default: output has the same dtype as
the input.
suppress_warning : bool (optional)
suppress_warning : bool, optional
If true, suppress warning caused by patch_size < arr.shape[-1].
Returns
@@ -519,25 +519,25 @@ def mppca(
arr : 4D array
Array of data to be denoised. The dimensions are (X, Y, Z, N), where N
are the diffusion gradient directions.
mask : 3D boolean array (optional)
mask : 3D boolean array, optional
A mask with voxels that are true inside the brain and false outside of
it. The function denoises within the true part and returns zeros
outside of those voxels.
patch_radius : int or 1D array (optional)
patch_radius : int or 1D array, optional
The radius of the local patch to be taken around each voxel (in
voxels). E.g. patch_radius=2 gives 5x5x5 patches.
pca_method : 'eig' or 'svd' (optional)
pca_method : 'eig' or 'svd', optional
Use either eigenvalue decomposition (eig) or singular value
decomposition (svd) for principal component analysis. The default
method is 'eig' which is faster. However, occasionally 'svd' might be
more accurate.
return_sigma : bool (optional)
return_sigma : bool, optional
If true, a noise standard deviation estimate based on the
Marcenko-Pastur distribution is returned :footcite:p:`Veraart2016b`.
out_dtype : str or dtype (optional)
out_dtype : str or dtype, optional
The dtype for the output array. Default: output has the same dtype as
the input.
suppress_warning : bool (optional)
suppress_warning : bool, optional
If true, suppress warning caused by patch_size < arr.shape[-1].
Returns
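The PCA machinery shared by ``genpca``, ``localpca`` and ``mppca`` comes down to nulling eigenvalues of a local patch's covariance that fall below a noise-derived threshold, e.g. ``tau = (tau_factor * sigma) ** 2`` as described for ``tau_factor`` above. A minimal sketch of that step on a single patch (a hypothetical helper; DIPY additionally handles patch extraction, Rician bias and sigma estimation):

```python
import numpy as np

def pca_denoise_patch(patch, sigma, tau_factor=2.3):
    # patch: (n_voxels, n_gradients). Null eigenvalues of the patch
    # covariance below tau = (tau_factor * sigma)**2, then reproject
    # the centered data onto the surviving (signal) components.
    mean = patch.mean(axis=0)
    centered = patch - mean
    cov = centered.T @ centered / centered.shape[0]
    vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
    keep = vals > (tau_factor * sigma) ** 2   # signal components only
    proj = vecs[:, keep]
    return centered @ proj @ proj.T + mean

# A noiseless rank-1 "signal" patch survives the thresholding unchanged.
patch = np.outer(np.arange(1., 21.), [10., 20., 30., 40.])
denoised = pca_denoise_patch(patch, sigma=1.0)
```

The 'eig' vs 'svd' choice documented for ``pca_method`` only changes how these eigenpairs are computed, not the thresholding itself.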