DOC: Remove default values from docstrings
Remove default values from docstrings: this reduces the maintenance burden
and avoids the risk of inconsistencies when a default value is changed
in the method signature but not in the parameter docstring.
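The drift the commit message describes can be caught mechanically. Below is a minimal sketch using only the standard library's `inspect` and `re`; the helper `docstring_default_mentions` and the toy function `slr_like` are hypothetical illustrations, not part of DIPY:

```python
import inspect
import re

def docstring_default_mentions(func):
    """Hypothetical checker: return 'Default ...' phrases found in a
    function's docstring, plus the actual defaults recorded in its
    signature. A docstring that repeats a default is a second copy of the
    same fact, and nothing enforces agreement between the two."""
    doc = inspect.getdoc(func) or ""
    # Crude pattern for phrases like "Default: x", "Defaults to x", "default is x"
    mentions = re.findall(r"[Dd]efaults?(?: to|:| is)?\s*[^\s.,]+", doc)
    defaults = {
        name: param.default
        for name, param in inspect.signature(func).parameters.items()
        if param.default is not inspect.Parameter.empty
    }
    return mentions, defaults

def slr_like(x0="affine", verbose=False):
    """Toy example of the inconsistency this commit removes.

    Parameters
    ----------
    x0 : str, optional
        rigid, similarity or affine transformation model. Default: rigid.
    verbose : bool, optional
        If True, logs information about optimization.
    """

mentions, defaults = docstring_default_mentions(slr_like)
# The docstring above still claims 'rigid' while the signature says 'affine';
# removing the docstring copy leaves a single source of truth.
```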
jhlegarreta committed Sep 2, 2024
1 parent 000c3fe commit 4591927
Showing 30 changed files with 94 additions and 128 deletions.
27 changes: 12 additions & 15 deletions dipy/align/streamlinear.py
@@ -359,8 +359,7 @@ def __init__(
  ``x0 = np.array([0, 0, 0, 0, 0, 0, 1., 1., 1, 0, 0, 0])``
  method : str,
- 'L_BFGS_B' or 'Powell' optimizers can be used. Default is
- 'L_BFGS_B'.
+ 'L_BFGS_B' or 'Powell' optimizers can be used.
  bounds : list of tuples or None,
  If method == 'L_BFGS_B' then we can use bounded optimization.
@@ -371,14 +370,13 @@ def __init__(
  verbose : bool, optional.
  If True, if True then information about the optimization is shown.
- Default: False.
  options : None or dict,
  Extra options to be used with the selected method.
  evolution : boolean
  If True save the transformation for each iteration of the
- optimizer. Default is False. Supported only with Scipy >= 0.11.
+ optimizer. Supported only with Scipy >= 0.11.
  num_threads : int, optional
  Number of threads to be used for OpenMP parallelization. If None
@@ -1057,7 +1055,7 @@ def slr_with_qbx(
  Moving streamlines.
  x0 : str, optional.
- rigid, similarity or affine transformation model (default affine)
+ rigid, similarity or affine transformation model
  rm_small_clusters : int, optional
  Remove clusters that have less than `rm_small_clusters`
@@ -1067,10 +1065,9 @@
  select_random : int, optional.
  If not, None selects a random number of streamlines to apply clustering
- Default None.
  verbose : bool, optional
- If True, logs information about optimization. Default: False
+ If True, logs information about optimization.
  greater_than : int, optional
  Keep streamlines that have length greater than this value.
@@ -1236,32 +1233,32 @@ def groupwise_slr(
  List with streamlines of the bundles to be registered.
  x0 : str, optional
- rigid, similarity or affine transformation model. Default: affine.
+ rigid, similarity or affine transformation model.
  tol : float, optional
- Tolerance value to be used to assume convergence. Default: 0.
+ Tolerance value to be used to assume convergence.
  max_iter : int, optional
  Maximum number of iterations. Depending on the number of bundles to be
- registered this may need to be larger. Default: 20.
+ registered this may need to be larger.
  qbx_thr : variable int, optional
  Thresholds for Quickbundles used for clustering streamlines and reduce
  computational time. If None, no clustering is performed. Higher values
- cluster streamlines into a smaller number of centroids. Default: [4].
+ cluster streamlines into a smaller number of centroids.
  nb_pts : int, optional
- Number of points for discretizing each streamline. Default: 20.
+ Number of points for discretizing each streamline.
  select_random : int, optional
  Maximum number of streamlines for each bundle. If None, all the
- streamlines are used. Default: 10000.
+ streamlines are used.
  verbose : bool, optional
- If True, logs information. Default: False.
+ If True, logs information.
  rng : np.random.Generator
- If None, creates random generator in function. Default: None.
+ If None, creates random generator in function.
  References
  ----------
11 changes: 5 additions & 6 deletions dipy/align/streamwarp.py
@@ -74,23 +74,22 @@ def bundlewarp(
  Target bundle that will be moved/registered to match the static bundle
  dist : float, optional
- Precomputed distance matrix (default None)
+ Precomputed distance matrix.
  alpha : float, optional
  Represents the trade-off between regularizing the deformation and
  having points match very closely. Lower value of alpha means high
- deformations (default 0.3)
+ deformations.
  beta : int, optional
  Represents the strength of the interaction between points
- Gaussian kernel size (default 20)
+ Gaussian kernel size.
  max_iter : int, optional
- Maximum number of iterations for deformation process in ml-CPD method
- (default 15)
+ Maximum number of iterations for deformation process in ml-CPD method.
  affine : boolean, optional
- If False, use rigid registration as starting point (default True)
+ If False, use rigid registration as starting point.
  Returns
  -------
2 changes: 0 additions & 2 deletions dipy/core/gradients.py
@@ -39,7 +39,6 @@ def unique_bvals(bvals, bmag=None, rbvals=False):
  rbvals : bool, optional
  If True function also returns all individual rounded b-values.
- Default: False
  Returns
  -------
@@ -1015,7 +1014,6 @@ def unique_bvals_magnitude(bvals, *, bmag=None, rbvals=False):
  rbvals : bool, optional
  If True function also returns all individual rounded b-values.
- Default: False
  Returns
  -------
7 changes: 3 additions & 4 deletions dipy/core/optimize.py
@@ -251,13 +251,13 @@ def sparse_nnls(
  X : ndarray. May be either sparse or dense. Shape (N, M)
  The regressors
- momentum : float, optional (default: 1).
+ momentum : float, optional
  The persistence of the gradient.
- step_size : float, optional (default: 0.01).
+ step_size : float, optional
  The increment of parameter update in each iteration
- non_neg : Boolean, optional (default: True)
+ non_neg : Boolean, optional
  Whether to enforce non-negativity of the solution.
  check_error_iter : int, optional
@@ -498,7 +498,6 @@ def solve(self, design_matrix, measurements, *, check=False, **kwargs):
  already satisfies the constraints, before running the constrained
  optimization. This adds overhead, but can avoid unnecessary
  constrained optimization calls.
- Default: False
  kwargs : keyword arguments
  Arguments passed to the CVXPY solve method.
8 changes: 4 additions & 4 deletions dipy/data/fetcher.py
@@ -2605,14 +2605,14 @@ def fetch_hcp(
  subjects : list
  Each item is an integer, identifying one of the HCP subjects
  hcp_bucket : string, optional
- The name of the HCP S3 bucket. Default: "hcp-openaccess"
+ The name of the HCP S3 bucket.
  profile_name : string, optional
- The name of the AWS profile used for access. Default: "hcp"
+ The name of the AWS profile used for access.
  path : string, optional
  Path to save files into. Defaults to the value of the ``DIPY_HOME``
  environment variable is set; otherwise, defaults to ``$HOME/.dipy``.
  study : string, optional
- Which HCP study to grab. Default: 'HCP_1200'
+ Which HCP study to grab.
  aws_access_key_id : string, optional
  AWS credentials to HCP AWS S3. Will only be used if `profile_name` is
  set to False.
@@ -2838,7 +2838,7 @@ def fetch_hbn(subjects, *, path=None, include_afq=False):
  environment variable is set; otherwise, defaults to ``$HOME/.dipy``.
  include_afq : bool, optional
- Whether to include pyAFQ derivatives. Default: False
+ Whether to include pyAFQ derivatives
  Returns
  -------
2 changes: 1 addition & 1 deletion dipy/denoise/enhancement_kernel.pyx
@@ -42,7 +42,7 @@ cdef class EnhancementKernel:
  Diffusion time
  force_recompute : boolean, optional
  Always compute the look-up table even if it is available
- in cache. Default is False.
+ in cache.
  orientations : integer or Sphere object, optional
  Specify the number of orientations to be used with
  electrostatic repulsion, or provide a Sphere object.
3 changes: 0 additions & 3 deletions dipy/denoise/gibbs.py
@@ -250,14 +250,11 @@ def gibbs_removal(vol, *, slice_axis=2, n_points=3, inplace=True, num_processes=
  Matrix containing one volume (3D) or multiple (4D) volumes of images.
  slice_axis : int (0, 1, or 2)
  Data axis corresponding to the number of acquired slices.
- Default is set to the third axis.
  n_points : int, optional
  Number of neighbour points to access local TV (see note).
- Default is set to 3.
  inplace : bool, optional
  If True, the input data is replaced with results. Otherwise, returns
  a new array.
- Default is set to True.
  num_processes : int or None, optional
  Split the calculation to a pool of children processes. This only
  applies to 3D or 4D `data` arrays. Default is 1. If < 0 the maximal
12 changes: 5 additions & 7 deletions dipy/denoise/noise_estimate.py
@@ -178,23 +178,21 @@ def _piesno_3D(
  alpha : float, optional
  Probabilistic estimation threshold for the gamma function.
- Default: 0.01.
  step : int, optional
- number of initial estimates for sigma to try. Default: 100.
+ number of initial estimates for sigma to try.
  itermax : int, optional
  Maximum number of iterations to execute if convergence
- is not reached. Default: 100.
+ is not reached.
  eps : float, optional
  Tolerance for the convergence criterion. Convergence is
  reached if two subsequent estimates are smaller than eps.
- Default: 1e-5.
  return_mask : bool, optional
  If True, return a mask identifying all the pure noise voxel
- that were found. Default: False.
+ that were found.
  initial_estimation : float, optional
  Upper bound for the initial estimation of sigma. default : None,
@@ -292,11 +290,11 @@ def estimate_sigma(arr, *, disable_background_masking=False, N=0):
  arr : 3D or 4D ndarray
  The array to be estimated
- disable_background_masking : bool, default False
+ disable_background_masking : bool, optional
  If True, uses all voxels for the estimation, otherwise, only non-zeros
  voxels are used. Useful if the background is masked by the scanner.
- N : int, default 0
+ N : int, optional
  Number of coils of the receiver array. Use N = 1 in case of a SENSE
  reconstruction (Philips scanners) or the number of coils for a GRAPPA
  reconstruction (Siemens and GE). Use 0 to disable the correction factor,
10 changes: 5 additions & 5 deletions dipy/io/image.py
@@ -16,7 +16,7 @@ def load_nifti_data(fname, *, as_ndarray=True):
  as_ndarray: bool, optional
  convert nibabel ArrayProxy to a numpy.ndarray.
  If you want to save memory and delay this casting, just turn this
- option to False (default: True)
+ option to False.
  Returns
  -------
@@ -48,18 +48,18 @@ def load_nifti(
  Full path to a nifti file.
  return_img : bool, optional
- Whether to return the nibabel nifti img object. Default: False
+ Whether to return the nibabel nifti img object.
  return_voxsize: bool, optional
- Whether to return the nifti header zooms. Default: False
+ Whether to return the nifti header zooms.
  return_coords : bool, optional
- Whether to return the nifti header aff2axcodes. Default: False
+ Whether to return the nifti header aff2axcodes.
  as_ndarray: bool, optional
  convert nibabel ArrayProxy to a numpy.ndarray.
  If you want to save memory and delay this casting, just turn this
- option to False (default: True)
+ option to False.
  Returns
  -------
1 change: 0 additions & 1 deletion dipy/nn/deepn4.py
@@ -160,7 +160,6 @@ def __init__(self, *, verbose=False):
  ----------
  verbose : bool, optional
  Whether to show information about the processing.
- Default: False
  """

  if not have_tf:
1 change: 0 additions & 1 deletion dipy/nn/evac.py
@@ -317,7 +317,6 @@ def __init__(self, *, verbose=False):
  ----------
  verbose : bool, optional
  Whether to show information about the processing.
- Default: False
  """

  if not have_tf:
3 changes: 1 addition & 2 deletions dipy/nn/histo_resdnn.py
@@ -74,12 +74,11 @@ def __init__(self, *, sh_order_max=8, basis_type="tournier07", verbose=False):
  Maximum SH order (l) in the SH fit. For ``sh_order_max``, there
  will be
  ``(sh_order_max + 1) * (sh_order_max + 2) / 2`` SH coefficients
- for a symmetric basis. Default: 8
+ for a symmetric basis.
  basis_type : {'tournier07', 'descoteaux07'}, optional
  ``tournier07`` (default) or ``descoteaux07``.
  verbose : bool, optional
  Whether to show information about the processing.
- Default: False
  References
  ----------
1 change: 0 additions & 1 deletion dipy/nn/synb0.py
Original file line number Diff line number Diff line change
Expand Up @@ -159,7 +159,6 @@ def __init__(self, *, verbose=False):
----------
verbose : bool, optional
Whether to show information about the processing.
Default: False
"""

if not have_tf:
Expand Down
25 changes: 12 additions & 13 deletions dipy/reconst/csdeconv.py
@@ -224,9 +224,8 @@ def __init__(
  will be used as deconvolution kernel :footcite:p:`Tournier2007`.
  reg_sphere : Sphere, optional
  sphere used to build the regularization B matrix.
- Default: 'symmetric362'.
  sh_order_max : int, optional
- maximal spherical harmonics order (l). Default: 8
+ maximal spherical harmonics order (l).
  lambda_ : float, optional
  weight given to the constrained-positivity regularization part of
  the deconvolution equation (see :footcite:p:`Tournier2007`).
@@ -235,7 +234,7 @@
  fODF is assumed to be zero. Ideally, tau should be set to
  zero. However, to improve the stability of the algorithm, tau is
  set to tau*100 % of the mean fODF amplitude (here, 10% by default)
- (see :footcite:p:`Tournier2007`). Default: 0.1.
+ (see :footcite:p:`Tournier2007`).
  convergence : int, optional
  Maximum number of iterations to allow the deconvolution to
  converge.
@@ -757,7 +756,7 @@ def odf_deconv(odf_sh, R, B_reg, lambda_=1.0, tau=0.1, r2_term=False):
  ``(sh_order_max + 1)(sh_order_max + 2)/2``)
  SH basis matrix used for deconvolution
  lambda_ : float, optional
- lambda parameter in minimization equation (default 1.0)
+ lambda parameter in minimization equation
  tau : float, optional
  threshold (``tau *max(fODF)``) controlling the amplitude below
  which the corresponding fODF is assumed to be zero.
@@ -880,7 +879,7 @@ def odf_sh_to_sharp(
  sh_order_max : int, optional
  maximal SH order ($l$) of the SH representation
  lambda_ : float, optional
- lambda parameter (see odfdeconv) (default 1.0)
+ lambda parameter (see odfdeconv)
  tau : float, optional
  tau parameter in the L matrix construction (see odfdeconv)
  r2_term : bool, optional
@@ -1144,30 +1143,30 @@ def recursive_response(
  shape `data.shape[0:3]` and dtype=bool. Default: use the entire data
  array.
  sh_order_max : int, optional
- maximal spherical harmonics order (l). Default: 8
+ maximal spherical harmonics order (l).
  peak_thr : float, optional
  peak threshold, how large the second peak can be relative to the first
  peak in order to call it a single fiber population
- :footcite:p:`Tax2014`. Default: 0.01
+ :footcite:p:`Tax2014`.
  init_fa : float, optional
- FA of the initial 'fat' response function (tensor). Default: 0.08
+ FA of the initial 'fat' response function (tensor).
  init_trace : float, optional
- trace of the initial 'fat' response function (tensor). Default: 0.0021
+ trace of the initial 'fat' response function (tensor).
  iter : int, optional
- maximum number of iterations for calibration. Default: 8.
+ maximum number of iterations for calibration.
  convergence : float, optional
  convergence criterion, maximum relative change of SH
- coefficients. Default: 0.001.
+ coefficients.
  parallel : bool, optional
  Whether to use parallelization in peak-finding during the calibration
- procedure. Default: True
+ procedure.
  num_processes : int, optional
  If `parallel` is True, the number of subprocesses to use
  (default multiprocessing.cpu_count()). If < 0 the maximal number of
  cores minus ``num_processes + 1`` is used (enter -1 to use as many
  cores as possible). 0 raises an error.
  sphere : Sphere, optional.
- The sphere used for peak finding. Default: default_sphere.
+ The sphere used for peak finding.
  Returns
  -------
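After this change the signature is the single source of truth for each default. A sketch of the resulting numpydoc style, using a hypothetical function (not the actual DIPY source) modeled on the hunks above:

```python
import inspect

def recursive_response_like(sh_order_max=8, parallel=True):
    """Illustrative post-commit docstring: parameter text marks arguments
    as optional but no longer repeats their default values.

    Parameters
    ----------
    sh_order_max : int, optional
        maximal spherical harmonics order (l).
    parallel : bool, optional
        Whether to use parallelization in peak-finding during the
        calibration procedure.
    """

# Readers and doc tools recover the defaults from the signature instead,
# so there is no duplicate 'Default:' text in the docstring to go stale.
defaults = {
    name: param.default
    for name, param in inspect.signature(recursive_response_like).parameters.items()
}
```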