Unify 'defaults' and use of quotations across docstrings (#2134)
* Unify 'defaults'

* sorters wip and fix imports

* Update src/spikeinterface/core/waveform_tools.py
* Update src/spikeinterface/core/waveform_tools.py
* Update src/spikeinterface/core/waveform_tools.py
* Update src/spikeinterface/core/waveform_tools.py
* Update src/spikeinterface/widgets/amplitudes.py
* Update src/spikeinterface/widgets/unit_waveforms.py
* Update src/spikeinterface/extractors/mdaextractors.py
* Update src/spikeinterface/extractors/mdaextractors.py
* Update src/spikeinterface/widgets/traces.py
* Update src/spikeinterface/widgets/template_similarity.py
* Update src/spikeinterface/widgets/traces.py
* Update src/spikeinterface/extractors/mdaextractors.py
* Update src/spikeinterface/extractors/neoextractors/tdt.py
* Update src/spikeinterface/extractors/neoextractors/tdt.py
* Update src/spikeinterface/extractors/neoextractors/spikeglx.py
* Update src/spikeinterface/preprocessing/remove_artifacts.py
* Update src/spikeinterface/extractors/neoextractors/spikeglx.py
* Update src/spikeinterface/extractors/neoextractors/spikegadgets.py
* Update src/spikeinterface/extractors/neoextractors/spikegadgets.py
* Update src/spikeinterface/preprocessing/filter.py
* Update src/spikeinterface/postprocessing/correlograms.py
* Update src/spikeinterface/preprocessing/remove_artifacts.py
* Update src/spikeinterface/postprocessing/isi.py
* Update src/spikeinterface/preprocessing/average_across_direction.py
* Update src/spikeinterface/preprocessing/directional_derivative.py
* Update src/spikeinterface/preprocessing/directional_derivative.py
* Update src/spikeinterface/preprocessing/directional_derivative.py
* Update src/spikeinterface/preprocessing/depth_order.py
* sortingcomponents and more fixes
* More quotation handling (core)
* Final round of cleaning
* Add unsaved files
* Import hdbscan when needed and add ros3 to pytest.markers
* Update src/spikeinterface/core/baserecordingsnippets.py
* Update src/spikeinterface/core/baserecordingsnippets.py
* Update src/spikeinterface/core/baserecordingsnippets.py
* Update src/spikeinterface/core/job_tools.py
* Update src/spikeinterface/core/job_tools.py
* Update src/spikeinterface/curation/curationsorting.py
* Update src/spikeinterface/curation/sortingview_curation.py
* Update src/spikeinterface/core/core_tools.py

---------

Co-authored-by: Zach McKenzie <[email protected]>
alejoe91 and zm711 authored Oct 31, 2023
1 parent fd84c8f commit 886530a
Showing 155 changed files with 2,075 additions and 2,109 deletions.
1 change: 1 addition & 0 deletions pyproject.toml
@@ -185,6 +185,7 @@ markers = [
     "widgets",
     "sortingcomponents",
     "streaming_extractors: extractors that require streaming such as ross and fsspec",
+    "ros3_test"
 ]
 filterwarnings =[
     'ignore:.*distutils Version classes are deprecated.*:DeprecationWarning',
2 changes: 1 addition & 1 deletion src/spikeinterface/comparison/basecomparison.py
@@ -223,7 +223,7 @@ class BasePairComparison(BaseComparison):

     It handles the matching procedurs.
     Agreement scores must be computed in inherited classes by overriding the
-    '_do_agreement(self)' function
+    "_do_agreement(self)" function
     """

     def __init__(self, object1, object2, name1, name2, match_score=0.5, chance_score=0.1, verbose=False):
12 changes: 6 additions & 6 deletions src/spikeinterface/comparison/comparisontools.py
@@ -570,7 +570,7 @@ def do_confusion_matrix(event_counts1, event_counts2, match_12, match_event_coun
 def do_count_score(event_counts1, event_counts2, match_12, match_event_count):
     """
     For each ground truth units count how many:
-    'tp', 'fn', 'cl', 'fp', 'num_gt', 'num_tested', 'tested_id'
+    "tp", "fn", "cl", "fp", "num_gt", "num_tested", "tested_id"

     Parameters
     ----------
@@ -634,8 +634,8 @@ def compute_performance(count_score):
     Note :
       * we don't have TN because it do not make sens here.
-      * 'accuracy' = 'tp_rate' because TN=0
-      * 'recall' = 'sensitivity'
+      * "accuracy" = "tp_rate" because TN=0
+      * "recall" = "sensitivity"
     """
     import pandas as pd

@@ -674,7 +674,7 @@ def make_matching_events(times1, times2, delta):

     Returns
     -------
-    matching_event: numpy array dtype = ['index1', 'index2', 'delta']
+    matching_event: numpy array dtype = ["index1", "index2", "delta"]
         1d of collision
     """
     times_concat = np.concatenate((times1, times2))
@@ -731,8 +731,8 @@ def make_collision_events(sorting, delta):
     -------
     collision_events: numpy array
         dtype = [('index1', 'int64'), ('unit_id1', 'int64'),
-                    ('index2', 'int64'), ('unit_id2', 'int64'),
-                    ('delta', 'int64')]
+                 ('index2', 'int64'), ('unit_id2', 'int64'),
+                 ('delta', 'int64')]
         1d of all collision
     """
     unit_ids = np.array(sorting.get_unit_ids())
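The comparisontools docstrings in this diff describe per-unit count scores ("tp", "fn", "cl", "fp") and derived performance rates, noting that with TN = 0 "accuracy" equals "tp_rate" and "recall" equals "sensitivity". A minimal, self-contained sketch of those relationships; the function name and exact formulas here are illustrative, not SpikeInterface's implementation:

```python
# Illustrative sketch: deriving performance rates from per-unit count scores.
# With TN = 0 (as the docstring notes), "accuracy" coincides with "tp_rate"
# and "recall" with "sensitivity".

def performance_from_counts(tp, fn, cl, fp):
    """Hypothetical helper: rates from true-positive, false-negative,
    misclassified ("cl") and false-positive event counts."""
    num_gt = tp + fn + cl                      # all ground-truth events
    total = tp + fn + cl + fp                  # everything counted (TN = 0)
    accuracy = tp / total if total else 0.0    # == tp_rate, since TN = 0
    recall = tp / num_gt if num_gt else 0.0    # == sensitivity
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    miss_rate = fn / num_gt if num_gt else 0.0
    return {"accuracy": accuracy, "recall": recall,
            "precision": precision, "miss_rate": miss_rate}

perf = performance_from_counts(tp=90, fn=5, cl=5, fp=10)
print(perf["accuracy"])  # 90 / 110 ≈ 0.818
```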
46 changes: 23 additions & 23 deletions src/spikeinterface/comparison/multicomparisons.py
@@ -25,22 +25,22 @@ class MultiSortingComparison(BaseMultiComparison, MixinSpikeTrainComparison):
     ----------
     sorting_list: list
         List of sorting extractor objects to be compared
-    name_list: list
-        List of spike sorter names. If not given, sorters are named as 'sorter0', 'sorter1', 'sorter2', etc.
-    delta_time: float
-        Number of ms to consider coincident spikes (default 0.4 ms)
-    match_score: float
-        Minimum agreement score to match units (default 0.5)
-    chance_score: float
-        Minimum agreement score to for a possible match (default 0.1)
-    n_jobs: int
+    name_list: list, default: None
+        List of spike sorter names. If not given, sorters are named as "sorter0", "sorter1", "sorter2", etc.
+    delta_time: float, default: 0.4
+        Number of ms to consider coincident spikes
+    match_score: float, default: 0.5
+        Minimum agreement score to match units
+    chance_score: float, default: 0.1
+        Minimum agreement score to for a possible match
+    n_jobs: int, default: -1
         Number of cores to use in parallel. Uses all available if -1
-    spiketrain_mode: str
+    spiketrain_mode: "union" | "intersection", default: "union"
         Mode to extract agreement spike trains:
-        - 'union': spike trains are the union between the spike trains of the best matching two sorters
-        - 'intersection': spike trains are the intersection between the spike trains of the
+        - "union": spike trains are the union between the spike trains of the best matching two sorters
+        - "intersection": spike trains are the intersection between the spike trains of the
           best matching two sorters
-    verbose: bool
+    verbose: bool, default: False
         if True, output is verbose

     Returns
@@ -156,15 +156,15 @@ def _do_agreement_matrix(self, minimum_agreement=1):

     def get_agreement_sorting(self, minimum_agreement_count=1, minimum_agreement_count_only=False):
         """
-        Returns AgreementSortingExtractor with units with a 'minimum_matching' agreement.
+        Returns AgreementSortingExtractor with units with a "minimum_matching" agreement.

         Parameters
         ----------
         minimum_agreement_count: int
             Minimum number of matches among sorters to include a unit.
         minimum_agreement_count_only: bool
-            If True, only units with agreement == 'minimum_matching' are included.
-            If False, units with an agreement >= 'minimum_matching' are included
+            If True, only units with agreement == "minimum_matching" are included.
+            If False, units with an agreement >= "minimum_matching" are included

         Returns
         -------
@@ -309,13 +309,13 @@ class MultiTemplateComparison(BaseMultiComparison, MixinTemplateComparison):
     ----------
     waveform_list: list
         List of waveform extractor objects to be compared
-    name_list: list
-        List of session names. If not given, sorters are named as 'sess0', 'sess1', 'sess2', etc.
-    match_score: float
-        Minimum agreement score to match units (default 0.5)
-    chance_score: float
-        Minimum agreement score to for a possible match (default 0.1)
-    verbose: bool
+    name_list: list, default: None
+        List of session names. If not given, sorters are named as "sess0", "sess1", "sess2", etc.
+    match_score: float, default: 0.8
+        Minimum agreement score to match units
+    chance_score: float, default: 0.3
+        Minimum agreement score to for a possible match
+    verbose: bool, default: False
         if True, output is verbose

     Returns
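The MultiSortingComparison docstring in this diff distinguishes "union" and "intersection" modes for building agreement spike trains from the two best-matching sorters. A hedged NumPy sketch of the idea, assuming spike times are given as sample indices and a coincidence window `delta` in samples; this is illustrative only, not the library's actual code:

```python
import numpy as np

def agreement_spiketrain(st1, st2, delta=4, mode="union"):
    """Illustrative: combine two matched spike trains (sample indices).
    "intersection" keeps spikes of st1 with a coincident partner in st2
    within +/- delta samples; "union" merges both trains, collapsing each
    coincident pair to a single spike."""
    st1, st2 = np.asarray(st1), np.asarray(st2)
    # distance from each spike in st1 to its nearest spike in st2
    near1 = np.array([np.min(np.abs(st2 - t)) if st2.size else np.inf for t in st1])
    if mode == "intersection":
        return st1[near1 <= delta]
    # union: all of st1, plus st2 spikes with no coincident partner in st1
    near2 = np.array([np.min(np.abs(st1 - t)) if st1.size else np.inf for t in st2])
    return np.sort(np.concatenate([st1, st2[near2 > delta]]))

print(agreement_spiketrain([10, 50, 90], [12, 200], delta=4, mode="intersection"))  # [10]
```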
112 changes: 56 additions & 56 deletions src/spikeinterface/comparison/paircomparisons.py
@@ -111,19 +111,19 @@ class SymmetricSortingComparison(BasePairSorterComparison):
         The first sorting for the comparison
     sorting2: SortingExtractor
         The second sorting for the comparison
-    sorting1_name: str
+    sorting1_name: str, default: None
         The name of sorter 1
-    sorting2_name: : str
+    sorting2_name: : str, default: None
         The name of sorter 2
-    delta_time: float
-        Number of ms to consider coincident spikes (default 0.4 ms)
-    match_score: float
-        Minimum agreement score to match units (default 0.5)
-    chance_score: float
-        Minimum agreement score to for a possible match (default 0.1)
-    n_jobs: int
+    delta_time: float, default: 0.4
+        Number of ms to consider coincident spikes
+    match_score: float, default: 0.5
+        Minimum agreement score to match units
+    chance_score: float, default: 0.1
+        Minimum agreement score to for a possible match
+    n_jobs: int, default: -1
         Number of cores to use in parallel. Uses all available if -1
-    verbose: bool
+    verbose: bool, default: False
         If True, output is verbose

     Returns
@@ -139,7 +139,6 @@ def __init__(
         sorting1_name=None,
         sorting2_name=None,
         delta_time=0.4,
-        sampling_frequency=None,
         match_score=0.5,
         chance_score=0.1,
         n_jobs=-1,
@@ -214,34 +213,35 @@ class GroundTruthComparison(BasePairSorterComparison):
         The first sorting for the comparison
     tested_sorting: SortingExtractor
         The second sorting for the comparison
-    gt_name: str
+    gt_name: str, default: None
         The name of sorter 1
-    tested_name: : str
+    tested_name: : str, default: None
         The name of sorter 2
-    delta_time: float
-        Number of ms to consider coincident spikes (default 0.4 ms) match_score: float
-        Minimum agreement score to match units (default 0.5)
-    chance_score: float
-        Minimum agreement score to for a possible match (default 0.1)
-    redundant_score: float
-        Agreement score above which units are redundant (default 0.2)
-    overmerged_score: float
-        Agreement score above which units can be overmerged (default 0.2)
-    well_detected_score: float
-        Agreement score above which units are well detected (default 0.8)
-    exhaustive_gt: bool (default True)
+    delta_time: float, default: 0.4
+        Number of ms to consider coincident spikes
+    match_score: float, default: 0.5
+        Minimum agreement score to match units
+    chance_score: float, default: 0.1
+        Minimum agreement score to for a possible match
+    redundant_score: float, default: 0.2
+        Agreement score above which units are redundant
+    overmerged_score: float, default: 0.2
+        Agreement score above which units can be overmerged
+    well_detected_score: float, default: 0.8
+        Agreement score above which units are well detected
+    exhaustive_gt: bool, default: False
         Tell if the ground true is "exhaustive" or not. In other world if the
         GT have all possible units. It allows more performance measurement.
         For instance, MEArec simulated dataset have exhaustive_gt=True
-    match_mode: 'hungarian', or 'best'
-        What is match used for counting : 'hungarian' or 'best match'.
-    n_jobs: int
+    match_mode: "hungarian" | "best", default: "hungarian"
+        The method to match units
+    n_jobs: int, default: -1
         Number of cores to use in parallel. Uses all available if -1
-    compute_labels: bool
-        If True, labels are computed at instantiation (default False)
-    compute_misclassifications: bool
-        If True, misclassifications are computed at instantiation (default False)
-    verbose: bool
+    compute_labels: bool, default: False
+        If True, labels are computed at instantiation
+    compute_misclassifications: bool, default: False
+        If True, misclassifications are computed at instantiation
+    verbose: bool, default: False
         If True, output is verbose

     Returns
@@ -379,21 +379,21 @@ def _do_score_labels(self):
     def get_performance(self, method="by_unit", output="pandas"):
         """
         Get performance rate with several method:
-          * 'raw_count' : just render the raw count table
-          * 'by_unit' : render perf as rate unit by unit of the GT
-          * 'pooled_with_average' : compute rate unit by unit and average
+          * "raw_count" : just render the raw count table
+          * "by_unit" : render perf as rate unit by unit of the GT
+          * "pooled_with_average" : compute rate unit by unit and average

         Parameters
         ----------
-        method: str
-            'by_unit', or 'pooled_with_average'
-        output: str
-            'pandas' or 'dict'
+        method: "by_unit" | "pooled_with_average", default: "by_unit"
+            The method to compute performance
+        output: "pandas" | "dict", default: "pandas"
+            The output format

         Returns
         -------
         perf: pandas dataframe/series (or dict)
-            dataframe/series (based on 'output') with performance entries
+            dataframe/series (based on "output") with performance entries
         """
         import pandas as pd

@@ -471,7 +471,7 @@ def get_well_detected_units(self, well_detected_score=None):

         Parameters
         ----------
-        well_detected_score: float (default 0.8)
+        well_detected_score: float, default: None
             The agreement score above which tested units
             are counted as "well detected".
         """
@@ -507,7 +507,7 @@ def get_false_positive_units(self, redundant_score=None):

         Parameters
         ----------
-        redundant_score: float (default 0.2)
+        redundant_score: float, default: None
             The agreement score below which tested units
             are counted as "false positive"" (and not "redundant").
         """
@@ -547,7 +547,7 @@ def get_redundant_units(self, redundant_score=None):

         Parameters
         ----------
-        redundant_score=None: float (default 0.2)
+        redundant_score=None: float, default: None
             The agreement score above which tested units
             are counted as "redundant" (and not "false positive" ).
         """
@@ -582,8 +582,8 @@ def get_overmerged_units(self, overmerged_score=None):

         Parameters
         ----------
-        overmerged_score: float (default 0.4)
-            Tested units with 2 or more agreement scores above 'overmerged_score'
+        overmerged_score: float, default: None
+            Tested units with 2 or more agreement scores above "overmerged_score"
             are counted as "overmerged".
         """
         assert self.exhaustive_gt, "overmerged_units list is valid only if exhaustive_gt=True"
@@ -693,16 +693,16 @@ class TemplateComparison(BasePairComparison, MixinTemplateComparison):
         The first waveform extractor to get templates to compare
     we2 : WaveformExtractor
         The second waveform extractor to get templates to compare
-    unit_ids1 : list, optional
-        List of units from we1 to compare, by default None
-    unit_ids2 : list, optional
-        List of units from we2 to compare, by default None
-    similarity_method : str, optional
-        Method for the similaroty matrix, by default "cosine_similarity"
-    sparsity_dict : dict, optional
-        Dictionary for sparsity, by default None
-    verbose : bool, optional
-        If True, output is verbose, by default False
+    unit_ids1 : list, default: None
+        List of units from we1 to compare
+    unit_ids2 : list, default: None
+        List of units from we2 to compare
+    similarity_method : str, default: "cosine_similarity"
+        Method for the similaroty matrix
+    sparsity_dict : dict, default: None
+        Dictionary for sparsity
+    verbose : bool, default: False
+        If True, output is verbose

     Returns
     -------
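The pair-comparison docstrings in this diff repeatedly apply match_score / chance_score thresholds to an agreement score, with a match_mode of "hungarian" or "best". A small illustrative sketch, assuming the common definition agreement = matches / (spikes1 + spikes2 - matches) and a greedy "best"-style matching; SpikeInterface's actual matching (including its Hungarian mode) may differ:

```python
import numpy as np

def agreement_score(num_matches, num1, num2):
    """Agreement between two units: matched events over the union of events
    (illustrative definition, not necessarily the library's exact formula)."""
    denom = num1 + num2 - num_matches
    return num_matches / denom if denom else 0.0

def best_match(scores, match_score=0.5):
    """Greedy "best"-mode matching sketch: each row unit takes its
    highest-scoring column if the agreement exceeds match_score;
    -1 marks an unmatched unit."""
    scores = np.asarray(scores, dtype=float)
    out = np.full(scores.shape[0], -1)
    for i, row in enumerate(scores):
        j = int(np.argmax(row))
        if row[j] > match_score:
            out[i] = j
    return out

scores = [[0.9, 0.1], [0.2, 0.3]]
print(best_match(scores))  # unit 0 matches column 0; unit 1 stays unmatched (-1)
```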