
Commit

Update
alejoe91 committed Sep 19, 2023
2 parents 16cf79e + d68f676 commit bb79637
Showing 41 changed files with 192 additions and 131 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/installation-tips-test.yml
@@ -30,4 +30,4 @@ jobs:
- name: Test Conda Environment Creation
uses: conda-incubator/[email protected]
with:
environment-file: ./installations_tips/full_spikeinterface_environment_${{ matrix.label }}.yml
environment-file: ./installation_tips/full_spikeinterface_environment_${{ matrix.label }}.yml
11 changes: 6 additions & 5 deletions doc/api.rst
@@ -19,6 +19,8 @@ spikeinterface.core
.. autofunction:: extract_waveforms
.. autofunction:: load_waveforms
.. autofunction:: compute_sparsity
.. autoclass:: ChannelSparsity
:members:
.. autoclass:: BinaryRecordingExtractor
.. autoclass:: ZarrRecordingExtractor
.. autoclass:: BinaryFolderRecording
@@ -48,17 +50,15 @@ spikeinterface.core
.. autofunction:: get_template_extremum_channel
.. autofunction:: get_template_extremum_channel_peak_shift
.. autofunction:: get_template_extremum_amplitude

..
.. autofunction:: read_binary
.. autofunction:: read_zarr
.. autofunction:: append_recordings
.. autofunction:: concatenate_recordings
.. autofunction:: split_recording
.. autofunction:: select_segment_recording
.. autofunction:: append_sortings
.. autofunction:: split_sorting
.. autofunction:: select_segment_sorting
.. autofunction:: read_binary
.. autofunction:: read_zarr

Low-level
~~~~~~~~~
@@ -67,7 +67,6 @@ Low-level
:noindex:

.. autoclass:: BaseWaveformExtractorExtension
.. autoclass:: ChannelSparsity
.. autoclass:: ChunkRecordingExecutor

spikeinterface.extractors
@@ -83,6 +82,7 @@ NEO-based
.. autofunction:: read_alphaomega_event
.. autofunction:: read_axona
.. autofunction:: read_biocam
.. autofunction:: read_binary
.. autofunction:: read_blackrock
.. autofunction:: read_ced
.. autofunction:: read_intan
@@ -104,6 +104,7 @@ NEO-based
.. autofunction:: read_spikegadgets
.. autofunction:: read_spikeglx
.. autofunction:: read_tdt
.. autofunction:: read_zarr


Non-NEO-based
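The api.rst hunks above move ChannelSparsity out of the low-level section and document it next to compute_sparsity. A minimal sketch of how the two fit together, assuming a recording/sorting pair is already loaded (the folder name and radius value are illustrative, not from this commit):

    import spikeinterface.core as sc

    # waveforms are extracted first, then per-unit sparsity is computed from them
    we = sc.extract_waveforms(recording, sorting, folder="waveforms")
    sparsity = sc.compute_sparsity(we, method="radius", radius_um=50.0)
    print(sparsity.unit_id_to_channel_ids)  # dict: unit id -> channel ids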
6 changes: 3 additions & 3 deletions doc/development/development.rst
@@ -14,15 +14,15 @@ There are various ways to contribute to SpikeInterface as a user or developer. S
* Writing unit tests to expand code coverage and use case scenarios.
* Reporting bugs and issues.

We use a forking workflow <https://www.atlassian.com/git/tutorials/comparing-workflows/forking-workflow>_ to manage contributions. Here's a summary of the steps involved, with more details available in the provided link:
We use a forking workflow `<https://www.atlassian.com/git/tutorials/comparing-workflows/forking-workflow>`_ to manage contributions. Here's a summary of the steps involved, with more details available in the provided link:

* Fork the SpikeInterface repository.
* Create a new branch (e.g., :code:`git switch -c my-contribution`).
* Modify the code, commit, and push changes to your fork.
* Open a pull request from the "Pull Requests" tab of your fork to :code:`spikeinterface/main`.
* By following this process, we can review the code and even make changes as necessary.

While we appreciate all the contributions please be mindful of the cost of reviewing pull requests <https://rgommers.github.io/2019/06/the-cost-of-an-open-source-contribution/>_ .
While we appreciate all the contributions, please be mindful of the cost of reviewing pull requests `<https://rgommers.github.io/2019/06/the-cost-of-an-open-source-contribution/>`_ .


How to run tests locally
@@ -201,7 +201,7 @@ Implement a new extractor
SpikeInterface already supports over 30 file formats, but the acquisition system you use might not be among the
supported formats list (***ref***). Most of the extractors rely on the `NEO <https://github.com/NeuralEnsemble/python-neo>`_
package to read information from files.
Therefore, to implement a new extractor to handle the unsupported format, we recommend make a new `neo.rawio `_ class.
Therefore, to implement a new extractor to handle the unsupported format, we recommend making a new :code:`neo.rawio.BaseRawIO` class (see `example <https://github.com/NeuralEnsemble/python-neo/blob/master/neo/rawio/examplerawio.py#L44>`_).
Once that is done, the new class can be easily wrapped into SpikeInterface as an extension of the
:py:class:`~spikeinterface.extractors.neoextractors.neobaseextractors.NeoBaseRecordingExtractor`
(for :py:class:`~spikeinterface.core.BaseRecording` objects) or
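A rough sketch of the wrapping step described above, assuming a hypothetical MyFormatRawIO class has already been implemented in neo.rawio (all names below are illustrative assumptions, not part of this commit):

    from spikeinterface.extractors.neoextractors.neobaseextractors import (
        NeoBaseRecordingExtractor,
    )

    class MyFormatRecordingExtractor(NeoBaseRecordingExtractor):
        """Hypothetical wrapper around a new neo.rawio class."""

        mode = "file"
        NeoRawIOClass = "MyFormatRawIO"  # the neo.rawio class written first

        def __init__(self, file_path, stream_id=None):
            neo_kwargs = {"filename": str(file_path)}
            NeoBaseRecordingExtractor.__init__(self, stream_id=stream_id, **neo_kwargs)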
2 changes: 1 addition & 1 deletion doc/install_sorters.rst
@@ -117,7 +117,7 @@ Kilosort2.5

git clone https://github.com/MouseLand/Kilosort
# provide installation path by setting the KILOSORT2_5_PATH environment variable
# or using Kilosort2_5Sorter.set_kilosort2_path()
# or using Kilosort2_5Sorter.set_kilosort2_5_path()

* See also for Matlab/CUDA: https://www.mathworks.com/help/parallel-computing/gpu-support-by-release.html

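The corrected setter above is called once after cloning Kilosort, for example (the path is illustrative):

    from spikeinterface.sorters import Kilosort2_5Sorter

    # point SpikeInterface at the local Kilosort 2.5 checkout
    Kilosort2_5Sorter.set_kilosort2_5_path("/path/to/Kilosort")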
6 changes: 3 additions & 3 deletions doc/modules/sorters.rst
@@ -239,7 +239,7 @@ There are three options:
1. **released PyPi version**: if you installed :code:`spikeinterface` with :code:`pip install spikeinterface`,
the latest released version will be installed in the container.

2. **development :code:`main` version**: if you installed :code:`spikeinterface` from source from the cloned repo
2. **development** :code:`main` **version**: if you installed :code:`spikeinterface` from source from the cloned repo
(with :code:`pip install .`) or with :code:`pip install git+https://github.com/SpikeInterface/spikeinterface.git`,
the current development version from the :code:`main` branch will be installed in the container.

@@ -458,7 +458,7 @@ Here is the list of external sorters accessible using the run_sorter wrapper:
* **Kilosort** :code:`run_sorter('kilosort')`
* **Kilosort2** :code:`run_sorter('kilosort2')`
* **Kilosort2.5** :code:`run_sorter('kilosort2_5')`
* **Kilosort3** :code:`run_sorter('Kilosort3')`
* **Kilosort3** :code:`run_sorter('kilosort3')`
* **PyKilosort** :code:`run_sorter('pykilosort')`
* **Klusta** :code:`run_sorter('klusta')`
* **Mountainsort4** :code:`run_sorter('mountainsort4')`
@@ -474,7 +474,7 @@ Here is a list of internal sorters based on `spikeinterface.sortingcomponents`; they are totally
Here is a list of internal sorters based on `spikeinterface.sortingcomponents`; they are totally
experimental for now:

* **Spyking circus2** :code:`run_sorter('spykingcircus2')`
* **Spyking Circus2** :code:`run_sorter('spykingcircus2')`
* **Tridesclous2** :code:`run_sorter('tridesclous2')`

In 2023, we expect to add many more sorters to this list.
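As a quick sketch of the wrapper used in the lists above, with the lowercase sorter names as corrected (the recording object and output folder are assumptions):

    from spikeinterface.sorters import run_sorter

    sorting = run_sorter("kilosort3", recording, output_folder="kilosort3_output")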
4 changes: 2 additions & 2 deletions doc/modules/sortingcomponents.rst
@@ -223,7 +223,7 @@ Here is a short example that depends on the output of "Motion interpolation":
**Notes**:
* :code:`spatial_interpolation_method` "kriging" or "iwd" do not play a big role.
* :code:`border_mode` is a very important parameter. It controls how to deal with the border because motion causes units on the
* :code:`border_mode` is a very important parameter. It controls the handling of the border, because motion causes units on the
border to not be present throughout the entire recording. We highly recommend the :code:`border_mode='remove_channels'`
because this removes channels on the border that will be impacted by drift. Of course the larger the motion is
the more channels are removed.
@@ -278,7 +278,7 @@ At the moment, there are five methods implemented:
* 'naive': a very naive implementation used as a reference for benchmarks
* 'tridesclous': the algorithm for template matching implemented in Tridesclous
* 'circus': the algorithm for template matching implemented in SpyKING-Circus
* 'circus-omp': a updated algorithm similar to SpyKING-Circus but with OMP (orthogonal macthing
* 'circus-omp': an updated algorithm similar to SpyKING-Circus but with OMP (orthogonal matching
pursuit)
* 'wobble' : an algorithm loosely based on YASS that scales template amplitudes and shifts them in time
to match detected spikes
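A hedged sketch of invoking one of the template-matching engines listed above, assuming a WaveformExtractor `we` is available (the exact method_kwargs differ per method and are an assumption here):

    from spikeinterface.sortingcomponents.matching import find_spikes_from_templates

    # 'naive' is the reference implementation from the list above
    spikes = find_spikes_from_templates(
        recording, method="naive", method_kwargs={"waveform_extractor": we}
    )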
4 changes: 2 additions & 2 deletions src/spikeinterface/comparison/basecomparison.py
@@ -262,11 +262,11 @@ def get_ordered_agreement_scores(self):
indexes = np.arange(scores.shape[1])
order1 = []
for r in range(scores.shape[0]):
possible = indexes[~np.in1d(indexes, order1)]
possible = indexes[~np.isin(indexes, order1)]
if possible.size > 0:
ind = np.argmax(scores.iloc[r, possible].values)
order1.append(possible[ind])
remain = indexes[~np.in1d(indexes, order1)]
remain = indexes[~np.isin(indexes, order1)]
order1.extend(remain)
scores = scores.iloc[:, order1]

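The np.in1d -> np.isin swap above (here and in the files below) tracks NumPy's deprecation of np.in1d; for the 1-d arrays used in this code the two are interchangeable:

    import numpy as np

    indexes = np.arange(5)
    order1 = [0, 2]
    # select the indexes not yet placed in order1
    remain = indexes[~np.isin(indexes, order1)]  # array([1, 3, 4])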
2 changes: 1 addition & 1 deletion src/spikeinterface/comparison/comparisontools.py
@@ -538,7 +538,7 @@ def do_confusion_matrix(event_counts1, event_counts2, match_12, match_event_coun
matched_units2 = match_12[match_12 != -1].values

unmatched_units1 = match_12[match_12 == -1].index
unmatched_units2 = unit2_ids[~np.in1d(unit2_ids, matched_units2)]
unmatched_units2 = unit2_ids[~np.isin(unit2_ids, matched_units2)]

ordered_units1 = np.hstack([matched_units1, unmatched_units1])
ordered_units2 = np.hstack([matched_units2, unmatched_units2])
2 changes: 1 addition & 1 deletion src/spikeinterface/core/baserecording.py
@@ -592,7 +592,7 @@ def _channel_slice(self, channel_ids, renamed_channel_ids=None):
def _remove_channels(self, remove_channel_ids):
from .channelslice import ChannelSliceRecording

new_channel_ids = self.channel_ids[~np.in1d(self.channel_ids, remove_channel_ids)]
new_channel_ids = self.channel_ids[~np.isin(self.channel_ids, remove_channel_ids)]
sub_recording = ChannelSliceRecording(self, new_channel_ids)
return sub_recording

2 changes: 1 addition & 1 deletion src/spikeinterface/core/basesnippets.py
@@ -139,7 +139,7 @@ def _channel_slice(self, channel_ids, renamed_channel_ids=None):
def _remove_channels(self, remove_channel_ids):
from .channelslice import ChannelSliceSnippets

new_channel_ids = self.channel_ids[~np.in1d(self.channel_ids, remove_channel_ids)]
new_channel_ids = self.channel_ids[~np.isin(self.channel_ids, remove_channel_ids)]
sub_recording = ChannelSliceSnippets(self, new_channel_ids)
return sub_recording

5 changes: 2 additions & 3 deletions src/spikeinterface/core/basesorting.py
@@ -346,7 +346,7 @@ def remove_units(self, remove_unit_ids):
"""
from spikeinterface import UnitsSelectionSorting

new_unit_ids = self.unit_ids[~np.in1d(self.unit_ids, remove_unit_ids)]
new_unit_ids = self.unit_ids[~np.isin(self.unit_ids, remove_unit_ids)]
new_sorting = UnitsSelectionSorting(self, new_unit_ids)
return new_sorting

@@ -473,8 +473,7 @@ def to_spike_vector(self, concatenated=True, extremum_channel_inds=None, use_cac
if not concatenated:
spikes_ = []
for segment_index in range(self.get_num_segments()):
s0 = np.searchsorted(spikes["segment_index"], segment_index, side="left")
s1 = np.searchsorted(spikes["segment_index"], segment_index + 1, side="left")
s0, s1 = np.searchsorted(spikes["segment_index"], [segment_index, segment_index + 1], side="left")
spikes_.append(spikes[s0:s1])
spikes = spikes_

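The to_spike_vector change above folds two np.searchsorted calls into one: passing both boundaries as a list returns both insertion points in a single pass over the sorted column. A small worked case:

    import numpy as np

    segment_index = np.array([0, 0, 1, 1, 1, 2])  # sorted, as in a spike vector
    s0, s1 = np.searchsorted(segment_index, [1, 2], side="left")
    # s0 == 2, s1 == 5, so rows 2..4 belong to segment 1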
4 changes: 2 additions & 2 deletions src/spikeinterface/core/generate.py
@@ -166,7 +166,7 @@ def generate_sorting(
)

if empty_units is not None:
keep = ~np.in1d(labels, empty_units)
keep = ~np.isin(labels, empty_units)
times = times[keep]
labels = labels[keep]

@@ -219,7 +219,7 @@ def add_synchrony_to_sorting(sorting, sync_event_ratio=0.3, seed=None):
sample_index = spike["sample_index"]
if sample_index not in units_used_for_spike:
units_used_for_spike[sample_index] = np.array([spike["unit_index"]])
units_not_used = unit_ids[~np.in1d(unit_ids, units_used_for_spike[sample_index])]
units_not_used = unit_ids[~np.isin(unit_ids, units_used_for_spike[sample_index])]

if len(units_not_used) == 0:
continue
12 changes: 4 additions & 8 deletions src/spikeinterface/core/node_pipeline.py
@@ -111,8 +111,7 @@ def __init__(self, recording, peaks):
# precompute segment slice
self.segment_slices = []
for segment_index in range(recording.get_num_segments()):
i0 = np.searchsorted(peaks["segment_index"], segment_index)
i1 = np.searchsorted(peaks["segment_index"], segment_index + 1)
i0, i1 = np.searchsorted(peaks["segment_index"], [segment_index, segment_index + 1])
self.segment_slices.append(slice(i0, i1))

def get_trace_margin(self):
@@ -125,8 +124,7 @@ def compute(self, traces, start_frame, end_frame, segment_index, max_margin):
# get local peaks
sl = self.segment_slices[segment_index]
peaks_in_segment = self.peaks[sl]
i0 = np.searchsorted(peaks_in_segment["sample_index"], start_frame)
i1 = np.searchsorted(peaks_in_segment["sample_index"], end_frame)
i0, i1 = np.searchsorted(peaks_in_segment["sample_index"], [start_frame, end_frame])
local_peaks = peaks_in_segment[i0:i1]

# make sample index local to traces
@@ -183,8 +181,7 @@ def __init__(
# precompute segment slice
self.segment_slices = []
for segment_index in range(recording.get_num_segments()):
i0 = np.searchsorted(self.peaks["segment_index"], segment_index)
i1 = np.searchsorted(self.peaks["segment_index"], segment_index + 1)
i0, i1 = np.searchsorted(self.peaks["segment_index"], [segment_index, segment_index + 1])
self.segment_slices.append(slice(i0, i1))

def get_trace_margin(self):
@@ -197,8 +194,7 @@ def compute(self, traces, start_frame, end_frame, segment_index, max_margin):
# get local peaks
sl = self.segment_slices[segment_index]
peaks_in_segment = self.peaks[sl]
i0 = np.searchsorted(peaks_in_segment["sample_index"], start_frame)
i1 = np.searchsorted(peaks_in_segment["sample_index"], end_frame)
i0, i1 = np.searchsorted(peaks_in_segment["sample_index"], [start_frame, end_frame])
local_peaks = peaks_in_segment[i0:i1]

# make sample index local to traces
3 changes: 1 addition & 2 deletions src/spikeinterface/core/numpyextractors.py
@@ -338,8 +338,7 @@ def get_unit_spike_train(self, unit_id, start_frame, end_frame):
if self.spikes_in_seg is None:
# the slicing of segment is done only once the first time
# this fasten the constructor a lot
s0 = np.searchsorted(self.spikes["segment_index"], self.segment_index, side="left")
s1 = np.searchsorted(self.spikes["segment_index"], self.segment_index + 1, side="left")
s0, s1 = np.searchsorted(self.spikes["segment_index"], [self.segment_index, self.segment_index + 1])
self.spikes_in_seg = self.spikes[s0:s1]

unit_index = self.unit_ids.index(unit_id)
7 changes: 6 additions & 1 deletion src/spikeinterface/core/recording_tools.py
@@ -302,7 +302,7 @@ def get_chunk_with_margin(
return traces_chunk, left_margin, right_margin


def order_channels_by_depth(recording, channel_ids=None, dimensions=("x", "y")):
def order_channels_by_depth(recording, channel_ids=None, dimensions=("x", "y"), flip=False):
"""
Order channels by depth, by first ordering the x-axis, and then the y-axis.
@@ -316,6 +316,9 @@ def order_channels_by_depth(recording, channel_ids=None, dimensions=("x", "y")):
If str, it needs to be 'x', 'y', 'z'.
If tuple or list, it sorts the locations in two dimensions using lexsort.
This approach is recommended since there is less ambiguity, by default ('x', 'y')
flip: bool, default: False
If flip is False then the order is bottom first (starting from tip of the probe).
If flip is True then the order is top first.
Returns
-------
@@ -341,6 +344,8 @@ def order_channels_by_depth(recording, channel_ids=None, dimensions=("x", "y")):
assert dim < ndim, "Invalid dimensions!"
locations_to_sort += (locations[:, dim],)
order_f = np.lexsort(locations_to_sort)
if flip:
order_f = order_f[::-1]
order_r = np.argsort(order_f, kind="stable")

return order_f, order_r
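The new flip flag reverses the forward order before the inverse permutation is recomputed, so ordering bottom-first or top-first round-trips either way. Illustrative use, assuming a recording with probe locations attached:

    from spikeinterface.core import order_channels_by_depth

    order_f, order_r = order_channels_by_depth(recording, dimensions=("x", "y"))
    order_flip, _ = order_channels_by_depth(recording, dimensions=("x", "y"), flip=True)
    assert list(order_flip) == list(order_f[::-1])  # top-first == reversed bottom-first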
6 changes: 2 additions & 4 deletions src/spikeinterface/core/segmentutils.py
@@ -174,8 +174,7 @@ def get_traces(self, start_frame, end_frame, channel_indices):
# Return (0 * num_channels) array of correct dtype
return self.parent_segments[0].get_traces(0, 0, channel_indices)

i0 = np.searchsorted(self.cumsum_length, start_frame, side="right") - 1
i1 = np.searchsorted(self.cumsum_length, end_frame, side="right") - 1
i0, i1 = np.searchsorted(self.cumsum_length, [start_frame, end_frame], side="right") - 1

# several case:
# * come from one segment (i0 == i1)
@@ -469,8 +468,7 @@ def get_unit_spike_train(
if end_frame is None:
end_frame = self.get_num_samples()

i0 = np.searchsorted(self.cumsum_length, start_frame, side="right") - 1
i1 = np.searchsorted(self.cumsum_length, end_frame, side="right") - 1
i0, i1 = np.searchsorted(self.cumsum_length, [start_frame, end_frame], side="right") - 1

# several case:
# * come from one segment (i0 == i1)
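In these two hunks the single searchsorted call returns an array, so the trailing "- 1" broadcasts over both indices before unpacking. A small worked case:

    import numpy as np

    cumsum_length = np.array([0, 100, 250])  # per-segment start frames
    i0, i1 = np.searchsorted(cumsum_length, [120, 260], side="right") - 1
    # i0 == 1 (frame 120 is in segment 1), i1 == 2 (frame 260 is in segment 2)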
2 changes: 2 additions & 0 deletions src/spikeinterface/core/tests/test_recording_tools.py
@@ -138,11 +138,13 @@ def test_order_channels_by_depth():
order_1d, order_r1d = order_channels_by_depth(rec, dimensions="y")
order_2d, order_r2d = order_channels_by_depth(rec, dimensions=("x", "y"))
locations_rev = locations_copy[order_1d][order_r1d]
order_2d_fliped, order_r2d_fliped = order_channels_by_depth(rec, dimensions=("x", "y"), flip=True)

assert np.array_equal(locations[:, 1], locations_copy[order_1d][:, 1])
assert np.array_equal(locations_copy[order_1d][:, 1], locations_copy[order_2d][:, 1])
assert np.array_equal(locations, locations_copy[order_2d])
assert np.array_equal(locations_copy, locations_copy[order_2d][order_r2d])
assert np.array_equal(order_2d[::-1], order_2d_fliped)


if __name__ == "__main__":
Expand Down
2 changes: 1 addition & 1 deletion src/spikeinterface/core/tests/test_sparsity.py
@@ -34,7 +34,7 @@ def test_ChannelSparsity():

for key, v in sparsity.unit_id_to_channel_ids.items():
assert key in unit_ids
assert np.all(np.in1d(v, channel_ids))
assert np.all(np.isin(v, channel_ids))

for key, v in sparsity.unit_id_to_channel_indices.items():
assert key in unit_ids