Commit: Merge branch 'main' into deepinterp

alejoe91 authored Oct 17, 2023
2 parents 9d239ca + 4c76371 commit d4e0824

Showing 188 changed files with 7,374 additions and 4,269 deletions.
5 changes: 5 additions & 0 deletions .github/actions/build-test-environment/action.yml
@@ -37,6 +37,11 @@ runs:
- name: git-annex install
run: |
wget https://downloads.kitenet.net/git-annex/linux/current/git-annex-standalone-amd64.tar.gz
mkdir /home/runner/work/installation
mv git-annex-standalone-amd64.tar.gz /home/runner/work/installation/
workdir=$(pwd)
cd /home/runner/work/installation
tar xvzf git-annex-standalone-amd64.tar.gz
echo "$(pwd)/git-annex.linux" >> $GITHUB_PATH
cd $workdir
shell: bash
2 changes: 1 addition & 1 deletion .github/workflows/installation-tips-test.yml
@@ -30,4 +30,4 @@ jobs:
- name: Test Conda Environment Creation
uses: conda-incubator/[email protected]
with:
environment-file: ./installations_tips/full_spikeinterface_environment_${{ matrix.label }}.yml
environment-file: ./installation_tips/full_spikeinterface_environment_${{ matrix.label }}.yml
1 change: 1 addition & 0 deletions .github/workflows/test_containers_singularity_gpu.yml
@@ -46,5 +46,6 @@ jobs:
- name: Run test singularity containers with GPU
env:
REPO_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
SPIKEINTERFACE_DEV_PATH: ${{ github.workspace }}
run: |
pytest -vv --capture=tee-sys -rA src/spikeinterface/sorters/external/tests/test_singularity_containers_gpu.py
1 change: 1 addition & 0 deletions .gitignore
@@ -188,3 +188,4 @@ test_folder/

# Mac OS
.DS_Store
test_data.json
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -1,6 +1,6 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
rev: v4.5.0
hooks:
- id: check-yaml
- id: end-of-file-fixer
18 changes: 10 additions & 8 deletions README.md
@@ -59,15 +59,17 @@ With SpikeInterface, users can:
- post-process sorted datasets.
- compare and benchmark spike sorting outputs.
- compute quality metrics to validate and curate spike sorting outputs.
- visualize recordings and spike sorting outputs in several ways (matplotlib, sortingview, in jupyter)
- export report and export to phy
- offer a powerful Qt-based viewer in separate package [spikeinterface-gui](https://github.com/SpikeInterface/spikeinterface-gui)
- have some powerful sorting components to build your own sorter.
- visualize recordings and spike sorting outputs in several ways (matplotlib, sortingview, jupyter, ephyviewer)
- export a report and/or export to phy
- offer a powerful Qt-based viewer in a separate package [spikeinterface-gui](https://github.com/SpikeInterface/spikeinterface-gui)
- have powerful sorting components to build your own sorter.


## Documentation

Detailed documentation for spikeinterface can be found [here](https://spikeinterface.readthedocs.io/en/latest).
Detailed documentation of the latest PyPI release of SpikeInterface can be found [here](https://spikeinterface.readthedocs.io/en/0.98.2).

Detailed documentation of the development version of SpikeInterface can be found [here](https://spikeinterface.readthedocs.io/en/latest).

Several tutorials to get started can be found in [spiketutorials](https://github.com/SpikeInterface/spiketutorials).

@@ -77,9 +79,9 @@ and sorting components.
You can also have a look at the [spikeinterface-gui](https://github.com/SpikeInterface/spikeinterface-gui).


## How to install spikeinteface
## How to install spikeinterface

You can install the new `spikeinterface` version with pip:
You can install the latest version of `spikeinterface` with pip:

```bash
pip install spikeinterface[full]
@@ -94,7 +96,7 @@ To install all interactive widget backends, you can use:
```


To get the latest updates, you can install `spikeinterface` from sources:
To get the latest updates, you can install `spikeinterface` from source:

```bash
git clone https://github.com/SpikeInterface/spikeinterface.git
13 changes: 8 additions & 5 deletions doc/api.rst
@@ -19,6 +19,8 @@ spikeinterface.core
.. autofunction:: extract_waveforms
.. autofunction:: load_waveforms
.. autofunction:: compute_sparsity
.. autoclass:: ChannelSparsity
:members:
.. autoclass:: BinaryRecordingExtractor
.. autoclass:: ZarrRecordingExtractor
.. autoclass:: BinaryFolderRecording
@@ -48,17 +50,15 @@ spikeinterface.core
.. autofunction:: get_template_extremum_channel
.. autofunction:: get_template_extremum_channel_peak_shift
.. autofunction:: get_template_extremum_amplitude

..
.. autofunction:: read_binary
.. autofunction:: read_zarr
.. autofunction:: append_recordings
.. autofunction:: concatenate_recordings
.. autofunction:: split_recording
.. autofunction:: select_segment_recording
.. autofunction:: append_sortings
.. autofunction:: split_sorting
.. autofunction:: select_segment_sorting
.. autofunction:: read_binary
.. autofunction:: read_zarr

Low-level
~~~~~~~~~
@@ -67,7 +67,6 @@ Low-level
:noindex:

.. autoclass:: BaseWaveformExtractorExtension
.. autoclass:: ChannelSparsity
.. autoclass:: ChunkRecordingExecutor

spikeinterface.extractors
@@ -83,6 +82,7 @@ NEO-based
.. autofunction:: read_alphaomega_event
.. autofunction:: read_axona
.. autofunction:: read_biocam
.. autofunction:: read_binary
.. autofunction:: read_blackrock
.. autofunction:: read_ced
.. autofunction:: read_intan
@@ -104,6 +104,7 @@ NEO-based
.. autofunction:: read_spikegadgets
.. autofunction:: read_spikeglx
.. autofunction:: read_tdt
.. autofunction:: read_zarr


Non-NEO-based
@@ -216,8 +217,10 @@ spikeinterface.sorters
.. autofunction:: print_sorter_versions
.. autofunction:: get_sorter_description
.. autofunction:: run_sorter
.. autofunction:: run_sorter_jobs
.. autofunction:: run_sorters
.. autofunction:: run_sorter_by_property
.. autofunction:: read_sorter_folder

Low level
~~~~~~~~~
5 changes: 3 additions & 2 deletions doc/conf.py
@@ -118,14 +118,15 @@
'examples_dirs': ['../examples/modules_gallery'],
'gallery_dirs': ['modules_gallery', ], # path where to save gallery generated examples
'subsection_order': ExplicitOrder([
'../examples/modules_gallery/core/',
'../examples/modules_gallery/extractors/',
'../examples/modules_gallery/core',
'../examples/modules_gallery/extractors',
'../examples/modules_gallery/qualitymetrics',
'../examples/modules_gallery/comparison',
'../examples/modules_gallery/widgets',
]),
'within_subsection_order': FileNameSortKey,
'ignore_pattern': '/generate_',
'nested_sections': False,
}

intersphinx_mapping = {
6 changes: 3 additions & 3 deletions doc/development/development.rst
@@ -14,15 +14,15 @@ There are various ways to contribute to SpikeInterface as a user or developer. S
* Writing unit tests to expand code coverage and use case scenarios.
* Reporting bugs and issues.

We use a forking workflow <https://www.atlassian.com/git/tutorials/comparing-workflows/forking-workflow>_ to manage contributions. Here's a summary of the steps involved, with more details available in the provided link:
We use a forking workflow `<https://www.atlassian.com/git/tutorials/comparing-workflows/forking-workflow>`_ to manage contributions. Here's a summary of the steps involved, with more details available in the provided link:

* Fork the SpikeInterface repository.
* Create a new branch (e.g., :code:`git switch -c my-contribution`).
* Modify the code, commit, and push changes to your fork.
* Open a pull request from the "Pull Requests" tab of your fork to :code:`spikeinterface/main`.
* By following this process, we can review the code and even make changes as necessary.

While we appreciate all the contributions please be mindful of the cost of reviewing pull requests <https://rgommers.github.io/2019/06/the-cost-of-an-open-source-contribution/>_ .
While we appreciate all the contributions, please be mindful of the cost of reviewing pull requests `<https://rgommers.github.io/2019/06/the-cost-of-an-open-source-contribution/>`_ .


How to run tests locally
@@ -201,7 +201,7 @@ Implement a new extractor
SpikeInterface already supports over 30 file formats, but the acquisition system you use might not be among the
supported formats list (***ref***). Most of the extractors rely on the `NEO <https://github.com/NeuralEnsemble/python-neo>`_
package to read information from files.
Therefore, to implement a new extractor to handle the unsupported format, we recommend make a new `neo.rawio `_ class.
Therefore, to implement a new extractor to handle the unsupported format, we recommend making a new :code:`neo.rawio.BaseRawIO` class (see `example <https://github.com/NeuralEnsemble/python-neo/blob/master/neo/rawio/examplerawio.py#L44>`_).
Once that is done, the new class can be easily wrapped into SpikeInterface as an extension of the
:py:class:`~spikeinterface.extractors.neoextractors.neobaseextractors.NeoBaseRecordingExtractor`
(for :py:class:`~spikeinterface.core.BaseRecording` objects) or
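
To illustrate the wrapping step, here is a minimal sketch, assuming a hypothetical format with a finished :code:`neo.rawio` class named :code:`MyFormatRawIO` (the exact :code:`NeoBaseRecordingExtractor` signature may differ between versions):

.. code-block:: python

    from spikeinterface.extractors.neoextractors.neobaseextractors import NeoBaseRecordingExtractor


    class MyFormatRecordingExtractor(NeoBaseRecordingExtractor):
        # "MyFormatRawIO" is the hypothetical neo.rawio class for the new format
        mode = "file"
        NeoRawIOClass = "MyFormatRawIO"

        def __init__(self, file_path, stream_id=None, stream_name=None, all_annotations=False):
            neo_kwargs = self.map_to_neo_kwargs(file_path)
            NeoBaseRecordingExtractor.__init__(
                self,
                stream_id=stream_id,
                stream_name=stream_name,
                all_annotations=all_annotations,
                **neo_kwargs,
            )
            self._kwargs.update(dict(file_path=str(file_path)))

        @classmethod
        def map_to_neo_kwargs(cls, file_path):
            # Map the SpikeInterface argument name to the keyword neo expects
            return {"filename": str(file_path)}
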
1 change: 1 addition & 0 deletions doc/how_to/index.rst
@@ -7,3 +7,4 @@ How to guides
get_started
analyse_neuropixels
handle_drift
load_matlab_data
101 changes: 101 additions & 0 deletions doc/how_to/load_matlab_data.rst
@@ -0,0 +1,101 @@
Exporting MATLAB Data to Binary & Loading in SpikeInterface
===========================================================

In this tutorial, we will walk through the process of exporting data from MATLAB in a binary format and subsequently loading it using SpikeInterface in Python.

Exporting Data from MATLAB
--------------------------

Begin by ensuring your data structure is correct. Organize your data matrix so that the first dimension corresponds to samples/time and the second to channels.
Here, we present MATLAB code that creates a random dataset and writes it to a binary file as an illustration.

.. code-block:: matlab

    % Define the size of your data
    numSamples = 1000;
    numChannels = 384;

    % Generate random data as an example
    data = rand(numSamples, numChannels);

    % Write the data to a binary file
    fileID = fopen('your_data_as_a_binary.bin', 'wb');
    fwrite(fileID, data, 'double');
    fclose(fileID);

.. note::

    In your own script, replace the random data generation with your actual dataset.

Loading Data in SpikeInterface
------------------------------

After executing the above MATLAB code, a binary file named :code:`your_data_as_a_binary.bin` will be created in your MATLAB directory. To load this file in Python, you'll need its full path.

Use the following Python script to load the binary data into SpikeInterface:

.. code-block:: python

    import spikeinterface as si
    from pathlib import Path

    # Define file path
    # For Linux or macOS:
    file_path = Path("/The/Path/To/Your/Data/your_data_as_a_binary.bin")
    # For Windows:
    # file_path = Path(r"c:\path\to\your\data\your_data_as_a_binary.bin")

    # Confirm file existence
    assert file_path.is_file(), f"Error: {file_path} is not a valid file. Please check the path."

    # Define recording parameters
    sampling_frequency = 30_000.0  # Adjust according to your MATLAB dataset
    num_channels = 384  # Adjust according to your MATLAB dataset
    dtype = "float64"  # MATLAB's double corresponds to Python's float64

    # Load data using SpikeInterface
    recording = si.read_binary(file_paths=file_path, sampling_frequency=sampling_frequency,
                               num_channels=num_channels, dtype=dtype)

    # Confirm the data loaded correctly by checking that the shape matches the MATLAB data
    print(recording.get_num_frames(), recording.get_num_channels())

Follow the steps above to seamlessly import your MATLAB data into SpikeInterface. Once loaded, you can harness the full power of SpikeInterface for data processing, including filtering, spike sorting, and more.
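
Once imported, the recording plugs directly into the rest of the library. As a brief sketch of one common next step (band-pass filtering before spike sorting; assumes the preprocessing module is installed):

.. code-block:: python

    import spikeinterface.preprocessing as spre

    # Band-pass filter the loaded recording before spike sorting
    recording_filtered = spre.bandpass_filter(recording, freq_min=300., freq_max=6000.)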

Common Pitfalls & Tips
----------------------

1. **Data Shape**: Make sure your MATLAB data matrix's first dimension is samples/time and the second is channels. If your time is in the second dimension, use :code:`time_axis=1` in :code:`si.read_binary()` (see the sketch after this list).
2. **File Path**: Always double-check the Python file path.
3. **Data Type Consistency**: Ensure data types between MATLAB and Python are consistent. MATLAB's `double` is equivalent to NumPy's `float64`.
4. **Sampling Frequency**: Set the appropriate sampling frequency in Hz for SpikeInterface.
5. **Transition to Python**: Moving from MATLAB to Python can be challenging. For newcomers to Python, consider reviewing NumPy's `Numpy for MATLAB Users <https://numpy.org/doc/stable/user/numpy-for-matlab-users.html>`_ guide.
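
As a minimal sketch of the first pitfall, assuming a hypothetical binary file laid out as (channels, samples) on disk:

.. code-block:: python

    # Time runs along the second axis of this (hypothetical) file,
    # so tell SpikeInterface to read it with time_axis=1
    recording = si.read_binary(file_paths=file_path, sampling_frequency=30_000.0,
                               num_channels=384, dtype="float64", time_axis=1)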

Using gains and offsets for integer data
----------------------------------------

Raw data formats often store data as integer values for memory efficiency. To give these integers meaningful physical units, you can apply a gain and an offset.
In SpikeInterface, you can use the :code:`gain_to_uV` and :code:`offset_to_uV` parameters, since traces are handled in microvolts (uV). Both parameters can be passed to the :code:`read_binary` function.
If your data in MATLAB is stored as :code:`int16`, and you know the gain and offset, you can use the following code to load the data:

.. code-block:: python

    sampling_frequency = 30_000.0  # Adjust according to your MATLAB dataset
    num_channels = 384  # Adjust according to your MATLAB dataset
    dtype_int = 'int16'  # Adjust according to your MATLAB dataset
    gain_to_uV = 0.195  # Adjust according to your MATLAB dataset
    offset_to_uV = 0  # Adjust according to your MATLAB dataset

    recording = si.read_binary(file_paths=file_path, sampling_frequency=sampling_frequency,
                               num_channels=num_channels, dtype=dtype_int,
                               gain_to_uV=gain_to_uV, offset_to_uV=offset_to_uV)

    recording.get_traces()  # Returns traces in original units [type: int]
    recording.get_traces(return_scaled=True)  # Returns traces in microvolts (uV) [type: float]
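
Internally, the scaled traces are computed as :code:`traces_uV = traces_raw * gain_to_uV + offset_to_uV`.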
This equips your recording object to convert the data to float values in uV using the :code:`get_traces()` method with :code:`return_scaled=True`.

.. note::

    The gain and offset parameters are usually format-dependent, and you will need to find out the correct values for your data format. You can load your data without gain and offset, but then the traces will be in integer values and not in uV.
Binary file added doc/images/plot_traces_ephyviewer.png
2 changes: 1 addition & 1 deletion doc/install_sorters.rst
@@ -117,7 +117,7 @@ Kilosort2.5

git clone https://github.com/MouseLand/Kilosort
# provide installation path by setting the KILOSORT2_5_PATH environment variable
# or using Kilosort2_5Sorter.set_kilosort2_path()
# or using Kilosort2_5Sorter.set_kilosort2_5_path()

* See also for Matlab/CUDA: https://www.mathworks.com/help/parallel-computing/gpu-support-by-release.html
