Merge pull request #3457 from SpikeInterface/prepare_release
Prepare release 0.101.2
alejoe91 authored Oct 4, 2024
2 parents ea0d3b8 + b1ba8ef commit 7725cb2
Showing 11 changed files with 95 additions and 16 deletions.
1 change: 1 addition & 0 deletions doc/development/development.rst
@@ -192,6 +192,7 @@ Miscelleaneous Stylistic Conventions
#. Avoid using abbreviations in variable names (e.g. use :code:`recording` instead of :code:`rec`). It is especially important to avoid single letter variables.
#. Use index as singular and indices for plural following the NumPy convention. Avoid idx or indexes. Plus, id and ids are reserved for identifiers (i.e. channel_ids)
#. We use file_path and folder_path (instead of file_name and folder_name) for clarity.
+#. For the titles of documentation pages, only capitalize the first letter of the first word and classes or software packages. For example, "How to use a SortingAnalyzer in SpikeInterface".
#. For creating headers to divide sections of code we use the following convention (see issue `#3019 <https://github.com/SpikeInterface/spikeinterface/issues/3019>`_):


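As an aside (not part of the diff), a minimal Python sketch illustrating the naming conventions listed above; the function and variable names are hypothetical, only ids_to_indices is an existing extractor method:

from pathlib import Path

# Hypothetical helper following the conventions above: full words instead of
# abbreviations ("recording", not "rec"), "channel_ids" for identifiers,
# "index"/"indices" rather than "idx", and "folder_path" rather than "folder_name".
def save_selected_channels(recording, channel_ids, folder_path):
    folder_path = Path(folder_path)
    channel_indices = recording.ids_to_indices(channel_ids)
    for index in channel_indices:
        print(f"channel index {index} will be written under {folder_path}")
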
2 changes: 1 addition & 1 deletion doc/how_to/combine_recordings.rst
@@ -1,4 +1,4 @@
-Combine Recordings in SpikeInterface
+Combine recordings in SpikeInterface
====================================

In this tutorial we will walk through combining multiple recording objects. Sometimes this occurs due to hardware
2 changes: 1 addition & 1 deletion doc/how_to/load_matlab_data.rst
@@ -1,4 +1,4 @@
-Export MATLAB Data to Binary & Load in SpikeInterface
+Export MATLAB data to binary & load in SpikeInterface
========================================================

In this tutorial, we will walk through the process of exporting data from MATLAB in a binary format and subsequently loading it using SpikeInterface in Python.
4 changes: 2 additions & 2 deletions doc/how_to/load_your_data_into_sorting.rst
@@ -1,5 +1,5 @@
-Load Your Own Data into a Sorting
-=================================
+Load your own data into a Sorting object
+========================================

Why make a :code:`Sorting`?

2 changes: 1 addition & 1 deletion doc/how_to/process_by_channel_group.rst
@@ -1,4 +1,4 @@
-Process a Recording by Channel Group
+Process a recording by channel group
====================================

In this tutorial, we will walk through how to preprocess and sort a recording
2 changes: 1 addition & 1 deletion doc/how_to/viewers.rst
@@ -1,4 +1,4 @@
-Visualize Data
+Visualize data
==============

There are several ways to plot signals (raw, preprocessed) and spikes.
66 changes: 66 additions & 0 deletions doc/releases/0.101.2.rst
@@ -0,0 +1,66 @@
.. _release0.101.2:

SpikeInterface 0.101.2 release notes
------------------------------------

4th October 2024

Minor release with bug fixes

core:

* Fix `random_spikes_selection()` (#3456)
* Expose `backend_options` at the analyzer level to set `storage_options` and `saving_options` (#3446)
* Avoid warnings in `SortingAnalyzer` (#3455)
* Fix `reset_global_job_kwargs` (#3452)
* Allow to save recordingless analyzer as (#3443)
* Fix compute analyzer pipeline with tmp recording (#3433)
* Fix bug in saving zarr recordings (#3432)
* Set `run_info` to `None` for `load_waveforms` (#3430)
* Fix integer overflow in parallel computing (#3426)
* Refactor `pandas` save load and `convert_dtypes` (#3412)
* Add spike-train based lazy `SortingGenerator` (#2227)


extractors:

* Improve IBL recording extractors by PID (#3449)

sorters:

* Get default encoding for `Popen` (#3439)

postprocessing:

* Add `max_threads_per_process` and `mp_context` to pca by channel computation and PCA metrics (#3434)

widgets:

* Fix metrics widgets for convert_dtypes (#3417)
* Fix plot motion for multi-segment (#3414)

motion correction:

* Auto-cast recording to float prior to interpolation (#3415)

documentation:

* Add docstring for `generate_unit_locations` (#3418)
* Add `get_channel_locations` to the base recording API (#3403)

continuous integration:

* Enable testing arm64 Mac architecture in the CI (#3422)
* Add kachery_zone secret (#3416)

testing:

* Relax causal filter tests (#3445)

Contributors:

* @alejoe91
* @h-mayorquin
* @jiumao2
* @samuelgarcia
* @zm711
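
As an aside on the documentation entry above, `get_channel_locations` (#3403) can be called on any recording with an attached probe. A hedged usage sketch, with a synthetic recording used only as a stand-in:

from spikeinterface.core import generate_ground_truth_recording

# Sketch only: generate a small synthetic recording, then query the per-channel
# probe positions; the returned array has shape (num_channels, 2).
recording, sorting = generate_ground_truth_recording(num_channels=4, durations=[1.0])
locations = recording.get_channel_locations()
print(locations)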
6 changes: 6 additions & 0 deletions doc/whatisnew.rst
@@ -8,6 +8,7 @@ Release notes
.. toctree::
:maxdepth: 1

+releases/0.101.2.rst
releases/0.101.1.rst
releases/0.101.0.rst
releases/0.100.8.rst
@@ -44,6 +45,11 @@ Release notes
releases/0.9.1.rst


+Version 0.101.2
+===============
+
+* Minor release with bug fixes

Version 0.101.1
===============

12 changes: 6 additions & 6 deletions pyproject.toml
@@ -124,16 +124,16 @@ test_core = [

# for github test : probeinterface and neo from master
# for release we need pypi, so this need to be commented
"probeinterface @ git+https://github.com/SpikeInterface/probeinterface.git",
"neo @ git+https://github.com/NeuralEnsemble/python-neo.git",
# "probeinterface @ git+https://github.com/SpikeInterface/probeinterface.git",
# "neo @ git+https://github.com/NeuralEnsemble/python-neo.git",
]

test_extractors = [
# Functions to download data in neo test suite
"pooch>=1.8.2",
"datalad>=1.0.2",
"probeinterface @ git+https://github.com/SpikeInterface/probeinterface.git",
"neo @ git+https://github.com/NeuralEnsemble/python-neo.git",
# "probeinterface @ git+https://github.com/SpikeInterface/probeinterface.git",
# "neo @ git+https://github.com/NeuralEnsemble/python-neo.git",
]

test_preprocessing = [
@@ -173,8 +173,8 @@ test = [

# for github test : probeinterface and neo from master
# for release we need pypi, so this need to be commented
"probeinterface @ git+https://github.com/SpikeInterface/probeinterface.git",
"neo @ git+https://github.com/NeuralEnsemble/python-neo.git",
# "probeinterface @ git+https://github.com/SpikeInterface/probeinterface.git",
# "neo @ git+https://github.com/NeuralEnsemble/python-neo.git",
]

docs = [
4 changes: 2 additions & 2 deletions src/spikeinterface/__init__.py
@@ -30,5 +30,5 @@
# This flag must be set to False for release
# This avoids using versioning that contains ".dev0" (and this is a better choice)
# This is mainly useful when using run_sorter in a container and spikeinterface install
-DEV_MODE = True
-# DEV_MODE = False
+# DEV_MODE = True
+DEV_MODE = False
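
(Purely hypothetical illustration of how such a flag can gate a development version suffix; this is not the actual SpikeInterface versioning code:)

# Hypothetical sketch, not SpikeInterface's implementation: DEV_MODE switches
# between the released version string and a ".dev0" development one.
DEV_MODE = False
BASE_VERSION = "0.101.2"
__version__ = BASE_VERSION + ".dev0" if DEV_MODE else BASE_VERSION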
10 changes: 8 additions & 2 deletions src/spikeinterface/core/sortinganalyzer.py
@@ -2245,9 +2245,15 @@ def _save_data(self):
elif HAS_PANDAS and isinstance(ext_data, pd.DataFrame):
    df_group = extension_group.create_group(ext_data_name)
    # first we save the index
-   df_group.create_dataset(name="index", data=ext_data.index.to_numpy())
+   indices = ext_data.index.to_numpy()
+   if indices.dtype.kind == "O":
+       indices = indices.astype(str)
+   df_group.create_dataset(name="index", data=indices)
    for col in ext_data.columns:
-       df_group.create_dataset(name=col, data=ext_data[col].to_numpy())
+       col_data = ext_data[col].to_numpy()
+       if col_data.dtype.kind == "O":
+           col_data = col_data.astype(str)
+       df_group.create_dataset(name=col, data=col_data)
    df_group.attrs["dataframe"] = True
else:
    # any object
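(A minimal standalone sketch of the same pattern as the hunk above, assuming a zarr v2-style create_dataset API; the DataFrame is invented for illustration:)

import pandas as pd
import zarr

# Object-dtype arrays (e.g. Python strings) cannot be written directly as zarr
# datasets, so the index and any object-dtype column are cast to str first.
df = pd.DataFrame({"unit_id": ["u0", "u1"], "snr": [5.1, 7.3]}, index=["a", "b"])

root = zarr.group()  # in-memory group, for the sketch only
df_group = root.create_group("metrics")

indices = df.index.to_numpy()
if indices.dtype.kind == "O":
    indices = indices.astype(str)
df_group.create_dataset(name="index", data=indices)

for col in df.columns:
    col_data = df[col].to_numpy()
    if col_data.dtype.kind == "O":
        col_data = col_data.astype(str)
    df_group.create_dataset(name=col, data=col_data)

df_group.attrs["dataframe"] = True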
