Merge pull request #44 from zm711/merge-datasets
Add ability to merge datasets for plotting
zm711 authored Oct 19, 2023
2 parents cbb5cfa + dc57444 commit 300705c
Showing 24 changed files with 741 additions and 15 deletions.
5 changes: 5 additions & 0 deletions docs/source/API.rst
@@ -24,4 +24,9 @@ spikeanalysis
.. autoclass:: AnalogAnalysis
:members:

.. autoclass:: MergedSpikeAnalysis
:members:

.. autofunction:: kolmo_smir_stats

.. autofunction:: prevalence_counts
2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -24,7 +24,7 @@
author = "Zach McKenzie"

# The full version, including alpha/beta/rc tags
-release = "0.0.11"
+release = "0.1.0"


# -- General configuration ---------------------------------------------------
14 changes: 14 additions & 0 deletions docs/source/submodules/curated_spike_analysis.rst
@@ -61,3 +61,17 @@ To revert back to the original full set of neurons use :code:`revert_curation()`
   curated_st.revert_curation()

Plotting the Data
-----------------

Since :code:`CuratedSpikeAnalysis` inherits from :code:`SpikeAnalysis`, it can be used with
the :code:`SpikePlotter` class with no additional work.

.. code-block:: python

   plotter = sa.SpikePlotter()
   plotter.set_analysis(curated_st)
   plotter.plot_zscores()
1 change: 1 addition & 0 deletions docs/source/submodules/index.rst
@@ -13,3 +13,4 @@ Submodules
analog_analysis
curated_spike_analysis
functions
merged_spike_analysis
52 changes: 52 additions & 0 deletions docs/source/submodules/merged_spike_analysis.rst
@@ -0,0 +1,52 @@
MergedSpikeAnalysis
===================

Module for merging datasets. Once data has been curated, it may be beneficial to look at a series of
animals together. To facilitate this, the :code:`MergedSpikeAnalysis` object can be used. It is
constructed in similar fashion to the other classes:

.. code-block:: python

   import spikeanalysis as sa

   # we start with SpikeAnalysis or CuratedSpikeAnalysis objects st1
   # and st2
   merged_data = sa.MergedSpikeAnalysis(spikeanalysis_list=[st1, st2], name_list=['animal1', 'animal2'])

   # if we need to add an animal, st3, we can use
   merged_data.add_analysis(analysis=st3, name='animal3')

   # or we can use lists
   merged_data.add_analysis(analysis=[st3, st4], name=['animal3', 'animal4'])

Once the datasets are ready, they can be merged with the :code:`merge()` function. Its
:code:`psth` argument can either be set to :code:`True`, meaning the balanced
:code:`psths` values themselves are merged, or be given as a list of attributes to merge,
e.g. :code:`['zscore']` or :code:`['fr']`.

.. code-block:: python

   # will attempt to merge the psths of each dataset
   merged_data.merge(psth=True)

   # will attempt to merge z scores
   merged_data.merge(psth=['zscore'])

Note that the datasets to be merged must be balanced. For example, a dataset with 5 neurons,
10 trials, and 200 timepoints can only be merged with another dataset of :code:`x` neurons, 10
trials, and 200 timepoints. Concatenation occurs along the neuron axis (:code:`axis 0`),
so every other dimension must match.
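
The balance requirement can be sketched with plain NumPy (an illustration of the shape rules only, not the library's internal code):

.. code-block:: python

   import numpy as np

   # hypothetical merged attribute arrays: (neurons, trials, time bins)
   dataset1 = np.zeros((5, 10, 200))   # 5 neurons
   dataset2 = np.zeros((8, 10, 200))   # 8 neurons, same trials and time bins

   # merging concatenates along the neuron axis (axis 0)
   merged = np.concatenate([dataset1, dataset2], axis=0)
   print(merged.shape)  # (13, 10, 200)

   # a dataset with a different trial count cannot be concatenated
   unbalanced = np.zeros((5, 12, 200))
   try:
       np.concatenate([dataset1, unbalanced], axis=0)
   except ValueError:
       print("cannot merge: dimensions other than axis 0 must match")
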

Finally, the merged dataset can be returned for use in the :code:`SpikePlotter` class.

.. code-block:: python

   msa = merged_data.get_merged_data()
   plotter = sa.SpikePlotter()
   plotter.set_analysis(msa)

This works because the :code:`MSA` returned is a :code:`SpikeAnalysis` object with
guardrails around methods that can no longer be used. For example, if the data was merged with
:code:`psth=True`, then z scores can be regenerated across the data with a different :code:`time_bin_ms`,
but if :code:`psth=['zscore']` was used, then new z scores cannot be generated and the :code:`MSA` will
raise a :code:`NotImplementedError`.
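
The guardrail pattern can be sketched as follows (hypothetical class and method names, not the actual implementation):

.. code-block:: python

   class MergedAnalysisSketch:
       """Toy analysis object that disables recomputation
       when only derived values were merged."""

       def __init__(self, merged_psth: bool):
           self._merged_psth = merged_psth

       def z_score_data(self, time_bin_ms: float) -> str:
           # z scores can only be regenerated if the raw psths were merged
           if not self._merged_psth:
               raise NotImplementedError(
                   "z scores were merged directly and cannot be regenerated"
               )
           return f"recomputing z scores with {time_bin_ms} ms bins"

   msa_sketch = MergedAnalysisSketch(merged_psth=False)
   try:
       msa_sketch.z_score_data(time_bin_ms=50)
   except NotImplementedError as err:
       print(err)
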
16 changes: 16 additions & 0 deletions docs/source/submodules/spike_data.rst
@@ -190,6 +190,20 @@ as understandable and maintainable a weighted average is used (faster, slightly
   \frac{\sum_i \text{amplitude}_i \cdot \text{depth}_i}{\sum_i \text{amplitude}_i}
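
As a sketch, the weighted average for a single unit's depth could be computed like this (made-up channel values):

.. code-block:: python

   import numpy as np

   # hypothetical per-channel waveform amplitudes (uV) and channel depths (um)
   amplitudes = np.array([80.0, 120.0, 60.0])
   depths = np.array([500.0, 520.0, 540.0])

   # weighted average: sum(amplitudes * depths) / sum(amplitudes)
   weighted_depth = np.sum(amplitudes * depths) / np.sum(amplitudes)
   print(round(weighted_depth, 1))  # 518.5
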

Waveform amplitudes
^^^^^^^^^^^^^^^^^^^

Since spikes from a given neuron should always have roughly the same amplitude, the variation
in amplitudes can be used as a measure of the quality of a neuron. We expect a roughly Gaussian
distribution of amplitudes, so the :code:`get_amplitudes()` function assesses how many spikes
fall within a given number of standard deviations of the mean waveform amplitude.

.. code-block:: python

   spikes.get_amplitudes(std=2)  # 2 standard deviations
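
Conceptually the check reduces to the following (a sketch with simulated amplitudes, not the library's code):

.. code-block:: python

   import numpy as np

   rng = np.random.default_rng(0)
   # simulated spike amplitudes for one unit
   amplitudes = rng.normal(loc=100.0, scale=10.0, size=1000)

   # fraction of spikes within +/- 2 standard deviations of the mean
   deviation = np.abs(amplitudes - amplitudes.mean())
   fraction_within = np.mean(deviation <= 2 * amplitudes.std())
   print(fraction_within)  # roughly 0.95 for a Gaussian
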

Pipeline Function
-----------------

@@ -204,6 +218,8 @@ parameters to be provided. The example below includes all values.
    idthres=20,  # isolation distance of 20--need an empiric number from your data
    rpv=0.02,  # 2%: the fraction of spikes allowed to violate the 2 ms refractory period
    sil=0.45,  # silhouette score in (-1, 1), with higher values indicating better clustering
    amp_std=2,  # number of standard deviations from the mean waveform amplitude to look at
    amp_cutoff=0.98,  # percent of neurons which must fall within amp_std deviations of the mean waveform
    recurated=False,  # I haven't recurated my data
    set_caching=True,  # I want to save data for future use
    depth=500,  # probe inserted 500 um deep
3 changes: 1 addition & 2 deletions docs/source/submodules/stimulus_data.rst
@@ -96,7 +96,7 @@ they can be returned using :code:`get_stimulus_channels`. Finally stimulus' shou
   stim_dict = stim.get_stimulus_channels()
   stim.set_trial_groups(trial_dictionary=trial_dictionary)  # dict as explained above
-  sitm.set_stimulus_names(stim_names = name_dictionary)  # same keys with str values
+  sitm.set_stimulus_name(stim_names = name_dictionary)  # same keys with str values
Train-based data
@@ -143,7 +143,6 @@ generated stimulus data. To load it simply requires:
   stim.get_all_files()

Convenience Pipeline
--------------------

2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "spikeanalysis"
-version = '0.0.11'
+version = '0.1.0'
authors = [{name="Zach McKenzie", email="[email protected]"}]
description = "Analysis of Spike Trains"
requires-python = ">=3.9"
2 changes: 2 additions & 0 deletions src/spikeanalysis/__init__.py
@@ -5,8 +5,10 @@
from .intrinsic_plotter import IntrinsicPlotter
from .analog_analysis import AnalogAnalysis
from .curated_spike_analysis import CuratedSpikeAnalysis, read_responsive_neurons
from .merged_spike_analysis import MergedSpikeAnalysis
from .stats_functions import kolmo_smir_stats
from .plotting_functions import plot_piechart
from .utils import prevalence_counts

import importlib.metadata

1 change: 1 addition & 0 deletions src/spikeanalysis/analysis_utils/histogram_functions.py
@@ -1,3 +1,4 @@
from __future__ import annotations
import numpy as np
from numba import jit
import numba
1 change: 1 addition & 0 deletions src/spikeanalysis/analysis_utils/latency_functions.py
@@ -1,3 +1,4 @@
from __future__ import annotations
import numpy as np
from numba import jit
import math