[ENH] Speed up read_raw_neuralynx() on large datasets with many gaps (#12371)

Co-authored-by: Eric Larson <[email protected]>
KristijanArmeni and larsoner authored Jan 25, 2024
1 parent f060e6b commit 1b0d208
Showing 2 changed files with 3 additions and 5 deletions.
1 change: 1 addition & 0 deletions doc/changes/devel/12371.newfeature.rst
@@ -0,0 +1 @@
+Speed up :func:`mne.io.read_raw_neuralynx` on large datasets with many gaps, by `Kristijan Armeni`_.
7 changes: 2 additions & 5 deletions mne/io/neuralynx/neuralynx.py
@@ -223,11 +223,8 @@ def __init__(
             [np.full(shape=(n,), fill_value=i) for i, n in enumerate(sizes_sorted)]
         )
 
-        # construct Annotations()
-        gap_seg_ids = np.unique(sample2segment)[gap_indicator]
-        gap_start_ids = np.array(
-            [np.where(sample2segment == seg_id)[0][0] for seg_id in gap_seg_ids]
-        )
+        # get the start sample index for each gap segment
+        gap_start_ids = np.cumsum(np.hstack([[0], sizes_sorted[:-1]]))[gap_indicator]
 
         # recreate time axis for gap annotations
         mne_times = np.arange(0, len(sample2segment)) / info["sfreq"]
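The speedup comes from replacing a per-gap linear scan over the full sample axis with a single cumulative sum over the (much smaller) array of segment sizes: the start sample of segment *i* is just the sum of all earlier segment sizes. A minimal sketch of why the two formulations agree, using made-up toy sizes and gap flags (the variable names mirror the diff; the data is illustrative, not from the commit):

```python
import numpy as np

# Toy inputs: per-segment sample counts and a mask flagging gap segments.
sizes_sorted = np.array([4, 2, 3, 5])
gap_indicator = np.array([False, True, False, True])

# Old approach: expand every sample to its segment id, then search for the
# first sample of each gap segment (O(n_samples * n_gaps)).
sample2segment = np.concatenate(
    [np.full(shape=(n,), fill_value=i) for i, n in enumerate(sizes_sorted)]
)
gap_seg_ids = np.unique(sample2segment)[gap_indicator]
slow = np.array(
    [np.where(sample2segment == seg_id)[0][0] for seg_id in gap_seg_ids]
)

# New approach: segment start indices are the exclusive cumulative sum of
# segment sizes, so one cumsum over n_segments replaces every scan.
fast = np.cumsum(np.hstack([[0], sizes_sorted[:-1]]))[gap_indicator]

assert np.array_equal(slow, fast)  # both give [4, 9]
```

Segments of sizes 4, 2, 3, 5 start at samples 0, 4, 6, 9, so the gap segments (indices 1 and 3) start at samples 4 and 9 under either formulation; only the cost differs.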
