Hi all — I'm trying to sync some npx data to an Arduino. Based on reading through previous issues, it looks like the best practice for aligning data that isn't synced online is to rely on `sample_numbers.npy` from the `continuous` folder (as described in #586). However, I noticed three issues when I went to try offline synchronization on some recent recordings. I'm using a rough "barcode" sync method, where I just randomly flip a bit up/down every 50 msec.
1. The length of the memmap'd `continuous.dat` file doesn't always match the length of the `timestamps.npy` / `sample_numbers.npy` files. I have one recording where they're off by 4199 samples, and two other recordings where they match exactly. I think this is negligible for the larger issue here, which is syncing, but I'm not sure it's intended behavior.
2. `sample_numbers.npy` from the `continuous` folder sometimes has a large downward jump at the very beginning of the recording, regardless of whether issue 1) is occurring. In the recording with issue 1), it jumps from 69299863 to 0 after 768 samples; in the second recording, it jumps from 485541 to 0 after 677 samples; in the third recording, it starts at 1251161 and just counts up with no problem. (See the diagnostic sketch after this list.)
3. Probably related: `sample_numbers.npy` from the `events` folder has skips in exactly the same recordings where the continuous one does, and no skips where it doesn't.
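For concreteness, here's roughly how I'm checking 1) and 2) (the paths and channel count are placeholders for my setup):

```python
import numpy as np

# Hypothetical paths for one recording; the channel count would come from structure.oebin
cont_dir = "Record Node 101/experiment1/recording1/continuous/Neuropix-PXI-100.ProbeA-AP"
n_channels = 384  # assumption for this sketch

# Issue 1: compare the length of the memmap'd data vs. the .npy sidecar files
data = np.memmap(f"{cont_dir}/continuous.dat", dtype=np.int16).reshape(-1, n_channels)
sample_numbers = np.load(f"{cont_dir}/sample_numbers.npy")
timestamps = np.load(f"{cont_dir}/timestamps.npy")
print(len(data), len(sample_numbers), len(timestamps))  # off by 4199 in one recording

# Issue 2: look for downward jumps near the start of sample_numbers
jumps = np.where(np.diff(sample_numbers) < 0)[0]
for i in jumps:
    print(f"jump at index {i}: {sample_numbers[i]} -> {sample_numbers[i + 1]}")
```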
My plan is to use spikeinterface's time vector, which is derived from `t_start` (which seems to be the first entry of `timestamps.npy`, i.e., the first entry of `sample_numbers.npy` divided by the sampling rate; these match for all of these recordings) and has the same length as the `continuous.dat` file. So I would think that all I need to do is 1) figure out how to index from the `events` version of `sample_numbers.npy` into the `continuous.dat` file, and 2) grab the times associated with those indices from spikeinterface.
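Something like this (stream name and paths are hypothetical; `np.searchsorted` as written only handles the recordings where the sample numbers are monotonic, i.e., without the downward jump):

```python
import numpy as np
import spikeinterface.extractors as se

# Hypothetical paths / stream name for this sketch
cont_dir = "Record Node 101/experiment1/recording1/continuous/Neuropix-PXI-100.ProbeA-AP"
events_dir = "Record Node 101/experiment1/recording1/events/Neuropix-PXI-100.ProbeA-AP/TTL"

# spikeinterface's time vector: t_start + np.arange(n_samples) / fs
recording = se.read_openephys("Record Node 101", stream_name="ProbeA-AP")
times = recording.get_times()

cont_sn = np.load(f"{cont_dir}/sample_numbers.npy")
event_sn = np.load(f"{events_dir}/sample_numbers.npy")

# 1) index from the events sample numbers into continuous.dat
#    (searchsorted requires monotonic sample numbers)
idx = np.searchsorted(cont_sn, event_sn)
assert np.array_equal(cont_sn[idx], event_sn)

# 2) grab the times associated with those indices
event_times = times[idx]
```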
I've been able to accomplish 1) by essentially brute-force matching the sample numbers between the events and the continuous `sample_numbers.npy` files, assuming that they're being drawn from the same counter, which is somehow resetting. (I only have to brute force until the jump happens; after that I can just find an offset value and assert that there are no gaps in sampling. Otherwise, brute forcing the whole thing would take ~20 minutes.) I'm able to sync all of my recordings this way. Nonetheless, it doesn't really seem like an ideal situation.
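For reference, the offset part of my approach looks roughly like this (same hypothetical paths as above; it assumes the pre-reset counter values don't overlap the post-reset range):

```python
import numpy as np

# Hypothetical paths for this sketch
cont_dir = "Record Node 101/experiment1/recording1/continuous/Neuropix-PXI-100.ProbeA-AP"
events_dir = "Record Node 101/experiment1/recording1/events/Neuropix-PXI-100.ProbeA-AP/TTL"

cont = np.load(f"{cont_dir}/sample_numbers.npy")
ev = np.load(f"{events_dir}/sample_numbers.npy")

# find the reset, if any, and assert that sampling is gapless afterwards
jump = np.where(np.diff(cont) < 0)[0]
start = jump[0] + 1 if jump.size else 0
assert np.all(np.diff(cont[start:]) == 1), "gap in sampling after the reset"

# events in the post-reset range map in via a single offset; the handful of
# events before the reset (if any) still need the brute-force match
in_range = (ev >= cont[start]) & (ev <= cont[-1])
idx = ev[in_range] - cont[start] + start
assert np.array_equal(cont[idx], ev[in_range])
```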
Let me know if there's a good way to avoid this happening, or if this is a bug that needs squashing. Thanks!
*jonahpearl changed the title from "Issues aligning data in continuous.dat and sample_numbers.npy" to "Inconsistencies in shape, and negative jumps, in sample_numbers.npy" on Aug 27, 2024*
Hi Jonah -- it sounds like you're going about things the right way, given what you've observed. It's definitely safe to assume the sample numbers associated with events will exactly match what's in the continuous data.
We are not sure why discontiguous sample numbers are sometimes added to the beginning of the file -- there must be a buffer that is getting improperly flushed at the start of the recording. Letting acquisition run for a few seconds before hitting record should prevent this from happening, but we will also figure out the root cause and release a fix.
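In the meantime, trimming the discontiguous prefix should also work on already-affected recordings; a minimal sketch (paths and channel count are placeholders, and 16-bit data is assumed):

```python
import numpy as np

# Hypothetical path / channel count; n_channels would come from structure.oebin
cont_dir = "Record Node 101/experiment1/recording1/continuous/Neuropix-PXI-100.ProbeA-AP"
n_channels = 384

sn = np.load(f"{cont_dir}/sample_numbers.npy")
jump = np.where(np.diff(sn) < 0)[0]
if jump.size:
    start = jump[0] + 1  # first sample after the reset
    sn = sn[start:]
    data = np.memmap(f"{cont_dir}/continuous.dat", dtype=np.int16).reshape(-1, n_channels)
    data = data[start:]  # drop the improperly flushed prefix
```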