
Add test to check unit structure in quality metric calculator output #2973

Merged

Conversation

chrishalcrow
Collaborator

When editing quality metric calculators it's easy to get your unit_ids mixed up: this PR adds a test which checks that your unit_ids are labelled as expected.

This test is mostly there for peace of mind when editing quality metric calculators. It's especially important because there's a lot of potential speed-up to be had (and potential for things to go wrong) by replacing for loops over units with numpy slicing. The test checks that the output dictionaries are keyed by the unit_ids and not by the unit indices.

Had to add a new sorting analyzer with more "interesting" unit_ids. I've made it very short and am planning to port the other tests over to it (unless the long recording is needed), which will speed up testing.
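
A rough sketch of the idea (not the actual test added in this PR; the fixture, the relabelled unit ids, and the choice of compute_firing_rates are illustrative assumptions):

```python
from spikeinterface.core import generate_ground_truth_recording, create_sorting_analyzer
from spikeinterface.qualitymetrics import compute_firing_rates

# Short recording with a handful of units.
recording, sorting = generate_ground_truth_recording(durations=[10.0], num_units=4, seed=0)

# Relabel the units so the ids are "interesting", i.e. not simply 0..N-1
# (the renamed ids here are chosen arbitrarily).
sorting = sorting.select_units(unit_ids=sorting.unit_ids, renamed_unit_ids=["#3", "#9", "#16", "#19"])

analyzer = create_sorting_analyzer(sorting, recording)

# Any metric returning a per-unit dict should be keyed by the unit_ids themselves,
# not by positional indices.
firing_rates = compute_firing_rates(analyzer)
assert set(firing_rates.keys()) == set(analyzer.unit_ids)
```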

Adding the test revealed something you could almost call a bug: compute_refrac_period_violations returned the unit_ids in the order of the sorting object, rather than the order they were given in the unit_ids argument. There are two reasons this doesn't matter:

  • the output is a dictionary, so the order almost never matters.
  • the compute method applied to an extension always uses all non-empty units, and can't accept user-supplied unit_ids, so the order is always the same as the sorting object's when used in this way.

This type of "bug" is not what the test is designed to look for - just a little surprise when it didn't immediately pass!

@zm711
Collaborator

zm711 commented Jun 4, 2024

A few quality metrics rely on having enough spikes and some rely on bin size, so I think you'd just have to make sure that the recording being used is long enough for those tests not to fail.
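
For illustration only (not code from this PR): the fixture duration can simply be chosen large enough that spike-count- and bin-based metrics have data to work with. The 300 s value and the parameters below are assumptions:

```python
from spikeinterface.core import generate_ground_truth_recording

# A few minutes of data gives spike-count-based metrics enough spikes and
# bin-based metrics (e.g. drift) enough time bins; 300 s is an arbitrary choice.
recording, sorting = generate_ground_truth_recording(
    durations=[300.0],  # one segment of 300 seconds
    num_units=4,
    seed=0,
)
```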

@alejoe91 alejoe91 added the qualitymetrics Related to qualitymetrics module label Jun 5, 2024
@samuelgarcia samuelgarcia added this to the 0.101.0 milestone Jun 12, 2024
@@ -1085,10 +1085,10 @@ def compute_drift_metrics(
        spikes_in_bin = spikes_in_segment[i0:i1]
        spike_locations_in_bin = spike_locations_in_segment[i0:i1][direction]

-       for unit_ind in np.arange(len(unit_ids)):
-           mask = spikes_in_bin["unit_index"] == unit_ind
+       for unit_index, unit_id in enumerate(unit_ids):
Member
If unit_ids is not the full list, then unit_index is not a good choice of name, and that naming explains the previous mistake.
I would use i or ind

Co-authored-by: Alessio Buccino <[email protected]>
@samuelgarcia
Member

Hi Chris, thanks for the catch.

I have the feeling that this kind of mistake could be in several places in the metrics where unit_ids is not None in the signature, because for unit_index, unit_id in enumerate(unit_ids) is wrong when unit_ids is not the full list.

We propose to merge this now. But could you make an even stronger test using a unit_ids that is not None, and also check everywhere in the code that enumerate is not used incorrectly?
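
To illustrate the point (a hedged sketch, not code from the PR): when unit_ids is a subset of the sorting's units, enumerate gives positions within that subset, while fields like spikes["unit_index"] hold indices into the sorting's full unit list, so the two must be mapped explicitly. The mapping below is done by hand; in SpikeInterface a method like sorting.ids_to_indices is the natural way to do it, but treat that exact usage as an assumption.

```python
import numpy as np

# Suppose the sorting has unit ids ["a", "b", "c", "d"] and the caller asks
# for a subset, in a different order.
all_unit_ids = ["a", "b", "c", "d"]
unit_ids = ["d", "b"]

# Wrong: enumerate gives positions within the subset (0, 1), not the indices
# of these units in the full sorting (3, 1).
for unit_index, unit_id in enumerate(unit_ids):
    pass  # comparing unit_index against spikes["unit_index"] would pick the wrong units

# One correct pattern: map each requested unit_id back to its index in the
# full unit list before building the mask.
unit_indices = np.array([all_unit_ids.index(unit_id) for unit_id in unit_ids])
for unit_index, unit_id in zip(unit_indices, unit_ids):
    # mask = spikes["unit_index"] == unit_index  # now selects the right unit
    pass
```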

@chrishalcrow
Collaborator Author

> Hi Chris, thanks for the catch.
>
> I have the feeling that this kind of mistake could be in several places in the metrics where unit_ids is not None in the signature, because for unit_index, unit_id in enumerate(unit_ids) is wrong when unit_ids is not the full list.
>
> We propose to merge this now. But could you make an even stronger test using a unit_ids that is not None, and also check everywhere in the code that enumerate is not used incorrectly?

Yup, these tests check all the misc_metrics so are a good start! Will hunt for more places to test.

PS: I'm on a train with no WiFi, so can't make any commits this morning.

@alejoe91 alejoe91 merged commit 211c222 into SpikeInterface:main Jul 3, 2024
16 checks passed