Add tests for windows and mac #2937

Merged
merged 93 commits into main from add_tests_for_all_os on Jun 25, 2024

Changes from all commits
b2164c9
great tests, good tests, the end of all test
h-mayorquin May 30, 2024
0d9536c
modify installation
h-mayorquin May 30, 2024
0741622
bash shell
h-mayorquin May 30, 2024
b97a079
eliminate mistake
h-mayorquin May 30, 2024
5ecf387
no need to prune when dealing with latest version
h-mayorquin May 30, 2024
28fea4b
no need for source environment
h-mayorquin May 30, 2024
2cf7668
no code coverage on this one
h-mayorquin May 30, 2024
16f6a5b
Merge branch 'main' into add_tests_for_all_os
h-mayorquin May 30, 2024
7a09691
added mac import fail quick
h-mayorquin May 30, 2024
e09aa61
add caching remove ubuntu
h-mayorquin May 31, 2024
89bc0d2
wrong type of cachcing, dumb mistake of mine
h-mayorquin May 31, 2024
37d93e1
correct command
h-mayorquin May 31, 2024
07460f6
separate test by module
h-mayorquin May 31, 2024
4e44122
permission for execution
h-mayorquin May 31, 2024
020551c
reduced testing
h-mayorquin May 31, 2024
1cc2bcb
forgot to avoid virtual env in core
h-mayorquin May 31, 2024
9982cfd
see origin of failure on windows
h-mayorquin May 31, 2024
1bd56e5
not fail fast to see windows mistery
h-mayorquin May 31, 2024
4b3e0a5
maybe shell issue on windows
h-mayorquin May 31, 2024
52be743
maybe shell issue on windows
h-mayorquin May 31, 2024
bf5c71e
fix windows collection by marker
h-mayorquin May 31, 2024
375e3d5
dumb marker error name
h-mayorquin May 31, 2024
282752b
faster to test mark collection this way
h-mayorquin May 31, 2024
fa7d9e8
fix mac test, enable the rest of tests
h-mayorquin May 31, 2024
1a544f8
equal assertion
h-mayorquin May 31, 2024
1e54a4b
sorter test
h-mayorquin May 31, 2024
8ed375f
try skipping core sorters
h-mayorquin May 31, 2024
e809edb
internal sorters is also failing
h-mayorquin Jun 1, 2024
f0e4c5c
restore core tests
h-mayorquin Jun 1, 2024
e56f2f0
restore conftest and container tools
h-mayorquin Jun 1, 2024
ed9ef1c
markers on widows not yet fixed
h-mayorquin Jun 1, 2024
7fb65c2
remove numba type signature (#2932)
zm711 May 30, 2024
d079d8a
fix marker collection to work on windows
h-mayorquin May 31, 2024
48f9fdb
easier fix that I do not like
h-mayorquin May 31, 2024
37ac47e
conftest fix
h-mayorquin May 31, 2024
7fd7b39
fix tee
h-mayorquin May 31, 2024
881d381
fix testing imports
h-mayorquin May 31, 2024
be00e9e
bunch of other imports
h-mayorquin May 31, 2024
8d80a6e
way more mport fixes
h-mayorquin May 31, 2024
251cd2d
even more removals
h-mayorquin May 31, 2024
7dda8c0
remove more imports
h-mayorquin May 31, 2024
701402f
even more removals
h-mayorquin May 31, 2024
d78c9c8
remove even more imports
h-mayorquin May 31, 2024
2fa06f3
remove even more imports
h-mayorquin May 31, 2024
15037b0
more pylab imports to the dustbin
h-mayorquin May 31, 2024
58a4ac0
more matplotlib terrible things]
h-mayorquin May 31, 2024
e366a45
isocut requires numba
h-mayorquin May 31, 2024
0fd14c4
more pandas and matplotlib
h-mayorquin May 31, 2024
ac6fa5a
more imports
h-mayorquin May 31, 2024
b394de1
truncated sv on clustering circus
h-mayorquin May 31, 2024
c08ff54
triage imports
h-mayorquin May 31, 2024
25a950e
more matplotlib
h-mayorquin May 31, 2024
6af7680
fix imports
h-mayorquin May 31, 2024
0b89fd7
restore nwb issue
h-mayorquin May 31, 2024
d845d23
Remove mearec from testing functions (#2930)
chrishalcrow May 31, 2024
8af34b7
missing scipy import
h-mayorquin May 31, 2024
2bdfe68
fix scipy import0
h-mayorquin May 31, 2024
2aa27c2
fix import error
h-mayorquin May 31, 2024
34c57c5
missing signal import
h-mayorquin May 31, 2024
c66f7d6
merge marker fix
h-mayorquin Jun 1, 2024
8a5366f
dumb comment that I left
h-mayorquin Jun 1, 2024
78edfb4
Merge branch 'main' into add_tests_for_all_os
h-mayorquin Jun 1, 2024
a7417bb
try extractors
h-mayorquin Jun 1, 2024
aef1e96
Merge remote-tracking branch 'refs/remotes/origin/add_tests_for_all_o…
h-mayorquin Jun 1, 2024
8e36972
try running only datalad check
h-mayorquin Jun 6, 2024
77d6e16
work in progress
h-mayorquin Jun 6, 2024
89c0e57
now working
h-mayorquin Jun 13, 2024
5ea1141
Merge branch 'main' into try_poo_instead_of_datalad
h-mayorquin Jun 13, 2024
f139f32
enable hashing
h-mayorquin Jun 14, 2024
0199677
Merge branch 'try_poo_instead_of_datalad' into add_tests_for_all_os
h-mayorquin Jun 14, 2024
b24a2b8
enable pooch for testing windows
h-mayorquin Jun 14, 2024
fda504a
windows posix fix
h-mayorquin Jun 14, 2024
238239c
add linux to tests
h-mayorquin Jun 14, 2024
29a2da9
plexon has a bug
h-mayorquin Jun 14, 2024
837eb85
internal sorters passing on windows
h-mayorquin Jun 14, 2024
a6a0e6e
skip bad test on windows
h-mayorquin Jun 14, 2024
73410f0
both sorter tests now passing
h-mayorquin Jun 14, 2024
594da45
skip plexon sorting test
h-mayorquin Jun 14, 2024
08dd62a
test simple datalad installation
h-mayorquin Jun 14, 2024
6f4e06a
Merge branch 'main' into add_tests_for_all_os
h-mayorquin Jun 14, 2024
b9d92f9
use catch for 500 errors simplify datalad instaltion even further
h-mayorquin Jun 14, 2024
f641872
restore editable install
h-mayorquin Jun 14, 2024
62286e7
add caching
h-mayorquin Jun 14, 2024
1725190
temporarily generation is not working, my fault
h-mayorquin Jun 14, 2024
2e3cb1c
improve hashing
h-mayorquin Jun 14, 2024
844c65d
Merge branch 'main' into add_tests_for_all_os
h-mayorquin Jun 18, 2024
8b3f4e4
Merge branch 'main' into add_tests_for_all_os
h-mayorquin Jun 19, 2024
b71bbd8
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jun 19, 2024
34c0f8e
Update src/spikeinterface/extractors/tests/test_datalad_downloading.py
h-mayorquin Jun 19, 2024
5441ff4
Update src/spikeinterface/core/datasets.py
h-mayorquin Jun 19, 2024
2494c27
forgotten bash
h-mayorquin Jun 24, 2024
4905a0b
lower and higher versions
h-mayorquin Jun 25, 2024
bad309d
use restore to only restore the caches
h-mayorquin Jun 25, 2024
7 changes: 6 additions & 1 deletion .github/run_tests.sh
@@ -1,8 +1,13 @@
 #!/bin/bash

 MARKER=$1
+NOVIRTUALENV=$2
+
+# Check if the second argument is provided and if it is equal to --no-virtual-env
+if [ -z "$NOVIRTUALENV" ] || [ "$NOVIRTUALENV" != "--no-virtual-env" ]; then
+    source $GITHUB_WORKSPACE/test_env/bin/activate
+fi

-source $GITHUB_WORKSPACE/test_env/bin/activate
 pytest -m "$MARKER" -vv -ra --durations=0 --durations-min=0.001 | tee report.txt; test ${PIPESTATUS[0]} -eq 0 || exit 1
 echo "# Timing profile of ${MARKER}" >> $GITHUB_STEP_SUMMARY
 python $GITHUB_WORKSPACE/.github/build_job_summary.py report.txt >> $GITHUB_STEP_SUMMARY
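As the workflow below shows, each test step calls the script with a pytest marker as the first argument and, because the CI jobs install spikeinterface into the runner's Python rather than a virtual environment, the new flag as the second, e.g. ./.github/run_tests.sh postprocessing --no-virtual-env.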
129 changes: 129 additions & 0 deletions .github/workflows/all-tests.yml
@@ -0,0 +1,129 @@
name: Complete tests

on:
  workflow_dispatch:
  schedule:
    - cron: "0 12 * * 0"  # Weekly on Sunday at noon UTC
  pull_request:
    types: [synchronize, opened, reopened]
    branches:
      - main

env:
  KACHERY_CLOUD_CLIENT_ID: ${{ secrets.KACHERY_CLOUD_CLIENT_ID }}
  KACHERY_CLOUD_PRIVATE_KEY: ${{ secrets.KACHERY_CLOUD_PRIVATE_KEY }}

concurrency:  # Cancel previous workflows on the same pull request
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  run:
    name: ${{ matrix.os }} Python ${{ matrix.python-version }}
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.9", "3.12"]  # Lower and higher versions we support
        os: [macos-13, windows-latest, ubuntu-latest]
    steps:
      - uses: actions/checkout@v4
      - name: Setup Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          # cache: 'pip'  # caching pip dependencies

      - name: Get current hash (SHA) of the ephy_testing_data repo
        id: repo_hash
        run: |
          echo "dataset_hash=$(git ls-remote https://gin.g-node.org/NeuralEnsemble/ephy_testing_data.git HEAD | cut -f1)"
          echo "dataset_hash=$(git ls-remote https://gin.g-node.org/NeuralEnsemble/ephy_testing_data.git HEAD | cut -f1)" >> $GITHUB_OUTPUT
        shell: bash

      - name: Cache datasets
        id: cache-datasets
        uses: actions/cache/restore@v4
        with:
          path: ~/spikeinterface_datasets
          key: ${{ runner.os }}-datasets-${{ steps.repo_hash.outputs.dataset_hash }}
          restore-keys: ${{ runner.os }}-datasets

      - name: Install packages
        run: |
          git config --global user.email "[email protected]"
          git config --global user.name "CI Almighty"
          pip install -e .[test,extractors,streaming_extractors,full]
          pip install tabulate
        shell: bash

      - name: Install datalad
        run: |
          pip install datalad-installer
          if [ ${{ runner.os }} = 'Linux' ]; then
            datalad-installer --sudo ok git-annex --method datalad/packages
          elif [ ${{ runner.os }} = 'macOS' ]; then
            datalad-installer --sudo ok git-annex --method brew
          elif [ ${{ runner.os }} = 'Windows' ]; then
            datalad-installer --sudo ok git-annex --method datalad/git-annex:release
          fi
          pip install datalad
          git config --global filter.annex.process "git-annex filter-process"  # recommended for efficiency
        shell: bash

      - name: Set execute permissions on run_tests.sh
        run: chmod +x .github/run_tests.sh
        shell: bash

      - name: Test core
        run: pytest -m "core"
        shell: bash

      - name: Test extractors
        env:
          HDF5_PLUGIN_PATH: ${{ github.workspace }}/hdf5_plugin_path_maxwell
        run: pytest -m "extractors"
        shell: bash

      - name: Test preprocessing
        run: ./.github/run_tests.sh "preprocessing and not deepinterpolation" --no-virtual-env
        shell: bash

      - name: Test postprocessing
        run: ./.github/run_tests.sh postprocessing --no-virtual-env
        shell: bash

      - name: Test quality metrics
        run: ./.github/run_tests.sh qualitymetrics --no-virtual-env
        shell: bash

      - name: Test comparison
        run: ./.github/run_tests.sh comparison --no-virtual-env
        shell: bash

      - name: Test core sorters
        run: ./.github/run_tests.sh sorters --no-virtual-env
        shell: bash

      - name: Test internal sorters
        run: ./.github/run_tests.sh sorters_internal --no-virtual-env
        shell: bash

      - name: Test curation
        run: ./.github/run_tests.sh curation --no-virtual-env
        shell: bash

      - name: Test widgets
        run: ./.github/run_tests.sh widgets --no-virtual-env
        shell: bash

      - name: Test exporters
        run: ./.github/run_tests.sh exporters --no-virtual-env
        shell: bash

      - name: Test sortingcomponents
        run: ./.github/run_tests.sh sortingcomponents --no-virtual-env
        shell: bash

      - name: Test generation
        run: ./.github/run_tests.sh generation --no-virtual-env
        shell: bash
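For reference, a minimal Python sketch (an illustration, not part of the PR) of what the "Get current hash (SHA)" step computes, assuming git is on PATH; the remote HEAD SHA becomes part of the actions/cache key, so the dataset cache is invalidated whenever the gin repository gains new commits:

import subprocess

def gin_dataset_hash() -> str:
    # Same query as the workflow's `git ls-remote ... HEAD | cut -f1`:
    # ls-remote prints "<sha>\tHEAD"; the first field is the commit SHA.
    result = subprocess.run(
        ["git", "ls-remote", "https://gin.g-node.org/NeuralEnsemble/ephy_testing_data.git", "HEAD"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.split()[0]

print(gin_dataset_hash())  # used in the cache key: ${{ runner.os }}-datasets-<sha>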
7 changes: 3 additions & 4 deletions pyproject.toml
@@ -137,10 +137,9 @@ test = [

     # for sortingview backend
     "sortingview",
-
-    # recent datalad need a too recent version for git-annex
-    # so we use an old one here
-    "datalad==0.16.2",
+    # Download data
+    "pooch>=1.8.2",
+    "datalad>=1.0.2",

     ## install tridesclous for testing ##
     "tridesclous>=1.6.8",
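The swap pairs pooch (used for the per-file downloads in datasets.py below) with an unpinned, recent datalad; the old ==0.16.2 pin existed only to match an old git-annex, which the workflow above now provisions per-OS via datalad-installer.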
56 changes: 40 additions & 16 deletions src/spikeinterface/core/datasets.py
@@ -14,56 +14,80 @@ def download_dataset(
     remote_path: str = "mearec/mearec_test_10s.h5",
     local_folder: Path | None = None,
     update_if_exists: bool = False,
-    unlock: bool = False,
 ) -> Path:
     """
-    Function to download dataset from a remote repository using datalad.
+    Function to download dataset from a remote repository using a combination of datalad and pooch.
+
+    Pooch is designed to download single files from a remote repository.
+    Because our datasets in gin sometimes point just to a folder, we still use datalad to download
+    a list of all the files in the folder and then use pooch to download them one by one.

     Parameters
     ----------
     repo : str, default: "https://gin.g-node.org/NeuralEnsemble/ephy_testing_data"
         The repository to download the dataset from
     remote_path : str, default: "mearec/mearec_test_10s.h5"
         A specific subdirectory in the repository to download (e.g. Mearec, SpikeGLX, etc)
-    local_folder : str, default: None
+    local_folder : str, optional
         The destination folder / directory to download the dataset to.
-        defaults to the path "get_global_dataset_folder()" / f{repo_name} (see `spikeinterface.core.globals`)
+        if None, then the path "get_global_dataset_folder()" / f{repo_name} is used (see `spikeinterface.core.globals`)
     update_if_exists : bool, default: False
         Forces re-download of the dataset if it already exists, default: False
-    unlock : bool, default: False
-        Use to enable the edition of the downloaded file content, default: False

     Returns
     -------
     Path
         The local path to the downloaded dataset
+
+    Notes
+    -----
+    The reason we use pooch is that we have had problems with datalad not being able to download
+    data on windows machines, especially in the CI.
+
+    See https://handbook.datalad.org/en/latest/intro/windows.html
     """
+    import pooch
     import datalad.api
+    from datalad.support.gitrepo import GitRepo

     if local_folder is None:
+        base_local_folder = get_global_dataset_folder()
+        base_local_folder.mkdir(exist_ok=True, parents=True)
+        local_folder = base_local_folder / repo.split("/")[-1]
-        local_folder.mkdir(exist_ok=True, parents=True)
-    else:
-        if not local_folder.is_dir():
-            local_folder.mkdir(exist_ok=True, parents=True)

+    local_folder = Path(local_folder)
     if local_folder.exists() and GitRepo.is_valid_repo(local_folder):
         dataset = datalad.api.Dataset(path=local_folder)
         # make sure git repo is in clean state
         repo = dataset.repo
         if update_if_exists:
             repo.call_git(["checkout", "--force", "master"])
             dataset.update(merge=True)
     else:
         dataset = datalad.api.install(path=local_folder, source=repo)

     local_path = local_folder / remote_path
+    dataset_status = dataset.status(path=remote_path, annex="simple")
+
+    # Download only files that also have a git-annex key
+    dataset_status_files = [status for status in dataset_status if status["type"] == "file"]
+    dataset_status_files = [status for status in dataset_status_files if "key" in status]

-    # This downloads the data set content
-    dataset.get(remote_path)
+    git_annex_hashing_algorithm = {"MD5E": "md5"}
+    for status in dataset_status_files:
+        hash_algorithm = git_annex_hashing_algorithm[status["backend"]]
+        hash = status["keyname"].split(".")[0]
+        known_hash = f"{hash_algorithm}:{hash}"
+        fname = Path(status["path"]).relative_to(local_folder)
+        url = f"{repo}/raw/master/{fname.as_posix()}"
+        expected_full_path = local_folder / fname

-    # Unlock files of a dataset in order to be able to edit the actual content
-    if unlock:
-        dataset.unlock(remote_path, recursive=True)
+        full_path = pooch.retrieve(
+            url=url,
+            fname=str(fname),
+            path=local_folder,
+            known_hash=known_hash,
+            progressbar=True,
+        )
+        assert full_path == str(expected_full_path)

     return local_path
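For orientation, a minimal usage sketch (illustrative, not part of the diff) of the rewritten function; the arguments shown are just its documented defaults:

from spikeinterface.core import download_dataset

# With local_folder=None the data lands under
# get_global_dataset_folder() / "ephy_testing_data", as the docstring describes.
local_path = download_dataset(
    repo="https://gin.g-node.org/NeuralEnsemble/ephy_testing_data",
    remote_path="mearec/mearec_test_10s.h5",
    update_if_exists=False,
)
print(local_path)  # local Path to mearec_test_10s.h5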
5 changes: 3 additions & 2 deletions src/spikeinterface/extractors/tests/common_tests.py
@@ -18,8 +18,9 @@ class CommonTestSuite:
     downloads = []
     entities = []

-    def setUp(self):
-        for remote_path in self.downloads:
+    @classmethod
+    def setUpClass(cls):
+        for remote_path in cls.downloads:
             download_dataset(repo=gin_repo, remote_path=remote_path, local_folder=local_folder, update_if_exists=True)

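Switching from setUp to setUpClass means each suite's datasets are downloaded once per test class instead of once per test method, which keeps the new three-OS matrix from repeating the same downloads.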
13 changes: 5 additions & 8 deletions src/spikeinterface/extractors/tests/test_datalad_downloading.py
@@ -1,15 +1,12 @@
 import pytest
 from spikeinterface.core import download_dataset
+import importlib.util

-try:
-    import datalad
-
-    HAVE_DATALAD = True
-except:
-    HAVE_DATALAD = False


-@pytest.mark.skipif(not HAVE_DATALAD, reason="No datalad")
+@pytest.mark.skipif(
+    importlib.util.find_spec("pooch") is None or importlib.util.find_spec("datalad") is None,
+    reason="Either pooch or datalad is not installed",
+)
 def test_download_dataset():
     repo = "https://gin.g-node.org/NeuralEnsemble/ephy_testing_data"
     remote_path = "mearec"
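The find_spec pattern above checks whether an optional dependency is installed without importing it, and it drops the old bare except:, which would also have masked genuine import-time errors. A minimal sketch:

import importlib.util

# find_spec returns None when the package is absent; the module is never imported.
HAVE_POOCH = importlib.util.find_spec("pooch") is not None
HAVE_DATALAD = importlib.util.find_spec("datalad") is not None
print(HAVE_POOCH, HAVE_DATALAD)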
9 changes: 6 additions & 3 deletions src/spikeinterface/extractors/tests/test_neoextractors.py
@@ -351,8 +351,10 @@ def test_pickling(self):
         pass


-# We run plexon2 tests only if we have dependencies (wine)
-@pytest.mark.skipif(not has_plexon2_dependencies(), reason="Required dependencies not installed")
+# TODO solve plexon bug
+@pytest.mark.skipif(
+    not has_plexon2_dependencies() or platform.system() == "Windows", reason="There is a bug on windows"
+)
 class Plexon2RecordingTest(RecordingCommonTestSuite, unittest.TestCase):
     ExtractorClass = Plexon2RecordingExtractor
     downloads = ["plexon"]
@@ -361,6 +363,7 @@ class Plexon2RecordingTest(RecordingCommonTestSuite, unittest.TestCase):
     ]


+@pytest.mark.skipif(not has_plexon2_dependencies() or platform.system() == "Windows", reason="There is a bug")
 @pytest.mark.skipif(not has_plexon2_dependencies(), reason="Required dependencies not installed")
 class Plexon2EventTest(EventCommonTestSuite, unittest.TestCase):
     ExtractorClass = Plexon2EventExtractor
@@ -370,7 +373,7 @@ class Plexon2EventTest(EventCommonTestSuite, unittest.TestCase):
     ]


-@pytest.mark.skipif(not has_plexon2_dependencies(), reason="Required dependencies not installed")
+@pytest.mark.skipif(not has_plexon2_dependencies() or platform.system() == "Windows", reason="There is a bug")
 class Plexon2SortingTest(SortingCommonTestSuite, unittest.TestCase):
     ExtractorClass = Plexon2SortingExtractor
     downloads = ["plexon"]
@@ -136,7 +136,7 @@ def test_compute_for_all_spikes(self, sparse):
         ext.run_for_all_spikes(pc_file2, chunk_size=10000, n_jobs=2)
         all_pc2 = np.load(pc_file2)

-        assert np.array_equal(all_pc1, all_pc2)
+        np.testing.assert_almost_equal(all_pc1, all_pc2, decimal=3)

     def test_project_new(self):
         """
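The relaxed assertion acknowledges that recomputing principal components with n_jobs=2 can differ from the single-job result by floating-point round-off, especially across three OSes, so bitwise equality is replaced by agreement to 3 decimals. For illustration:

import numpy as np

a = np.array([0.1 + 0.2])  # 0.30000000000000004 in binary floating point
b = np.array([0.3])

print(np.array_equal(a, b))  # False: exact, bitwise comparison
np.testing.assert_almost_equal(a, b, decimal=3)  # passes: |a - b| < 1.5e-3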
5 changes: 4 additions & 1 deletion src/spikeinterface/sorters/tests/test_container_tools.py
@@ -8,6 +8,7 @@
 from spikeinterface import generate_ground_truth_recording

 from spikeinterface.sorters.container_tools import find_recording_folders, ContainerClient, install_package_in_container
+import platform

 ON_GITHUB = bool(os.getenv("GITHUB_ACTIONS"))

@@ -58,7 +59,9 @@ def test_find_recording_folders(setup_module):
     assert str(f2[0]) == str((cache_folder / "multi").absolute())

     # in this case the paths are in 3 separate drives
-    assert len(f3) == 3
+    # Not a good test on windows because all the paths resolve to C when absolute in `find_recording_folders`
+    if platform.system() != "Windows":
+        assert len(f3) == 3


 @pytest.mark.skipif(ON_GITHUB, reason="Docker tests don't run on github: test locally")