About the issue of recording saving. #2826
Comments
Hi @yyyaaaaaaa, what spikeinterface version are you using?
I'm using version 0.100.6, and here is the information about my preprocessed recording:
CommonReferenceRecording: 1012 channels - 20.0kHz - 1 segments - 6,000,200 samples
Can you try with n_jobs=1?
After waiting for a while, it started working normally. However, it seems a bit slow. Is this normal?
write_binary_recording with n_jobs = 1 and chunk_size = 20000
With 1 job it's supposed to be slow. Can you try to gradually increase it? Does it work with 2?
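A minimal sketch of what gradually increasing the job count could look like, reusing the variable names from the script quoted at the end of this issue (the test folder name is hypothetical):

# retry the save with two workers instead of one
job_kwargs = dict(n_jobs=2, chunk_duration="1s", progress_bar=True)
rec_saved = recording_sub.save(folder=DATA_DIRECTORY / "binary_test_2jobs",
                               overwrite=True, format="binary", **job_kwargs)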
It's not working. So far, no relevant information has been printed out.
Just for our provenance, this is now the 4th case of n_jobs > 1 not working on save binary, 3 times on Windows. #2820 is another example. I don't understand the deeper levels of the chunking well enough to troubleshoot this, but I think it has to do with some nitty-gritty environment issues on specific computers. For example, on my labmate's computer it works in the terminal but not in the IDE.
Yes, I'm using a Windows system and running my script through PyCharm. Fortunately, setting n_jobs = 1 allows me to work normally, although it's a bit slower :)
One last question that will be useful for us: what format is your original data? That is, what format is your original recording? Also, I think your chunks are too small for writing. This is a deep issue, but I suggest trying two things.
I find the latter unlikely because your process should be killed at some point, but maybe it is over-swapping and that's why it becomes so slow.
Suggestion:
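As an illustration of the "chunks are too small for writing" point above, a larger chunk_duration can be passed through job_kwargs. This is only an example, not necessarily the exact suggestion the author had in mind; the variable names reuse those from the script at the end of the issue:

# fewer, larger chunks reduce per-chunk overhead when writing
job_kwargs = dict(n_jobs=1, chunk_duration="10s", progress_bar=True)
rec_saved = recording_sub.save(folder=DATA_DIRECTORY / preprocessed,
                               overwrite=True, format="binary", **job_kwargs)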
That's our default, so if that is the case then it's really our fault for choosing it :) @h-mayorquin the one Linux person who had this fail had an issue with numpy backend stuff; see this comment here
I am linking the numpy issue since it is broken in the other thread: Unfortunately, I don't think we can safeguard against bugs at the numpy/Intel compiler level.
But in this case I think we need to make sure that people have the option to use n_jobs=1 at each stage. @alejoe91 and I had previously talked about adding a troubleshooting section to the docs to let people know they could look into this on their own computer if they want to try to use multiprocessing.
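For reference, a hedged sketch of forcing single-job processing everywhere, assuming spikeinterface's set_global_job_kwargs helper is available in the installed version:

from spikeinterface.core import set_global_job_kwargs

# make every parallelizable step default to a single worker
set_global_job_kwargs(n_jobs=1)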
I was talking about the CI. I think that testing for that specific case is too granular even for my preferences, especially if it just hangs. I think that
Here's another saving issue on Windows I had forgotten about: #1922. When I have time I might open a separate global issue so I can try to summarize the state of problems with multiprocessing in the repo.
For more info on this issue: one of my labmates can do multiprocessing from the terminal, but not from an IDE. Since this user is using an IDE, maybe there is a problem there... Not sure how to solve that though.
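A general Windows note, not stated in the thread: multiprocessing on Windows uses the "spawn" start method, so when the save is run as a script, the parallel call should sit behind an entry-point guard. A minimal sketch with hypothetical paths:

import spikeinterface.full as si

def main():
    # hypothetical paths; the __main__ guard is what matters on Windows
    recording = si.load_extractor(r"H:\path\to\preprocessed_folder")
    recording.save(folder=r"H:\path\to\output_binary", format="binary",
                   n_jobs=2, chunk_duration="1s", progress_bar=True)

if __name__ == "__main__":
    main()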
Which IDE does your colleague use?
Windows 11. Using an IPython REPL in the same conda env works fine for tqdm and multiprocessing.
Is the error still present if you try to write a generate_recording()? (This would allow us to discard a reading problem so we can focus on the core functions instead of the input format.) Tomorrow I will allocate some time to work on the Plexon issue, as it was a request from my boss. While on Windows I can try to see if I can reproduce the issue, but I need to know the input format.
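A minimal sketch of that sanity check, assuming generate_recording from spikeinterface.core and a hypothetical output folder:

from spikeinterface.core import generate_recording

# synthetic recording: takes the file reader out of the equation
rec = generate_recording(num_channels=4, sampling_frequency=20000.0, durations=[10.0])
rec.save(folder="test_generated_binary", format="binary",
         n_jobs=2, chunk_duration="1s", progress_bar=True)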
Will test later today. It is only on a specific computer, so I have to wait for the person to come in...
I got an update. So using
OK, and the input previously was Intan? (Assuming this from your lab.)
Yep.
OK, once this is done: we should give the Intan version you guys are using the same treatment as this one to make it smoother:
Let's close this. This discussion is too far in the past and many things have changed. If the problem appears again we can focus on it.
This short answer is under a license I deposited in 1857.
What? xD
When I try to save preprocessed data after extracting the recordings, my terminal always seems unresponsive, as if the entire code has stopped running. However, when I check the saved folder, it already exists. But when I try to load it using si.load_extractor(), it throws the following error.
Spikeinterface version is v0.100.6.
Traceback (most recent call last):
File "E:\y\python_files\sort\test.py", line 101, in
recording_rec = si.load_extractor(DATA_DIRECTORY / preprocessed)
File "D:\software\Anaconda3\envs\kilosort4\lib\site-packages\spikeinterface\core\base.py", line 1146, in load_extractor
return BaseExtractor.load(file_or_folder_or_dict, base_folder=base_folder)
File "D:\software\Anaconda3\envs\kilosort4\lib\site-packages\spikeinterface\core\base.py", line 781, in load
raise ValueError(f"This folder is not a cached folder {file_path}")
ValueError: This folder is not a cached folder H:\MEA_DATA_binary\yy\20240130\20240130_19531_D13\240130\19531\Network\000015\binary_for_ks4
Here's the script I'm using.
# imports inferred from the calls below; `recording` and `DATA_DIRECTORY`
# are defined earlier in the user's script
import spikeinterface.full as si
from spikeinterface.preprocessing import bandpass_filter, common_reference

recording_f = bandpass_filter(recording=recording, freq_min=300, freq_max=6000)
recording_cmr = common_reference(recording=recording_f, operator="median")
recording_sub = recording_cmr
preprocessed = "binary_for_ks4"
job_kwargs = dict(n_jobs=30, chunk_duration='1s', progress_bar=True)
rec_saved = recording_sub.save(folder=DATA_DIRECTORY / preprocessed, overwrite=True, format='binary', **job_kwargs)
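For completeness, a sketch of the single-job workaround that ended up working in this thread, reusing the names from the script above (DATA_DIRECTORY and recording_sub are the user's own variables):

# fall back to a single worker, then reload to confirm the folder is valid
job_kwargs = dict(n_jobs=1, chunk_duration='1s', progress_bar=True)
rec_saved = recording_sub.save(folder=DATA_DIRECTORY / preprocessed, overwrite=True, format='binary', **job_kwargs)
rec_loaded = si.load_extractor(DATA_DIRECTORY / preprocessed)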