Feature/td nirs snirf #9661
base: main
Conversation
…a type as outlined in the SNIRF specification document
…s, moments and processed (hbo/hbr)
Hello! 👋 Thanks for opening your first pull request here! ❤️ We will try to get back to you soon. 🚴
Hi @ZahraAghajani, thanks for this PR; it's very exciting to have time-domain functionality proposed. I would welcome the addition of support for time-domain NIRS. MNE-Python already supports continuous-wave and frequency-domain NIRS types, so this extension makes sense to me. @agramfort and @larsoner, what do you think?

With your PRs, can I suggest that you don't use acronyms, or that you clearly define them? Otherwise it is hard to understand what you are trying to say, even for someone who does fNIRS. Also, please use the PR template rather than simply deleting the text.

As a first step I suggest you read through #6674 and #7717, where the other fNIRS types were added, so you can get a feel for where changes will need to be made. mne-tools/fiff-constants#28 may also be useful for you to see. Looking through the code I see you also wish to add a "processed" data type.

Can you please add more detail to your proposal, specifying exactly what types you wish to add and why they are useful. For the non-fNIRS people, please explain how time-domain NIRS differs from the other types supported in MNE, and also how they relate. Once your proposal is clearly specified we will be able to provide more detailed feedback (e.g. documentation, upstream fiff changes, what tests need to be added, test data, where the new functionality can be demonstrated, etc.).

We will also need some test data. Do you have access to a machine that exports TD data in SNIRF format? What machine do you have? Data will need to be added; mne-tools/mne-testing-data#51 shows the level of detail I would like, so that we can test the reading appropriately. We need information about the measurement, position of optodes, etc. to test against.

Can you also revert your style changes and try to follow the style that is used throughout MNE-Python.

Cool stuff,
Can you also please add a list of processing functions that would need to be added to actually use time domain NIRS in MNE? So that we can understand the scope of what's being proposed. I understand this won't all be done immediately (or even by you), but it will help for us to understand the bigger picture. References would help too.
Agreed in principle it would be nice to support all fNIRS types. I don't have a clear picture of what processing steps are required to get data of this form to HbO/HbR, for example.
Thanks for the feedback!
Yes, we understand the scope of this PR was only to modify the reader. But deciding to support an entirely new type of data requires a conscious decision from the community, as we need to maintain and provide support for this type. As such, it would be appreciated if in your follow-up PR you could provide a clear picture of time-domain NIRS (suitable for general neuroimaging scientists who don't currently do fNIRS) and some context as to what a typical pipeline for TD-NIRS is, as I think it's quite different to continuous wave?
@rob-luke Absolutely, I will add the details you requested (as well as mention some insightful review papers) in my follow-up PR. As for the formatting, I had set my VS Code to use autopep8 (with a line length of 80, which was recommended), but I will try to revert. Stay tuned!
Awesome. And no rush. Please reach out if you need any extra info etc.
@Zahra-M-Aghajan big +1 on this direction. Am in need of exactly this functionality. Happy to test/review stuff when the new PR is in.
@JohnGriffiths Happy to hear this will be useful for you. In the meantime, please feel free to clone this branch, since it at least allows you to read TD-NIRS and processed (oxy/deoxy signals) data types correctly, so you have something to work with!
@Zahra-M-Aghajan - thanks! Have cloned and checked it out. Can read in a KF .snirf file :) I was wondering what functionality you have implemented in the repo now beyond the basic I/O. Do you have any minimal scripts yet, or otherwise an idea of which MNE-Python fNIRS features should be implemented and which shouldn't? Thanks!
I had a chat this week with @JohnGriffiths about supporting time-domain NIRS (TD-NIRS) in MNE and how we can move towards this goal. First, if it's unclear, I support adding time-domain NIRS to MNE. Adding new types to MNE is not a small endeavour and needs careful consideration, but I am happy to support any such efforts. I just don't have a TD-NIRS machine myself, so I can't be the driver of this effort. @Zahra-M-Aghajan made a great initial push, and @JohnGriffiths reports that he has been using a fork of her branch successfully. So I think the guts of adding TD-NIRS are complete; we just need to get the details sorted. To this end I think we need the following:
So really not too many blockers here. We just need to understand the full scope of the proposal. Let's get the conversation started and push forward on this.
Hi @rob-luke @Zahra-M-Aghajan. Rob - great chatting with you at FNIRS2021. Let's get this TD and KF MNE-NIRS show on the road! Re: test data, and a number of other things, this is hopefully a good starting point: https://griffithslab.github.io/kernel-flow-tools/auto_examples/eg04r__glm_analysis.html As I said when we talked, I think there should be enough info there for you to make the test file as requested. I'm still not entirely clear what's needed there, I'm afraid, so it's probably most efficient if you could do that, if possible?
Thanks @JohnGriffiths. Then once the reader works sufficiently we can add example files for analysis like you've provided.
I am 100% happy to do that.
Re: roadmap, here are some thoughts on my end. The Kernel Flow portal currently allows us to work with three datatypes: i) 'raw' moment (up to third order) time series; ii) HbO/HbR time series per channel; and iii) HbO and HbR DOT NIfTI images. For mne-nirs the most important is naturally i). The example data listed in the previous comment is of type ii). We have a preliminary and somewhat raw Homer3 workflow that goes from i) to iii) (surface-based). The most natural approach for now is probably to gradually port that workflow part by part into mne-nirs, but I am very open to other suggestions. Relevant to this, there are some comments from Ryan Field about Kernel's in-house processing pipeline; I'm sure he wouldn't mind me relaying them here.
Ok sure I can get you a smaller file.
Hello @rob-luke and @JohnGriffiths! For TD-fNIRS, both gated and moments data (data types 201 and 301, per the SNIRF specs) need to be added. This is a great paper/resource for TD-NIRS analyses. Also, @rob-luke, I know you didn't like the "processed" data type, but it is one of the data types (99999) specified in the SNIRF specs documentation here. This way, you don't need dedicated "HbO", "HbR", "dMean", "dVar" data types; they can all be specified by dataTypeLabels. @JohnGriffiths thanks for providing the test file!
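For readers following along, the data type codes discussed in this thread can be sketched as a small lookup. This is purely illustrative (the helper name and description strings are made up, not MNE-Python API); the codes themselves come from the SNIRF specification as cited above:

```python
# Sketch of the SNIRF dataType codes discussed in this PR.
# Codes are from the SNIRF spec; the helper itself is hypothetical.
SNIRF_DATA_TYPES = {
    1: "continuous wave amplitude",        # already supported by MNE-Python
    201: "TD gated histograms",            # time-domain, added by this PR
    301: "TD moments",                     # time-domain, added by this PR
    99999: "processed",                    # meaning carried by dataTypeLabel
}

# For dataType 99999, the actual quantity is given by dataTypeLabel.
PROCESSED_LABELS = {"HbO", "HbR", "dMean", "dVar", "dSkew"}

def describe_channel(data_type, data_type_label=None):
    """Return a human-readable description of a SNIRF measurement channel."""
    name = SNIRF_DATA_TYPES.get(data_type)
    if name is None:
        raise ValueError(f"Unsupported SNIRF dataType: {data_type}")
    if data_type == 99999:
        if data_type_label not in PROCESSED_LABELS:
            raise ValueError(f"Unknown dataTypeLabel: {data_type_label!r}")
        return f"processed ({data_type_label})"
    return name
```

For example, `describe_channel(99999, "HbO")` returns `"processed (HbO)"`, which mirrors the point above: one processed code, disambiguated by label, instead of a dedicated code per quantity.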
So you only need 201, 301, dMean and dVar? But not dSkew?
Is it possible to share the same smaller file in each of the formats you list above? I guess users will want the ability to load both i and ii? Having the data from the same exact recording will allow us to verify our processing when added in the future. And having iii will be useful for potential future integration with nilearn.
@JohnGriffiths -- related to the Kernel processing pipeline, for the two types of SNIRF files in particular, we just posted some details on community.kernel.com. The post will be updated as our processing evolves.
Thanks @Zahra-M-Aghajan @JohnGriffiths @julien-dubois-k Once John gets us the data, should I take over the coding and then get you to review? @Zahra-M-Aghajan @julien-dubois-k is there any chance I can get access to the information you linked to? It's behind a login portal. The more I understand, the faster and more helpful I can be. We don't want to replicate your approach in MNE, but I would like to ensure we have the correct metadata, etc.
@julien-dubois-k I am sure @JohnGriffiths would be pleased to have you provide this data instead. Could you please provide 2-3 seconds of data from just a few frontal plates, but include both within- and across-module channels (to ensure we can load them correctly). Please include 1 or 2 triggers in the data too (also to test that we load them correctly). Please provide as much information as you can about the data, so that I can add tests for it (an example of useful information is mne-tools/mne-testing-data#51), or even just a photo of the placement of the optodes, so that we can verify the positions are correct after loading in our tests. Please also include the three data types as John did, so we can ensure we can load the raw data, Hb data, and image-space data. Thanks!
Yes - thanks @julien-dubois-k, it would be great if you could provide the recording. I have tried with our KF a few times now - I can't get a file with events recorded in under 15 seconds (I think because the start and end are trimmed), and 15 seconds of the moments files are still very big. So I think a recording with a smaller number of channels would be very helpful here.
Here are very short SNIRF files of the two types that we export on the Kernel Portal. As for the reconstruction output (NIfTI file, fMRI-like), I am not sure if this is something that MNE-NIRS wants to support, and any NIfTI would do for tests.
Thanks @julien-dubois-k. I'm on leave at the moment. But will integrate this as soon as I return.
This is absolutely something MNE-NIRS will support (using nilearn as the backend). I am sure there are some unique aspects to the Kernel NIfTI output (everyone writes files slightly differently; also, having predominantly cortical activity is quite unique relative to fMRI). So if you could share the matching NIfTI file, that would help (e.g. for testing visualisations with matching SNIRF and NIfTI files).
Thank you @julien-dubois-k for the test file. @rob-luke - re: NIfTI analyses - here are some very minimal, raw, and poorly documented examples of nilearn analysis with NIfTI images:
https://griffithslab.github.io/kernel-flow-tools/auto_examples/eg006r__nifti_vol_glm.html
https://griffithslab.github.io/kernel-flow-tools/auto_examples/eg007r__nifti_vol_fc.html
https://griffithslab.github.io/kernel-flow-tools/auto_examples/eg010r__nifti_surf_seedfc.html
These are very custom functions that can and should be generalized in the near future. @julien-dubois-k @Zahra-M-Aghajan - side note - correct me if I'm wrong, but you guys aren't planning on releasing any OS code like this, right? Would be handy to know whether or not these efforts are redundant.
@JohnGriffiths there are no short-term plans for Kernel to release OS code, so your efforts aren't redundant at the moment!
Thanks. As mentioned in #9661 (comment)
@julien-dubois-k can you please provide some general information about how this data was acquired, so we can ensure we are reading it correctly and add tests. For example: how many channels, where the modules are placed on the head, how long the recording was, the sample rate, how many sources, how many detectors, what triggers are in the data, etc., preferably with screenshots from the acquisition software. The more information here the better; it means we can add more robust testing. At a minimum we MUST know the version of the hardware and software used to acquire the data.
We need small files for fast testing. I'm happy to help, but I need you all to supply a complete set of test files with a complete description of the acquisition process. (I know John already did.)
This is all I have for now, and I think it should be enough for testing the reader. This is just meant to be used for tests, not any meaningful analysis.
Thanks @julien-dubois-k! What was the model of the hardware used to acquire the data? And what software version was used? (Software in particular drifts over time, so we like to keep track of which versions we support.)
The hardware is Kernel Flow50. As you are well aware, the SNIRF specification continues to evolve. This is our best attempt at following the current specification.
Awesome. I think that's everything I need now. Thanks for the help. I'll ping you when progress is made.
Oh, I know what you mean! And that's why I am so pedantic about collecting test data: implementations vary across software and vendors, etc. But hopefully the spec is now solidifying. Please report any issues you have with SNIRF; we can all improve it together, but I hope it's close now.
Actually, the matching NIfTI file isn't there. @julien-dubois-k can you please add it.
Any chance of getting the matching NIfTI file?
(FYI, I have a feeling an exact matching file might not be possible, since their preproc pipeline crops up to 5 seconds from the start, and the recording in question is only 1.5 s long with only 2 events.)
We need a complete test dataset for me to work with. If it needs to be 5 s longer to be complete, then that's acceptable.
There is not really such a thing as a matching NIfTI file, as alluded to by @JohnGriffiths. The Kernel reconstruction pipeline currently operates on 1 s windows of data, so it will have a different sampling rate. It also operates on different input data (histograms of time of flight), not on the SNIRF file data (moments). So I don't believe there is a point in including NIfTI files as test files here. This PR is about testing TD-NIRS SNIRF files; I provided test files for this purpose.
I think that's fair. @rob-luke I would suggest we revisit this question, and nilearn NIfTI image analysis options in general, when we have that functionality a bit more figured out and developed in `kftools`.
I'd like to have access to all the files if you want me to help with the coding. I understand the NIfTI is a derived quantity (just like the HbO/HbR file), but I'd like to develop the code with all pieces of information. The idea that I will be drip-fed test files one by one on a need-to-know basis is not an appealing concept.
I'd rather not get half the files now and then some later. Why revisit something later when I can just do it in one go? And how can I figure things out if I don't have the files? Given how easy it is to get the files (isn't it just hitting export?), and given my previous experience of people moving on (changing jobs, etc.) between PRs, I think it's best to provide all three files at once (otherwise we end up with two unmatched half-datasets). This also helps me to develop appropriate examples, visualisations, and functions. I can't understand the reluctance to share test data. I just want to ensure we can handle all the appropriate data types and integrate them.
This is precisely the information that we need to know, and what we would learn from having access to a test file. It's these types of quirks and differences that we need to ensure we can handle.
Adding support for a completely new type of data (TD-NIRS) is a big step, and we need to understand the implications for software design, but also for support and maintenance. So we need to consider the bigger picture, not just an isolated PR.
Ok, no worries, I can do this. Will probably have to manually edit the files to make them small enough.
Manual editing is dangerous; the whole point is to get files exactly as they come from the vendor, to ensure compatibility. Only take the manual approach as a last resort, and only if you are certain the result will be identical to a natively exported file. There are lots of subtle changes that manual editing can cause (change of string encoding, change of line endings, misalignment of fields, mismatched fields if you forget to edit one, etc.).
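One cheap first-line sanity check after any such edit: SNIRF files are HDF5 containers, so a file that no longer starts with the HDF5 signature has definitely been broken. This sketch is illustrative only (the function name is made up, and passing the check proves nothing about the file's SNIRF contents):

```python
# The standard 8-byte HDF5 signature. Note: HDF5 technically allows a
# user block before the superblock, but files written by typical tools
# (including h5py) place the signature at offset 0.
HDF5_MAGIC = b"\x89HDF\r\n\x1a\n"

def looks_like_hdf5(path):
    """Return True if the file begins with the HDF5 signature.

    Failing this check means the file is certainly not a usable SNIRF
    file; passing it is only a weak necessary condition.
    """
    with open(path, "rb") as f:
        return f.read(len(HDF5_MAGIC)) == HDF5_MAGIC
```

A fuller check would open the file with h5py and walk the `/nirs` groups, but even this byte-level test catches gross corruption such as an editor rewriting the file as text.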
I'm sorry if this looks like reluctance about sharing test files; it is reluctance about broadening the scope of this PR. The scope of this PR is to support new SNIRF datatypes. The NIfTI files that we currently offer through the Kernel Portal are simply not in scope here; if people want to analyze those, they would go directly to nilearn. The volume reconstructions are not really tied to TD-NIRS (the scope of this PR); they could be made from CW-NIRS data.
Ok. Good luck with the PR. Ping me when you are ready for a review.
@rob-luke it sounds like we might be able to continue here without the NIfTI files, right? If so, maybe a first step is to add the new constants to FIF-constants. I can make a quick PR for this if it might help move things forward! @Zahra-M-Aghajan okay for me to push to your branch?
Actually, a collaborator sent me a help doc from Kernel suggesting to use this branch, so I'm hesitant to push to it. It might be better to keep this branch for people who are trying to follow those instructions and open a new PR: #11064
I agree with you @larsoner that leaving this as-is and opening a new PR is the best plan of action. Please let me know how I can be of help.
This PR is aimed at expanding the ability of mne-nirs to read SNIRF files with data types beyond data type 1 (continuous wave amplitude).
Changelog

`read_raw_snirf` can now handle reading of the following data types: a) TD-NIRS gated histograms (data type 201); b) TD-NIRS moments (data type 301); and c) the processed data type (99999).

Note: although the new data cannot be processed in the same manner as the CW data, reading the data itself will allow users to develop new functionality.
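The note above (the new types are readable but not yet processable) could be enforced with a guard along these lines. This is a plain-Python sketch, not the actual MNE-Python API; `fnirs_cw_amplitude`, `hbo`, and `hbr` are existing MNE channel types, while the TD channel-type name in the usage example is made up:

```python
# Illustrative guard: reading TD-NIRS channels succeeds, but converting
# them to haemoglobin via the CW pipeline should fail loudly, not silently.
CONVERTIBLE_TO_HAEMO = {"fnirs_cw_amplitude"}  # CW pipeline only

def check_convertible(ch_types):
    """Raise NotImplementedError for channel types with no conversion path.

    hbo/hbr are excluded from the check because they are already
    haemoglobin concentrations and need no conversion.
    """
    unsupported = set(ch_types) - CONVERTIBLE_TO_HAEMO - {"hbo", "hbr"}
    if unsupported:
        raise NotImplementedError(
            "No processing pipeline yet for channel types: "
            + ", ".join(sorted(unsupported)))
```

So `check_convertible(["fnirs_cw_amplitude", "hbo"])` passes quietly, while a hypothetical TD type such as `"fnirs_td_moments_amplitude"` raises, making the current read-only support explicit to users.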