Less aggressive masking for first echo #827

Open
handwerkerd opened this issue Nov 15, 2021 · 5 comments
Labels
breaking change (will make a non-trivial change to outputs), effort: high (more than 40h total work), enhancement (issues describing possible enhancements to the project), impact: medium (improves code/documentation functionality for some users), priority: low (issues that are not urgent)

Comments

@handwerkerd
Member

Summary

There have been several anecdotal reports of CSF and even white matter voxels being masked out. This isn't necessarily a problem for tedana denoising itself, but it may affect how tedana interacts with other pipelines. (For example: NBCLab/power-replication#14 )

These voxels probably shouldn't be masked out. Tedana's masking is fairly simple: it excludes voxels with relatively low raw magnitudes. We could see whether there is a way to still mask out dropout regions while retaining the ventricles.
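As a rough sketch of what intensity-based masking does (not tedana's actual implementation; the function name, the `frac` parameter, and the percentile choice are illustrative assumptions), a threshold like this tends to drop low-signal voxels such as ventricles and deep white matter along with true dropout regions:

```python
import numpy as np


def simple_intensity_mask(echo1_data, frac=0.1):
    """Illustrative intensity-threshold mask.

    echo1_data : (n_voxels, n_timepoints) array of first-echo magnitudes.
    Keeps voxels whose mean signal exceeds a fraction of a robust maximum.
    """
    mean_signal = echo1_data.mean(axis=1)
    # Use a high percentile rather than the true max so a few bright voxels
    # don't inflate the threshold.
    threshold = frac * np.percentile(mean_signal, 98)
    return mean_signal > threshold
```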

Next Steps

  • Agree this is something worth addressing
  • Look at other masking methods to see if there's an existing lightweight method
@handwerkerd added the enhancement, priority: medium (should get addressed soon), effort: high, impact: medium, and breaking change labels on Nov 15, 2021
@tsalo changed the title from "Less agressive masking for first echo" to "Less aggressive masking for first echo" on Nov 15, 2021
@tsalo
Member

tsalo commented Nov 15, 2021

What I ended up doing in the fork I'm using for that analysis is to assume that all voxels in any user-provided mask have good data in at least the first two echoes. It probably means that we end up relying more on our T2* value restrictions in those voxels, which is unfortunate, but given how we perform optimal combination, it shouldn't be a big problem.
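A minimal sketch of that idea, assuming a tedana-style adaptive mask that stores the voxelwise count of "good" echoes and a user-provided brain mask (the function and variable names are hypothetical, not the fork's actual code):

```python
import numpy as np


def floor_adaptive_mask(adaptive_mask, brain_mask, min_good_echoes=2):
    """Treat every in-brain voxel as having at least `min_good_echoes` good echoes."""
    adjusted = adaptive_mask.copy()
    in_brain = brain_mask.astype(bool)
    adjusted[in_brain] = np.maximum(adjusted[in_brain], min_good_echoes)
    return adjusted
```

With this floor in place, no in-brain voxel is dropped outright; voxels with poor later echoes are instead handled by the T2* value restrictions mentioned above.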

@handwerkerd added the priority: low label and removed the priority: medium label on Nov 15, 2021
@handwerkerd
Member Author

For your fork, what if you assume that only the first echo is good, and remove voxels that drop too much from the first to the second echo? Right now, we're independently creating a mask for each echo. In the revised version, you'd find the median value for each echo and compute median_pchange = (median_echo1 - median_echo2) / median_echo1. If pchange in a voxel > X * median_pchange (say, for X = 2), then assume you're seeing significant dropout rather than a decay curve, and mask out that voxel.
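Here is a minimal sketch of that check, assuming `echo1` and `echo2` are voxelwise mean magnitudes for the first two echoes restricted to in-brain voxels (the names and the default X are illustrative, not an agreed-on implementation):

```python
import numpy as np


def dropout_mask(echo1, echo2, x=2.0):
    """Flag voxels whose echo1 -> echo2 drop is much larger than typical decay."""
    median_pchange = (np.median(echo1) - np.median(echo2)) / np.median(echo1)
    voxel_pchange = (echo1 - echo2) / echo1
    # Voxels dropping more than X times the median percent change are treated
    # as signal dropout rather than ordinary T2* decay.
    return voxel_pchange > x * median_pchange
```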

This is something I could see playing with in tedana proper, but I'm not sure how to prioritize. I originally set this as medium priority, but I just shifted to low priority. Do you think this would be better to address sooner rather than later or is it more of an edge case?

@tsalo
Member

tsalo commented Nov 30, 2021

> For your fork, what if you assume that only the first echo is good, and remove voxels that drop too much from the first to the second echo?

My biggest issue in my analysis was that CSF and WM voxels were being removed, which prevented me from using aCompCor. Dropping any voxels from the fMRIPrep brain mask would have been a problem.

> In the revised version, you'd find the median value for each echo and compute median_pchange = (median_echo1 - median_echo2) / median_echo1. If pchange in a voxel > X * median_pchange (say, for X = 2), then assume you're seeing significant dropout rather than a decay curve, and mask out that voxel.

This sounds similar to #312. I think looking at the signal decay instead of each echo separately could be a great improvement for the adaptive mask, but I think we'd still end up with issues in the CSF and WM, since I've seen signal curves there that don't actually decrease regularly. Perhaps the two-compartment model (#212) would solve that issue, but I believe that would require a lot of echoes. At the moment, just assuming that all voxels in the brain mask are at least good enough for optimal combination (and leaning heavily on our T2* restrictions) seems to work well.

@handwerkerd
Member Author

Another lightweight option would be region filling (either by default or as another option). That would mean that no voxel surrounded by in-mask voxels would be removed from the mask. For a whole-brain volume, that would almost certainly retain the lateral ventricles and should also retain much of the white matter. I haven't seen white-matter masking quite like what you're describing, so I'm not sure this would reliably solve that issue.
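For reference, a minimal sketch of that kind of region filling using SciPy's hole-filling routine (assuming `mask` is a 3D boolean array; this is one possible implementation, not an agreed-on tedana change):

```python
from scipy import ndimage


def fill_mask_holes(mask):
    """Restore any voxel that is fully enclosed by in-mask voxels.

    binary_fill_holes only fills pockets that are completely surrounded, so it
    can recover the ventricles without adding voxels outside the brain's outer
    boundary.
    """
    return ndimage.binary_fill_holes(mask.astype(bool))
```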

@tsalo
Member

tsalo commented Dec 3, 2022

I like the idea of region-filling using the T2* and S0 estimates.

EDIT: What about scipy.interpolate.NearestNDInterpolator?
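A hypothetical sketch of how that could work, assigning each excluded voxel the T2* value of its nearest in-mask neighbor (the array names `t2s`, `core_mask`, and `target_mask` are assumptions for illustration, not tedana variables):

```python
import numpy as np
from scipy.interpolate import NearestNDInterpolator


def fill_t2s_nearest(t2s, core_mask, target_mask):
    """Fill T2* in target_mask voxels from the nearest core_mask voxel.

    t2s : 3D array of T2* estimates.
    core_mask : 3D boolean array of voxels with trusted estimates.
    target_mask : 3D boolean array of voxels that need values (e.g., brain mask).
    """
    core_idx = np.argwhere(core_mask)
    interp = NearestNDInterpolator(core_idx, t2s[core_mask])
    filled = t2s.copy()
    fill_idx = np.argwhere(target_mask & ~core_mask)
    filled[tuple(fill_idx.T)] = interp(fill_idx)
    return filled
```

The same approach could be applied to the S0 map; nearest-neighbor filling also keeps every filled value within the observed range of the trusted estimates.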
