Eigenanatomy: Overview and decomposition of current work with pointers to software

Eigenanatomy is a general framework for reducing the dimensionality of imaging data into interpretable pieces. We motivate eigenanatomy by two ideas:

"voxels that change together should hang together."

That is, the voxels should form "networks" across the brain.

"clustering before hypothesis testing, not hypothesis testing then clustering"

This conserves statistical power in a controllable manner by reducing the number of tests one must perform.

(Figure: eigenanatomy example)

This algorithm allows one to interrogate data that has been transformed into a common template space. The output can be used for data exploration or directly as variables in a statistical model, just like classic regions of interest. The algorithm can be applied to any continuous data that can be stored in matrix format; examples include voxel-wise measurements of cortical thickness or volume, perfusion imaging, or various scalar images derived from diffusion tensor data. Connectivity matrices may also be passed as input. If given a mask, the algorithm can also enforce spatial regularity on the components (sometimes referred to as graphnet or fused-lasso regularization), which allows the resulting eigenanatomy components (pseudo-eigenvectors) to "look" like reasonable anatomical parcellations when plotted on a brain.
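
For concreteness, here is a minimal sketch of this basic workflow in ANTsR, assuming a list of template-space images and a binary mask; `imgList` and `mask` are hypothetical inputs, and argument names and return values of sparseDecom may differ between versions.

```r
library( ANTsR )

# imgList: subject images already warped to a common template (hypothetical)
# mask:    binary antsImage defining the voxels to analyze (hypothetical)
mat <- imageListToMatrix( imgList, mask )     # n subjects x p voxels

# sparse, spatially regularized decomposition; cthresh keeps only clusters
# of at least 250 voxels so components look like anatomical parcels
eanat <- sparseDecom( inmatrix = mat, inmask = mask,
                      sparseness = 0.05, nvecs = 10,
                      its = 5, cthresh = 250 )

# project each subject onto the pseudo-eigenvectors to obtain ROI-like
# variables for use in a statistical model
eanatProjections <- mat %*% t( eanat$eigenanatomyimages )
```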

Eigenanatomy key contributions

  • explicit introduction of anatomical regions of interest as priors for PCA
    • these priors are similar to a ridge penalty on PCA but we also determine sparseness based on the priors
  • the L1 sparseness operator incorporates a smoothing operation (tunable by the user) that makes sure the solutions are smooth and/or clustered
  • two alternatives for automated parameter selection
    • the user provides regions of interest, which set both the sparseness and the number of components
    • the algorithms (eanatSelect and eanatDef) in ANTsR analyze the data and recommend the number of components and sparseness (a usage sketch follows this list)
  • exploit the fact that eigenanatomy components are ordered by explained variance to prioritize testing (see testEanat). This function will return the number of eigenanatomy components to test, given a statistical model.
  • we also allow the user to enforce the pseudo-eigenvectors to have non-negative (or unsigned) weights, which means they can be interpreted as weighted averages.
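
A rough sketch of the automated-parameter path follows, assuming ANTsR and the matrix `mat` from the sketch above; the exact interfaces and return values of eanatSelect and eanatDef vary between versions, and `age` is a hypothetical outcome variable.

```r
# eanatSelect analyzes the data and recommends decomposition parameters
# (its return format differs across ANTsR versions)
sel  <- eanatSelect( inmat = mat, mask = mask )

# eanatDef computes the pseudo-eigenvectors themselves (a k x p matrix)
eseg <- eanatDef( inmat = mat, mask = mask )

# variance-ordered, ROI-like predictors for a standard linear model
proj <- mat %*% t( eseg )
mdl  <- lm( age ~ proj )      # 'age' is a hypothetical outcome
summary( mdl )

# testEanat can then bound how many of these variance-ordered components
# need to be tested for a given model (interface not shown here)
```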

Practical use cases

  • make "soft" regions of interest. In this case, Eanat may be used to retain interpretability of regions of interest while modeling the intrinsic variability of the data and "focusing" the ROI on the most relevant sub-portion of the region.

  • split regions of interest. In this case, Eanat can divide a region of interest into sub-regions that capture relatively uncorrelated components of variation within the region.

  • generate anatomical or network-like ROIs from data where you do not have prior hypotheses.

  • link regions of interest to identify networks (see joinEigenanatomy).

  • Eanat may be used like a segmentation algorithm where the segmentation is based on the data covariance (see eigSeg and the sketch below).
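
A brief sketch of the segmentation and network-joining use cases, assuming ANTsR and the `eanat` decomposition computed earlier; argument names may differ between versions.

```r
# convert the k x p eigenanatomy matrix back to a list of images
eanatImages <- matrixToImages( eanat$eigenanatomyimages, mask )

# eigSeg: label each in-mask voxel by the component with the largest weight,
# yielding a covariance-driven parcellation of the mask
seg <- eigSeg( mask = mask, imgList = eanatImages )

# joinEigenanatomy: merge components whose subject-space projections are
# correlated, producing network-like groupings of the original components
nets <- joinEigenanatomy( datamatrix = mat, mask = mask,
                          listEanat = eanatImages, graphdensity = 0.1 )
```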

Technical relationships with other methods

The algorithm is a regularized version of principal component analysis and follows the same path as several other works. A second, less often used analogy is factor analysis; I am no expert on the topic, but the idea of trying to reconstruct the original data from a few components (or factors) is very similar. Additionally, eanat uses regularization to steer the (enormous) solution space so that solutions fit our ideas of anatomy (if we so choose), or remain completely data-driven but still smooth (i.e. not just a collection of individual, unconnected noisy voxels).

There are also links between eanat/spca and independent component analysis, in particular if you whiten the data beforehand (icawhiten in ANTsR). One of our eanat implementations allows penalties on both $U$ and $V$ in the classic reconstruction-based formulation, i.e. minimizing $\| X - U V^T \|^2$ where $X$ is $n \times p$, $U$ is $n \times k$ and $V$ is $p \times k$. Penalties may take the form $\sum_i \| V_i \|_1$, $\sum_i \| V_i \|^+$ or $\sum_i \| G_\sigma \star V_i \|^+$, where $\| \cdot \|^+$ denotes the $\ell_1$ norm applied after setting all negative elements to zero and $G_\sigma \star$ indicates convolution with a Gaussian (or other) kernel. We provide some technical discussion and comparison with related methods here.
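
A conceptual sketch of the smoothed, non-negative sparseness operator described above, in R with ANTsR used for the image-space smoothing; this is an illustration of the idea, not the library's internal implementation, and the function name is hypothetical.

```r
smoothNonnegSparsen <- function( v, mask, sigma = 1.0, sparseness = 0.05 ) {
  v[ v < 0 ] <- 0                                      # keep non-negative weights only
  vimg <- smoothImage( makeImage( mask, v ), sigma )   # G_sigma * v in image space
  v    <- vimg[ mask > 0.5 ]                           # back to a vector over the mask
  keep <- ceiling( sparseness * length( v ) )          # retain the largest entries
  v[ rank( -v, ties.method = "first" ) > keep ] <- 0
  if ( sum( v^2 ) > 0 ) v <- v / sqrt( sum( v^2 ) )    # renormalize to unit length
  v
}
```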

Papers

We show that Eanat with DTI helps differentiate tau and TDP-43 variants of FTLD and identifies regions that are verified at pathology in this paper. Another paper showed that multi-modality Eanat improves classification accuracy over traditional ROIs in AD and FTD here. We also use prior-based Eanat to study MCI with resting-state fMRI and show that network measurements improve classification accuracy and the prediction of cognitive scores (delayed recall) here. Eigenanatomy is also useful for identifying anatomical networks that relate to cognitive deficits here.

A separate body of work focuses on computing decompositions of multiple modalities together via a related algorithm, sparse canonical correlation analysis; papers are here. This paper uses the covariation of cortical signal and fractional anisotropy from DTI to differentiate AD from FTD. We also relate anatomical networks to deficits in several different cognitive systems here. Corey McMillan et al. extended the approach to genomics here. Much more can be done with this tool than what we have explored so far (see sparseDecom2 in ANTsR).
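
A minimal sketch of a two-modality decomposition with sparseDecom2, assuming ANTsR; `thickMat`, `faMat` and the two masks are hypothetical inputs, and argument names may differ between versions.

```r
# thickMat, faMat: hypothetical subject-by-voxel matrices for two modalities
# (e.g. cortical thickness and FA), each with its own mask
ccaResult <- sparseDecom2( inmatrix   = list( thickMat, faMat ),
                           inmask     = c( corticalMask, wmMask ),
                           sparseness = c( 0.05, 0.05 ),
                           nvecs = 5, its = 5,
                           cthresh = c( 250, 250 ) )
# the paired sparse components can then be projected back onto each modality
# and related to diagnosis or cognition, as in the papers above
```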