ROI Papers

Useful Papers - Region-of-Interest (ROI) Analysis

Also check out SegmentationPapers (e.g., Yushkevich et al. (2006)) for automated ROI-generation methods.

Primary:

Nieto-Castanon et al. (2003), "Region of interest based analysis of functional imaging data," NeuroImage 19, 1303-1316 PDF

Summary: Arguing that standard voxelwise statistical methods provide no guarantees about the mapping between function and a particular brain (as opposed to voxel) location, Nieto-Castanon and colleagues propose a GLM-based statistical analysis that operates on signal from whole, anatomically specified ROIs, rather than individual voxels. They point out, crucially, that even in normalized brains there is little to no overlap between anatomically-marked ROIs across more than a couple of subjects.

Bottom line: If one point of brain imaging is associating function with particular anatomical locations, why aren't we analyzing data in terms of anatomical locations? Here's how it can be done in a reasoned statistical fashion.
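The core move of an ROI-level analysis - collapse all the voxels in an anatomical region to one summary time series, then fit the GLM once per region instead of once per voxel - can be sketched in a few lines. This is a toy illustration on synthetic data, not the authors' actual method; the boxcar regressor and data shapes are made up for the example.

```python
# Sketch of an ROI-level GLM on synthetic data (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels = 100, 50

# Hypothetical data: time x voxels-within-one-anatomical-ROI, plus a design matrix.
roi_data = rng.standard_normal((n_scans, n_voxels))
task = np.tile([1.0] * 10 + [0.0] * 10, 5)        # toy boxcar task regressor
X = np.column_stack([task, np.ones(n_scans)])      # task effect + intercept

# ROI analysis: one summary time series per region, one GLM fit per region.
roi_ts = roi_data.mean(axis=1)
beta, *_ = np.linalg.lstsq(X, roi_ts, rcond=None)
print(beta)  # a single task beta and intercept for the whole ROI
```

In a voxelwise analysis the `lstsq` fit would instead run on every column of `roi_data` separately, which is exactly the multiplicity the ROI approach avoids.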

Brett et al. (2002), "The problem of functional localization in the human brain," Nature Reviews Neuroscience 3, 243-249 PDF

Summary: A nice review of many of the problems plaguing any kind of regional labeling of functional activations. Brett et al. introduce a taxonomy of labels - cytoarchitectonic, macro-anatomic, etc. - and review the issues with Talairach space, Brodmann areas, the anatomical connection with function, etc. that are currently clouding the issue of how we should be labeling our activation sites. The connection to normalization is also highlighted.

Bottom line: A great overview of some outstanding issues in localizing activity, and almost as importantly, labeling it.

Supplementary:

Swallow et al. (2003), "Reliability of functional localization using fMRI," NeuroImage 20, 1561-1577 PDF

Summary: Some researchers have made persuasive arguments for using only functionally-defined ROIs in your analysis, but how should you best make them? Swallow et al. examine two key steps in functional ROI generation - normalization (i.e., can you define your ROIs after normalization?) and group averaging (i.e., can you define your ROIs at the group level and have them hold for the individual?). Short answers, respectively: normalization is fine, group averaging is not.

Bottom line: Functional ROIs are fine to define on normalized individual data, but not at the group level.

Follow-up:

Arthurs & Boniface (2003), "What aspect of the fMRI BOLD signal best reflects the underlying electrophysiology in human somatosensory cortex?," Clinical Neurophysiology 114, 1203-1209 PDF.

Summary: The authors correlate BOLD signal from ROIs activated by electrical nerve stimulation with ERPs from the same paradigm, and find that the BOLD signal from the peak voxel of a cluster correlates better with the electrophysiology than the average BOLD signal from the whole cluster does. They have some suggestions about why, including quoting the picturesque turn of phrase, "watering the garden for the sake of a single thirsty flower."

Bottom line: Averaging across a (small) ROI and taking the peak voxel are about the same, but peak voxels might correlate slightly better with the underlying activity.
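The two ROI summary measures being compared above - the cluster-average time series versus the peak voxel's time series - are easy to contrast on synthetic data. Everything here is made up for illustration (the "peak" is just the voxel given the highest SNR, picked crudely as the most variable voxel), so it shows the mechanics rather than reproducing the paper's result.

```python
# Sketch: cluster-average vs. peak-voxel ROI summaries on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n_scans, n_voxels = 120, 30
signal = np.sin(np.linspace(0, 6 * np.pi, n_scans))     # toy "true" activity

# Hypothetical cluster: every voxel carries the signal plus unit noise;
# voxel 0 is built to be the high-SNR "peak".
snr = rng.uniform(0.1, 0.5, n_voxels)
snr[0] = 2.0
data = signal[:, None] * snr + rng.standard_normal((n_scans, n_voxels))

peak_idx = np.argmax(data.std(axis=0))   # crude peak pick: most variable voxel
peak_ts = data[:, peak_idx]
mean_ts = data.mean(axis=1)

r_peak = np.corrcoef(signal, peak_ts)[0, 1]
r_mean = np.corrcoef(signal, mean_ts)[0, 1]
print(r_peak, r_mean)   # both track the signal; which wins depends on the SNR profile
```

Which summary correlates better depends on how concentrated the signal is - exactly the "single thirsty flower" question the paper raises.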

Friston et al. (2006), "A critique of functional localisers," NeuroImage 30, 1077-1087 DOI.

Summary: Generally argues for the use of factorial designs rather than functional localizer tasks, on the account that they do many of the same things and additionally allow specific tests of interactions. See FunctionalLocalization for a bit more on details of the argument. Also argues for using other measures of signal from the ROI than simple averaging - looking at first eigenvariates, for instance.
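The first-eigenvariate summary mentioned above can be sketched with a plain SVD: center the ROI's time-by-voxel matrix and take the leading left singular vector, i.e., the single time course explaining the most variance across the region. The data and the final scaling here are assumptions for illustration, not SPM's exact code.

```python
# Sketch: first eigenvariate of an ROI, via SVD on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
n_scans, n_voxels = 100, 40
shared = rng.standard_normal(n_scans)                     # common ROI signal
data = (shared[:, None] * rng.uniform(0.5, 1.5, n_voxels)
        + 0.3 * rng.standard_normal((n_scans, n_voxels)))

# Center each voxel, then take the first left singular vector: the one
# time course capturing the most variance across the ROI.
centered = data - data.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigvar = U[:, 0] * S[0] / np.sqrt(n_voxels)   # rough amplitude scaling (assumption)

r = abs(np.corrcoef(shared, eigvar)[0, 1])
print(round(r, 3))   # the eigenvariate tracks the shared signal closely
```

Unlike the simple mean, the eigenvariate weights voxels by how strongly they express the dominant signal, which is the advantage Friston et al. are pointing at.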

Saxe et al. (2006), "Divide and conquer: a defense of functional localizers," NeuroImage 30, 1088-1096 DOI.

Summary: Argues for the use of functional localizers (a response to Friston et al.). The two sides talk past each other a bit, but Saxe et al. also express some ambivalence about factorial designs because a) they may in fact add a bunch of fairly uninteresting comparisons (why add all the extra cells of the design if you don't care about them?) and b) they use the same data to identify regions as to estimate effects (as opposed to different datasets). Off the top of my head, the latter seems like a spurious criticism, given the orthogonality of the design...

Friston & Henson (2006), "Commentary on: Divide and conquer; a defense of functional localisers," NeuroImage 30, 1097-1099 DOI.

Summary: Dammit, British people spell it with an "s", not a "z"! Oh, there's more, too. They agree with the intuition that main-effect constraints do not, in fact, bias the estimation of other main effects or interactions. Localizer tests statistically correspond to split-half tests, which aren't as efficient as full likelihood-ratio tests (of the kind you get when everything's included in the same model). Also, Karl is pissed he got rejected from PLoS Biology.

Bottom line: If your questions of interest will support it (and you'll have useful theories about all the cells), a factorial design is usually better than a separate localizer scan. But separate localizers can be useful for well-characterized context-independent regions with known anatomical-functional mappings - like retinotopic areas of visual cortex, MT, etc. Even if you do a separate localizer, including it in your main model is probably justified and may allow better variance estimates.


CategoryPapers