Frequently Asked Questions - Contrasts
1. What's the difference between a T- and an F-contrast? When should I use each one?
Simply put, a T-contrast tests a single linear constraint on your model - something like "The effect size (parameter weight) for condition A is greater than that for condition B." T-contrasts can involve more than two parameters, but they can only ever test a single sort of proposition. So a T-contrast can test "The sum of parameters A and B is greater than that for parameters C and D," but not any sort of AND-ing or OR-ing of propositions.
An F-contrast, by contrast (ha!), is used to test whether any of several linear constraints is true. An F-contrast can be thought of as an OR statement containing several T-contrasts, such that if any of the T-contrasts that make it up are true, the F-contrast is true. So you could specify an F-contrast like "parameter A is different than B; parameter C is different than D; parameter E is different than F," and if any of those linear contrasts were significant, the F-contrast would be significant. The utility of the F-contrast is highest when you're just trying to detect areas with any sort of activation, and you don't have a clear idea as to the shape of the response. They were designed to be used with something like a Fourier basis set model, where you want to know if any combination of your cosine basis functions is significantly correlated with the brain activation. Testing that set with a T-contrast wouldn't be correct; it would tell you whether the sum of those basis functions' parameters was significant, which isn't what you'd want. Testing individually whether any of those parameters is significant, though, tells you something.
The disadvantage of the F-test is that it doesn't tell you anything about which parameters are driving the effect - that is, which of the linear constraints might be individually significant. It also doesn't tell you the direction of the effect; parameter A might be different than parameter B, but you don't know which one is greater. This isn't a problem if you're using a basis set where different parameters don't have much individual physiological meaning (such as a Fourier set), but oftentimes F-tests are followed up with t-tests to further isolate which parameters are driving the effect and what direction the effect is in.
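As a concrete sketch of the distinction, here's a toy GLM at a single voxel in numpy (the design matrix, conditions, and effect sizes are all invented for illustration). A T-contrast tests one linear constraint on the fitted parameters; an F-contrast stacks several constraints as rows and ORs them together:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy GLM at a single voxel: 4 conditions (A, B, C, D) plus a constant term.
n = 100
X = np.column_stack([rng.standard_normal((n, 4)), np.ones(n)])
true_beta = np.array([1.0, 0.2, 0.0, 0.0, 10.0])
y = X @ true_beta + rng.standard_normal(n)

beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
df = n - np.linalg.matrix_rank(X)
sigma2 = resid @ resid / df          # error variance estimate
XtX_inv = np.linalg.pinv(X.T @ X)

def t_stat(c):
    """T-contrast: one linear constraint c'beta, tested with sign."""
    return (c @ beta) / np.sqrt(sigma2 * (c @ XtX_inv @ c))

def f_stat(C):
    """F-contrast: several constraints stacked as rows of C (an OR of T-tests)."""
    Cb = C @ beta
    mid = np.linalg.inv(C @ XtX_inv @ C.T)
    return (Cb @ mid @ Cb) / (C.shape[0] * sigma2)

c_AgtB = np.array([1, -1, 0, 0, 0])      # "A > B"
C_any = np.vstack([c_AgtB,
                   [0, 0, 1, -1, 0]])    # "A != B OR C != D"

print(t_stat(c_AgtB), f_stat(C_any))
```

Note that for a single constraint the F-statistic is exactly the square of the T-statistic - the F-test just throws away the sign, which is the "no direction" point above.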
The Ward, Veltman & Hutton, and Friston papers on ContrastsPapers all describe the F-test and how it's used in pretty clear fashion, with specific examples.
2. What's a conjunction analysis? How do I do one?
An F-test allows you to OR together several linear constraints, but what if you want to AND them together? That is, what if you want to test if all of a set of several linear constraints are satisfied? For that, you need a conjunction analysis. There are several ways to perform them - see the Price & Friston paper on ContrastsPapers and those below it - but SPM provides a built-in way that is a good example. (Details of how to use SPM to do one are in the Veltman & Hutton paper there.) The idea is to find the intersection of all the sets of voxels that satisfy a given linear constraint in the set, a simple mathematical operation in itself. The tricky part is to figure out what threshold level to use on each individual linear constraint to give the conjunction (or intersection) an appropriate p-threshold. SPM makes the choice that the p-thresholds on each individual constraint simply multiply together, so a conjunction of two constraints that you wanted to threshold at 0.001 would mean thresholding each individual constraint at the square root of 0.001. The resulting field of t-statistics is called a "minimum T-field" - effectively you're thresholding the smallest T-statistic among the linear constraints at each voxel - and SPM allows corrected p-thresholds to be applied as well as uncorrected ones. These analyses are also available for F-contrasts, to AND together several OR statements.
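A minimal numeric sketch of that thresholding rule (the t-maps, the degrees of freedom, and the resulting t cutoff of 1.86 are all assumed values for illustration):

```python
import numpy as np

# Two t-statistic maps (one per linear constraint) over four toy voxels.
t_A = np.array([4.2, 1.1, 3.5, 0.3])
t_B = np.array([3.9, 3.8, 1.0, 0.2])

# SPM-style choice: individual p-thresholds multiply, so for an overall
# conjunction threshold p_conj, each of k constraints is thresholded at
# p_conj ** (1/k) - the square root of 0.001 for two constraints.
p_conj = 0.001
k = 2
p_each = p_conj ** (1 / k)   # ~0.0316

# Converting p_each to a t cutoff needs the t distribution's inverse
# survival function (e.g. scipy.stats.t.isf); here we just assume
# t_thresh ~ 1.86, the one-sided value for p ~ 0.0316 at df = 100.
t_thresh = 1.86

# Thresholding the minimum T-field: a voxel survives the conjunction
# iff its *smallest* t across the constraints exceeds the cutoff.
min_T = np.minimum(t_A, t_B)
conj_mask = min_T > t_thresh
print(conj_mask)
```

Only the first toy voxel survives here: it's the only one where both constraints clear the loosened per-constraint threshold.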
One problem that some critics of this approach have highlighted is that it means at a voxel called "active" in the conjunction, any individual constraint on it may hardly be significant at all. If you want to see the conjunction of contrasts A and B, you'd prefer not to see 'common activations' that have p-values far above a reasonable threshold when looked at in each individual contrast. Price & Friston have argued that the individual constraints don't matter much in conjunctions, but some people still prefer not to use the minimum T-field approach for this reason. In this case, you can conjoin constraints together simply by intersecting their thresholded statistic maps (with some care taken to make sure the contrasts are orthogonalized - see below), which can be done algebraically.
3. What does 'orthogonalizing' my contrast mean?
If you're testing a conjunction, one worry you might have is that the contrasts that make it up don't have independent distributions - that they are testing, to some degree, the same effect - and thus that the calculation of how significant the conjunction is will be biased. If you use SPM to make a conjunction analysis through the contrast manager, it will attempt to avoid this problem by orthogonalizing your contrasts - essentially, rendering them independent of one another. The computation involved is complicated - not just a simple check of whether the contrast vectors are linearly independent, although it's derived from that - but it can be thought of as follows:
Starting with the second contrast, check it against the first for independence; if the two are not orthogonal, remove all the effects of the first one from the second, creating a new, fully orthogonal contrast. Then check the third one against the second and the first, the fourth against the first three, and so on. SPM thus successively orthogonalizes the contrasts such that the conjunction is tested for correctly. See the help docs for spm_getSPM.m for more details.
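That successive scheme can be sketched as Gram-Schmidt over the contrast vectors. This is a simplification - spm_getSPM's actual computation works in the design space, as noted above - but it captures the idea:

```python
import numpy as np

def orthogonalize_successively(contrasts):
    """Gram-Schmidt-style sketch: each contrast has the components of all
    earlier contrasts projected out, so later tests carry no part of the
    effects already tested. (A simplified stand-in for what spm_getSPM
    does in design space.)"""
    ortho = []
    for c in contrasts:
        c = np.asarray(c, dtype=float)
        for prev in ortho:
            # remove the component of c lying along prev
            c = c - (c @ prev) / (prev @ prev) * prev
        ortho.append(c)
    return ortho

c1 = [1, -1, 0, 0]
c2 = [1, 0, -1, 0]        # shares the "condition 1" component with c1
o1, o2 = orthogonalize_successively([c1, c2])
print(o1, o2, o1 @ o2)    # the dot product is now ~0
```

The first contrast is left untouched; only the later ones change, which is why the order in which you enter contrasts matters in SPM's contrast manager.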
4. How do I do a multisubject conjunction analysis?
Friston et al. (ContrastsPapers) is a good paper to check out for this. They describe some ways of thinking about the SPM style of conjunction analysis, which is normally a fixed-effects analysis and hence valid only at the single-subject level, that allow its extension to a population-level inference. It's not clear that all the assumptions in that paper hold, though, so it's on a little shaky ground.
However, it's certainly possible at an algebraic level to intersect thresholded t-maps from several subjects, just as easily as it is from several constraints. So it may make sense to try the simple intersection method, using somewhat loosened thresholds on the individual level. I'm not super sure on all the math behind this, so you might want to talk to Sue Gabrieli about this sort of thing...
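Computationally, that simple intersection method is just a voxelwise AND of the subjects' thresholded maps. A sketch with made-up binary maps:

```python
import numpy as np

# Thresholded (binary) single-subject t-maps over four toy voxels, each
# thresholded at some loosened individual-subject level; True means the
# voxel passed threshold for that subject.
subj_maps = np.array([[True, True,  False, True ],
                      [True, False, False, True ],
                      [True, True,  True,  False]])

# The multisubject conjunction: a voxel survives only if it is active
# in every subject's map.
conjunction = subj_maps.all(axis=0)
print(conjunction)
```

Here only the first voxel is active in all three toy subjects, so only it survives the conjunction.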
5. What does the 'effects of interest' contrast image in SPM tell you?
Not an awful lot of interest, as it turns out. It's an image automatically created as the first contrast in an SPM analysis, and it consists of a giant F-contrast that tests to see whether any parameter corresponding to any condition is different from zero. In other words, if any of the columns of your design matrix (that aren't the block-effect columns) differ significantly from zero, either positively or negatively, at any voxel, that voxel will show up as significant in this F-image. Needless to say, it's not a very interpretable image for anyone who isn't using a very simple implicit-baseline design matrix. So generally, don't worry about it.
6. How is the intercept in the GLM represented in the analysis?
Every neuroimaging program accounts for the "whole-brain mean" somehow in its statistics, by which I mean whatever part of the signal does not vary at all with time. That time-invariant part can be represented in the design matrix explicitly as a column of all ones, and SPM automatically includes a column like that for each session in a given design matrix. (AFNI and BrainVoyager don't explicitly show this column in the design matrix, but they include it in their model in the same fashion.) During the model estimation, a parameter is fit at each voxel to this whole-experiment mean, just as for any other column of the design matrix, and its value represents the mean signal value around which the signal oscillates. This is the 'intercept' of the analysis - the starting value from which experimental manipulations cause deviations. This number is automatically saved at each voxel in SPM (in the beta images corresponding to the block effect columns) and can be saved in AFNI or BrainVoyager if desired.
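A toy sketch of how per-session constant columns carry the intercept (the session lengths, boxcar regressor, signal means, and noise level are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two sessions of 50 scans each: one task regressor per session, plus an
# explicit column of ones per session (the per-session constant terms).
n = 50
task = (np.arange(n) % 10 < 5).astype(float)   # toy on/off boxcar
zcol = np.zeros((n, 1))
ocol = np.ones((n, 1))
X = np.block([[task[:, None], zcol, ocol, zcol],
              [zcol, task[:, None], zcol, ocol]])

# Simulated voxel: mean signal 500 in session 1 and 510 in session 2,
# plus a task effect of 3 in both sessions, plus a little noise.
y = X @ np.array([3.0, 3.0, 500.0, 510.0]) + 0.1 * rng.standard_normal(2 * n)

beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # task betas recover ~3; constant betas recover ~500 and ~510
```

The last two betas are the 'intercepts' - the session means the signal oscillates around - which is why they're uninteresting for most contrasts but essential for the model to fit.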
7. How do I make contrasts for a deconvolution analysis? What sort of contrasts should I report?
Generally, deconvolution analyses of the sort implemented by AFNI's 3dDeconvolve work on a finite impulse response (FIR) model, in which each peristimulus timepoint for each condition, out to a cutoff timepoint, is represented by a separate column in the design matrix. In this case, a given 'condition' (or trial type) is represented in the matrix not by one column but by several. The readout of the parameter values across those peristimulus timepoints then gives you a nice peristimulus timecourse, but how do you evaluate that timecourse within the GLM statistical framework? There are a couple of ways; in general, the Ward paper (ContrastsPapers) is the best reference describing them.
A couple of obvious ones, though. First, an F-contrast containing a single constraint for each column of a given condition will test the 'omnibus' hypothesis for that condition - the hypothesis that some parameter is significantly different from zero somewhere in the peristimulus timecourse, or, more simply, that there was some brain signal correlated with the task at some point following the task onset. This test won't tell you what sort of activity it was, but it will point out areas that had activity of some kind going on.
Secondly, a variety of different T-contrasts could be used to test various hypotheses about the timecourse. You might test the difference between two conditions at the timepoint you think is the peak of the HRF. You might ask whether a single condition's HRF rose more sharply than it fell (a T-contrast within that condition's timecourse). Or you might use some sort of summing T-contrast to compare the 'area under the curve' between two different conditions.
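A sketch of what such contrasts look like, for an assumed FIR design with two conditions of 8 peristimulus timepoints each plus a constant column (the column layout, the peak timepoint, and the TR are all invented for illustration):

```python
import numpy as np

# Assumed design layout: 8 FIR timepoints for condition A (columns 0-7),
# 8 for condition B (columns 8-15), then a constant term (column 16).
n_fir, n_cols = 8, 17

# Omnibus F-contrast for condition A: one row per peristimulus timepoint,
# each row testing "this FIR parameter differs from zero".
F_A = np.zeros((n_fir, n_cols))
F_A[np.arange(n_fir), np.arange(n_fir)] = 1.0

# T-contrast comparing the two conditions at an assumed HRF peak
# (timepoint 3, ~6 s post-onset at an assumed TR of 2 s):
t_peak = np.zeros(n_cols)
t_peak[3], t_peak[8 + 3] = 1.0, -1.0

# Summing T-contrast comparing 'area under the curve' between conditions:
t_area = np.zeros(n_cols)
t_area[:8], t_area[8:16] = 1.0, -1.0

print(F_A.shape, t_peak, t_area)
```

Note that both T-contrast vectors sum to zero (they compare conditions rather than testing against baseline), and neither touches the constant column.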
There's not wide consensus at this point in the literature about exactly what sorts of statistics count as 'significant' activation - the difference between an HRF that rises sharply, spikes high, then falls quickly back to baseline and one that rises slowly, peaks only a little above baseline, but stays above baseline for a long time, isn't really clear at this point. No one is sure what such a difference represents exactly. This means, though, that there is a wealth of differences between timecourses that one could potentially explore. Almost any hypothesis can be made interesting with the right explanation, and fortunately almost any hypothesis can be tested in the GLM with the tools of T-tests, F-tests and conjunctions of constraints.