Realignment FAQ


1. What is realignment / motion correction?

In a perfect world, subjects would lie perfectly still in the scanner while you experimented on them. But, of course, in a perfect world, I'd be six foot ten with a killer JumpHook, and my car would have those hubcaps that spin independently of the wheels. Sadly, I only have three of my original Camry hubcaps, and subjects are too darned alive to hold perfectly still. When your subject moves in the scanner, no matter how much, a couple things happen:

  • Your voxels don't move with them. So the center of your voxel 10, 10, 10, say, used to be just on the front edge of your subject's brain, but now they've moved a millimeter backwards - and that same voxel is now sampling from outside the brain. If you don't correct for that, you're going to get a blurry-looking brain when you average your functional effect over time. Think of the scanner as a camera taking a really long exposure - if your subject moves during the exposure, she'll look blurry.
  • Their movement generates tiny inhomogeneities in the magnetic field. You've carefully prepared your magnetic field to be perfectly smooth everywhere in the scanner - so long as the subject is in a certain position. When she moves, she generates tiny "ripples" in the field that can cause transient artifacts. She also subtly changes the magnetic field for the rest of the experiment; if she gradually moves forward, for example, you may see the back of her brain get gradually brighter over the experiment as she changes the field and moves through it. If you don't account for that, it'll look like the back of her brain is getting gradually more and more active.

Realignment (also called motion correction - they're the same thing) mainly aims to help correct the first of these problems. Motion correction algorithms look across your images and try to "line up" each functional image with the one before it, so your voxels always sample the same location and you don't get blurring. The second problem is a little trickier - see the question below on including movement parameters in your design matrix, as well as the e-mails in RealignmentPapers for further discussion. This issue also comes up in correcting for physiological movement, so check out PhysiologyFaq as well.

2. How do the major programs' algorithms work? How do they perform relative to each other?

SPM, AFNI, BrainVoyager, AIR 3.0, and most other major programs all essentially use modifications of the same algorithm: minimization of a least-squares cost function. The algorithm attempts to find, for each image, the rigid-body movement parameters that minimize the voxel-by-voxel intensity difference from the reference image.

The particular implementation of the algorithm varies widely between programs, though. Ardekani et al. (below) present a detailed performance breakdown of SPM99 vs. AFNI98 vs. AIR 3.0 vs. TRU. AFNI is by far the fastest and also the most accurate at lower SNRs; SPM99, though slower, is the most accurate at higher SNRs. See below for more detail...
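To make the cost function concrete, here's a toy one-dimensional "realignment" in NumPy - a sketch of the idea only, not any package's actual implementation. It grid-searches the integer shift that minimizes the least-squares cost; real algorithms search all six rigid-body parameters with sub-voxel interpolation:

```python
import numpy as np

def least_squares_cost(moving, reference):
    """The voxel-by-voxel sum of squared intensity differences that
    realignment algorithms minimize."""
    return np.sum((moving - reference) ** 2)

def realign_1d(moving, reference, max_shift=5):
    """Toy 1-D realignment: grid-search the integer shift minimizing
    the least-squares cost. Illustrative only - real packages optimize
    six continuous rigid-body parameters."""
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        cost = least_squares_cost(np.roll(moving, s), reference)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

reference = np.zeros(50)
reference[20:30] = 1.0            # a simple "brain" profile
moving = np.roll(reference, 3)    # subject "moved" 3 voxels
print(realign_1d(moving, reference))  # -3: the shift that undoes the motion
```

The grid search also shows why the algorithm can get stuck: the cost surface can have local minima, which is exactly the "non-optimal solution" failure mode discussed in question 5.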

3. How much movement is too much movement?

Tough to give an exact answer, but Ardekani et al. find that SPM and AFNI can handle up to 10mm initial misalignment (summed across the x/y/z dimensions) without significant trouble. Movement in any single dimension greater than the size of a single voxel between two images is probably worth looking at, and several images with greater than one-voxel motion in one run is a good guideline for concern.
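These guidelines are easy to check against the translations your realignment program reports. Here's an illustrative NumPy sketch (the function name and the 3mm voxel-size default are ours, not any package's):

```python
import numpy as np

def flag_motion(params, voxel_size=3.0, summed_thresh=10.0):
    """Flag suspect scans from realignment translations.
    params: (n_scans, 3) array of x/y/z translations in mm.
    Returns indices of scans that moved more than one voxel in some
    axis since the previous scan, plus scans whose summed |x|+|y|+|z|
    displacement exceeds the ~10mm guideline above."""
    deltas = np.abs(np.diff(params, axis=0))
    jumped = np.flatnonzero((deltas > voxel_size).any(axis=1)) + 1
    drifted = np.flatnonzero(np.abs(params).sum(axis=1) > summed_thresh)
    return sorted(set(jumped.tolist()) | set(drifted.tolist()))

params = np.zeros((20, 3))
params[10] = [4.0, 0.0, 0.0]   # a 4mm jump at scan 10 (> one 3mm voxel)
print(flag_motion(params))     # [10, 11] - the jump in and the jump back
```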

4. How should you correct motion outliers? When should you just throw them out?

Attempting to correct extreme outliers is a tricky process. On the one hand, images with extremely distorted intensities due to motion can measurably distort your results. On the other hand, distinguishing intensity changes due to motion from task-related signal is by no means easy, and removing task-related signal can also measurably worsen your results (with respect to both Type I and Type II errors).

Our current thinking in the lab is that outlier correction should be attempted only when you can find isolated scans that show significantly distorted global intensities (several standard deviations away from the mean) within a TR or two of a significant head movement. A significant head movement without a global intensity change is probably handled best by the realignment process itself; a significant intensity change without head motion may have nothing to do with motion. The artdetect (Global Variate button) script is designed to do this for SPM data; see ArtifactDetection for step-by-step instructions, which may also be useful for other programs that have easy ways to view intensities and motion parameters.
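The logic of that check - an intensity spike that coincides with a movement - can be sketched in a few lines of NumPy. This is not the actual artdetect code, and the thresholds here are purely illustrative:

```python
import numpy as np

def intensity_outliers(global_means, motion, z_thresh=3.0,
                       motion_thresh=1.0, window=2):
    """Flag scans whose global mean intensity sits several SDs from
    the run mean AND fall within `window` TRs of a scan-to-scan
    movement bigger than motion_thresh mm. A sketch of the
    Global-Variate-style check described above."""
    z = (global_means - global_means.mean()) / global_means.std()
    intensity_bad = np.abs(z) > z_thresh
    moved = np.abs(np.diff(motion, axis=0)).sum(axis=1) > motion_thresh
    near_motion = np.zeros(len(global_means), dtype=bool)
    for t in np.flatnonzero(moved) + 1:
        near_motion[max(0, t - window):t + window + 1] = True
    return np.flatnonzero(intensity_bad & near_motion)

globals_ = np.ones(30); globals_[10] = 5.0          # intensity spike at scan 10
motion = np.zeros((30, 3)); motion[10] = [2, 0, 0]  # 2mm jump at scan 10
print(intensity_outliers(globals_, motion))         # [10]
```

Note that requiring both conditions implements the reasoning above: motion alone is realignment's job, and an intensity change alone may have nothing to do with motion.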

Another option is to simply censor (i.e., not use) the images identified as iffy; this is easier in AFNI than in SPM. This has the disadvantage of possibly distorting your trial balancing in a given session if whole trials are removed, as well as losing whatever task signal there may be in that scan. It has the advantage of being more statistically valid - outlier correction with interpolation obviously introduces new temporal correlation into your timeseries.

Several things might make a particular session entirely unusable: several isolated scans with head motion of greater than 10mm (summed across x/y/z); several scans with head motion in a single direction greater than the size of a single voxel; a run of several scans in a row with significant motion and significant intensity change; high correlation of your motion parameters with your task (see below). All subjects should be vetted for these problems before their results are taken seriously...

5. How can you tell if it's working? Not working?

Realignment in general is pretty robust; the least-squares algorithm will always produce some solution. It may, however, get caught in a non-optimal solution, particularly with scans that have a lot of motion and/or a big intensity change from the one before. It's difficult to evaluate realignment's effects post hoc just by looking at your results; the best way to make sure it's worked is visual inspection. SPM's "Check Reg" button will allow you to display up to 15 images at once, side-by-side with a crosshair placed identically in each, so you can make sure a given voxel location lines up across several images. You may want to look particularly at scans you've identified with significant head motion, as well as comparing the first and last images in your run...

6. Should I include movement parameters in my design matrix? Why or why not?

The e-mail threads on RealignmentPapers are the best examination of that issue from the people who know. In a nutshell, even after realignment, various effects like interpolation errors, changes in the shim from head movement, spin-history effects, etc. can induce motion-correlated intensity changes in your data. Including your motion parameters in your design matrix can do a very good job of removing these changes; including values derived from these parameters, such as their temporal derivatives or sines of their values (see Grootonk et al. at RealignmentPapers), can do an even better job.

The reason not to include these parameters is that there's a pretty good chance you also have actual task-related signal that happens to correlate with your motion parameters, and so removing all intensity changes correlated with motion may also significantly decrease your sensitivity to task-related signal. Before including these parameters in your matrix, you're probably wise to check how much your motion correlates with your task, to make sure you're not inadvertently removing the signal you want to detect.
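To make the trade-off concrete, here's a minimal NumPy sketch of a GLM design matrix with six motion parameters as nuisance covariates. In practice SPM or AFNI builds this for you (often with derivatives as well); the simulated data and effect sizes here are purely illustrative:

```python
import numpy as np

def build_design_matrix(task, motion_params):
    """Columns: the task regressor, six realignment parameters as
    nuisance covariates, and a constant term."""
    return np.column_stack([task, motion_params, np.ones(len(task))])

rng = np.random.default_rng(0)
task = np.tile([0.0, 0.0, 1.0, 1.0], 25)        # boxcar task, 100 scans
motion = rng.normal(scale=0.2, size=(100, 6))   # six motion parameters
# Simulated data: a task effect of 2, plus motion-related intensity
# changes, plus noise.
y = 2.0 * task + motion @ np.ones(6) + rng.normal(size=100)
X = build_design_matrix(task, motion)
beta = np.linalg.pinv(X) @ y                    # ordinary least squares
# beta[0] estimates the task effect (close to 2 here); the motion
# columns soak up motion-related variance instead of inflating the
# residual - but if motion correlated with the task, they would soak
# up task signal too.
```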

7. What is 'unwarping'? Why is it included as a realignment option in SPM2? And when should I use it?

The follow-up e-mail thread on RealignmentPapers is a good overview of the issue - thanks to Trey Hedden for bringing it up. Head motion can cause artifacts for a variety of reasons - gradual changes in the shim, changes in dropout, changes in slice interpolation, spin-history effects - but certainly one of the big ones is motion-by-susceptibility interactions. In areas typically vulnerable to susceptibility-induced artifacts - artifacts caused by magnetic field distortion at air-tissue interfaces - head motion can cause major changes in those susceptibility regions, and intensities around the interface can change in unpredictable ways as the head moves. The guys at SPM describe it as being like a funhouse mirror near the interface - there's usually some spot of blackout (signal dropout) where susceptibility is really bad, but right around it the image is distorted in weird ways, and sliding around can change those distortions unpredictably.

Motion-by-susceptibility interaction is certainly one of the biggest sources of motion-related artifact - some people think it's THE biggest - and the "unwarp" utility in SPM2 is an attempt to address it specifically. Even if you get the head lined up image-to-image (realignment), these effects will remain, so you can try to remove them additionally (unwarping). This is essentially a slightly less powerful but much more specific version of including your motion parameters in the design matrix - you'll avoid almost all the task-correlated-motion problems that go with including motion parameters as covariates, but (hopefully) get almost all the same good effects. The benefits will be particularly noticeable in high-susceptibility regions (amygdala, OFC, MTL).

One BIG caveat about unwarping, though - as it's currently implemented, I believe it's ONLY usable for EPI images, NOT for spiral. So if you use spiral images, you shouldn't use this. But if you use EPI, it can be worth a try, particularly if you're looking at high-susceptibility regions. Check the follow-up e-mail thread and paper for more info.

8. What's the best base image to realign to? Is there any difference between realigning to an image at the beginning of the run and one in the middle of the run?

Not a huge difference, if any. AFNI has a program (findmindiff, I think) that identifies the image in a particular series that has the least difference from all the other images, which would be the ideal one to use. In practice, though, there's probably no significant difference between using that and simply realigning to the first image of the run, unless you have very large (10mm+) movement over the course of the run, in which case the session is probably of questionable use as well...
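The minimum-difference idea is simple enough to sketch directly. This is a toy NumPy version of what such a utility does (we're not certain of the exact AFNI program name, as noted above), using the same least-squares cost as realignment itself:

```python
import numpy as np

def min_diff_image(series):
    """Return the index of the image whose summed squared difference
    from all other images in the series is smallest - the natural
    reference image to realign to."""
    costs = [sum(np.sum((a - b) ** 2) for b in series) for a in series]
    return int(np.argmin(costs))

# A "run" that drifts steadily: the middle image differs least from
# the rest, so it wins over the first image.
series = [np.full(8, v) for v in (0.0, 1.0, 2.0)]
print(min_diff_image(series))  # 1
```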

9. When is realigning a bad idea?

The trouble with the least-squares algorithm that realignment programs use is that it's easily fooled into thinking that all intensity differences between images are due to motion. If those differences are due to something else - task-related signal, or sudden global intensity changes - the procedure can come up with a bad realignment. If the realignment is particularly bad, it can completely obscure your signal, or (arguably worse) generate false activations! This is most pressing in the case of task-correlated motion (see below for discussion), but if you have significant global intensity shifts during your session that aren't motion-related, your realignment will probably introduce - rather than remove - error into your experiment. There are other realignment methods you can use to get around this, but they're slow. See Friere & Mangin on RealignmentPapers, and the CoregistrationFaq page.

10. What can I do about task-correlated motion? What's the problem with it?

See Bullmore et al., Field et al., and Friere & Mangin in RealignmentPapers for more details about this issue. The basic problem stems from the fact that head motion doesn't just rotate and shift the head in an intensity-invariant fashion. Head motion actually changes the image intensities, due to inhomogeneity in the magnetic field, changes in susceptibility, spin-history effects, etc. If your subject's head motions are highly correlated with your task onsets or offsets, it can be impossible to tell how much of a given intensity change is due to head motion and how much is due to actual brain activation. The effect is that task-correlated motion can induce significant false activations in your data. Including your motion parameters in your design matrix in this case, to try to account for these intensity changes, will hurt you the other way - you'll end up removing task-correlated signal as well as motion, and miss real activations.

The extent of the problem can be significant. Field et al., using a physical phantom (which has no brain activations), were able to generate realistic-looking clusters of "activation," sometimes of 100+ voxels, with head movements of less than 1mm, simply by making the phantom's movements increasingly correlated with their task design. Bullmore et al. point out that patient groups frequently differ from normals in how much task-correlated motion they exhibit, which can significantly bias between-group comparisons of activation.

Even worse, the least-squares algorithm commonly used to realign functional images is itself biased by brain activations, because it assumes intensity changes are due to motion and attempts to correct for them. As Friere & Mangin point out, even if there's no motion at all, the presence of significant activations can cause the realignment to report motion parameters that are correlated with the task!

So what can you do? First and foremost, you should always evaluate how correlated your subjects' motion is with your task - both the parameters themselves and linear combinations of them. (An F-test can do this - we'll have a script available for this in the lab shortly.) The correlation of your parameters with your task matters far more than the size of your motion in generating false activations: Field et al. demonstrated false activation clusters with correlations above r = 0.52. If your subject exhibits very high correlation, there's not much you can do - their data are probably not usable, or at least their results should be taken with a grain of salt. There are some techniques (see Birn et al., below) that may help distinguish real activations from motion, but they're not perfect...
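The core of that check - how well can the motion parameters, in linear combination, predict the task? - is a multiple correlation, which can be sketched in NumPy. This is not the lab script mentioned above, just an illustration of the quantity the F-test would evaluate:

```python
import numpy as np

def task_motion_R(task, motion_params):
    """Multiple correlation R between a task regressor and the motion
    parameters: regress the task on the parameters (plus a constant)
    and correlate the fitted values with the task."""
    X = np.column_stack([motion_params, np.ones(len(task))])
    beta, *_ = np.linalg.lstsq(X, task, rcond=None)
    return float(np.corrcoef(task, X @ beta)[0, 1])

rng = np.random.default_rng(1)
task = np.tile([0.0, 1.0], 50)            # alternating-block task
motion = rng.normal(size=(100, 6))        # incidental motion
bad = motion.copy()
bad[:, 0] += 2.0 * task                   # motion that tracks the task
print(task_motion_R(task, bad) > task_motion_R(task, motion))  # True
```

A subject whose R lands near or above Field et al.'s r = 0.52 danger zone is the case where the answer above says the data are probably not usable.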

Bullmore et al., below, report some ways to account for task-correlated motion that may be useful.

Even without any task-correlated motion, though, you should be aware that your motion parameters may be biased, as above, toward reporting a correlation. This is not usually a problem with relatively small activations; it may be bigger with very large signal changes. You can avoid the problem entirely by using a different realignment algorithm - based on mutual information, like the algorithms at CoregistrationFaq - but these are currently far too slow to be practical.

Among the usable algorithms, Morgan et al. reported SPM99 was the best at avoiding false-positive voxels... Keep an eye out for more robust algorithms coming in the future, though... And you may want to try one of the prospective motion-correction algorithms, as described in Ward et al. at RealignmentPapers.