Normalization FAQ

Frequently Asked Questions - Normalization

1. What is normalization?

Inconveniently, brains come in all different sizes and shapes. But the standard statistical algorithms all assume that a given voxel - say, the one at coordinates 10, 10, 10 - samples exactly the same anatomical spot in every subject. So you'd really like to be able to squash every subject's brain into exactly the same shape and exactly the same space in your images. That's normalization - it's the process by which pictures of subjects' brains are squashed, stretched and squeezed to look like they're about the same shape.

2. How does normalization work?

A lot like realignment. Normalization algorithms, just like realignment algorithms, search for transformations of the source image that will minimize the difference between it and the target image - transformations that, as much as possible, will make the source image 'look like' the target template. The difference is that realignment algorithms restrict themselves to rigid-body transformations - moving and turning the brain, but not changing its shape. Normalization algorithms allow nonlinear transformations as well - these actually change the shape of the brain, squeezing and stretching certain parts and not other parts, to make the source brain 'fit' the target brain. Different types of nonlinear transformations can be applied - some use sine/cosine basis functions, some use viscous fluid models or meshes - but all normalization can be thought of this way.
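To make the idea concrete, here's a toy Python sketch (numpy/scipy on synthetic data - not how SPM or any other real package actually does it) of normalization as cost-function minimization. It searches over translations only, but the skeleton is the same for fancier transforms: resample the source under candidate parameters and score the mismatch against the template.

    import numpy as np
    from scipy import ndimage, optimize

    rng = np.random.default_rng(0)
    # A smoothed random volume stands in for the template; the 'subject' is
    # the same volume shifted by a known amount.
    template = ndimage.gaussian_filter(rng.random((32, 32, 32)), sigma=3)
    source = ndimage.shift(template, shift=(2.0, -1.5, 1.0), order=1)

    def cost(shift):
        # Resample the source under the candidate shift, then score the
        # summed squared intensity difference against the template.
        moved = ndimage.shift(source, shift=shift, order=1)
        return np.sum((moved - template) ** 2)

    best = optimize.minimize(cost, x0=[0.0, 0.0, 0.0], method='Nelder-Mead')
    print("recovered shift:", best.x)  # should be near (-2.0, 1.5, -1.0)

Swap the three translation parameters for basis-function weights or a deformation field and you have the shape of a nonlinear normalization algorithm.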

An important point about normalization is that any algorithm, if allowed to make changes on a fine enough scale, can transform one brain into another exactly. Sometimes, though, that's not what you want - if you're interested in looking at differences of gray matter in children vs. adults, you'd like to normalize the general anatomy, but not at such a fine scale that you remove exactly the difference you're looking for! Other times, though, you'd love to match up every point in a subject's brain exactly with the identical point in another subject's brain. Care should still be taken, though - normalization algorithms can align structural anatomy precisely, but can't guarantee the subjects' functional anatomies will align perfectly.

3. Why would I want to normalize? What are the drawbacks and/or advantages?

The advantage is simple: brains aren't all the same size and shape. The simplest and most widespread method of statistical analysis of brain data is to look at each voxel across all your subjects and compare some measure at that voxel. For that method to be reasonable, equivalent voxels in each subject's images should match up with equivalent locations in each subject's brain. Since brain structures can be quite variable in size and shape, we need some way to 'line up' those structures. If you want to do any kind of voxel-based statistical analysis - not just of activation, but also of anatomy, as in voxel-based morphometry (VBM) - across a group, normalizing can largely remove a huge source of error from your data by removing variance between brain shapes.
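As a minimal sketch of what 'voxel-based' means once everything is in the same space, here's a per-voxel two-sample t-test in Python (numpy/scipy). The group sizes, image dimensions, and effect here are all made up:

    import numpy as np
    from scipy import stats

    n_subj, shape = 12, (32, 32, 32)
    rng = np.random.default_rng(1)
    # Stand-ins for normalized per-subject images (e.g., contrast images).
    group_a = rng.normal(0.0, 1.0, size=(n_subj, *shape))
    group_b = rng.normal(0.2, 1.0, size=(n_subj, *shape))  # slightly higher mean

    # One two-sample t-test per voxel. This is only meaningful if voxel
    # (i, j, k) samples the same anatomy in every subject - which is
    # exactly what normalization is supposed to buy you.
    t, p = stats.ttest_ind(group_a, group_b, axis=0)
    print("voxels with p < 0.001:", int((p < 0.001).sum()))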

The disadvantage is just as simple: Like any preprocessing, normalization isn't perfect. It generally requires interpolation, which introduces small errors into the images, and even with normalization, anatomies may not line up as well as you'd hope. It can also be slow - depending on the methods and programs used, normalizing a run of functional images can take hours or days. Still, to use voxel-based statistics, it's a necessary evil...
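The interpolation point is easy to demonstrate: resample an image and resample it back, and the round trip - which should be an identity - leaves a residual error. A quick synthetic illustration in Python:

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(2)
    vol = ndimage.gaussian_filter(rng.random((64, 64)), sigma=2)

    # Rotate 10 degrees and back; each resampling step interpolates.
    forth = ndimage.rotate(vol, angle=10, reshape=False, order=1)
    back = ndimage.rotate(forth, angle=-10, reshape=False, order=1)

    c = vol.shape[0] // 4
    center = (slice(c, -c), slice(c, -c))  # ignore edges, where padding dominates
    rms = np.sqrt(np.mean((back[center] - vol[center]) ** 2))
    print(f"RMS round-trip interpolation error: {rms:.4f}")  # nonzero, though small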

4. When is it unhelpful to normalize?

If you're running an analysis that's not voxel-based - say, one that's based on region-of-interest-specific timecourses - then normalization makes a lot less sense. An alternative to voxel-based methods is to compare some measure of activation in particular structures hand-drawn (or automatically drawn) on individual subjects' images. Since a method like this gets summary statistics out of each subject individually, without requiring that any statistical images be laid on top of each other, normalization is totally unnecessary. Some researchers choose to preprocess their data on two parallel paths, one with normalization and one without, using the non-normalized data for region-of-interest analysis and the normalized data for traditional voxel-based methods.
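For the flavor of it, here's a minimal ROI timecourse extraction in Python (numpy, with a synthetic 4-D run and mask; a real analysis would load images with something like nibabel and use an actual hand-drawn mask). You get one summary timecourse per subject, in native space, with no normalization required:

    import numpy as np

    rng = np.random.default_rng(3)
    func = rng.normal(size=(100, 32, 32, 32))      # (time, x, y, z) functional run
    roi_mask = np.zeros((32, 32, 32), dtype=bool)  # stand-in for a hand-drawn ROI
    roi_mask[14:18, 14:18, 14:18] = True

    # One summary number per timepoint: the mean signal across ROI voxels.
    timecourse = func[:, roi_mask].mean(axis=1)
    print(timecourse.shape)  # (100,)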

As well, several factors can make normalization difficult. Lesions, atrophy, different developmental stages, neurological disorders, and other problems can make standard normalization impossible. Some of these problems can be easily addressed (see Brett et al. in NormalizationPapers), and some can't be. Anyone using patient populations with significant neurological differences from their normalization templates would be well advised to explore the literature on normalizing patients before proceeding.

5. How important is it to align images to AC-PC before normalizing?

This varies between programs. For AFNI and BrainVoyager, it's pretty important. The nonlinear transformations can account for non-aligned images in theory, but if you start the images off in a non-aligned state, the algorithm is more likely to get caught in a local minimum of the search space and give you strange normalization parameters. If you aren't realigning before normalizing, it's best to examine the normalized brains afterwards to make sure that your normalization ran okay. SPM's normalization algorithm has a realignment phase built in that runs automatically before the nonlinear transformations are estimated, so doing realignment beforehand isn't necessary. It can't hurt, though, particularly when realigning runs of functional data, and it's still wise to examine the normalized image afterwards as a sanity check...

6. How important is it to make sure your segmentation is good before normalizing to the gray template?

Very important. The gray template contains, in theory, only gray-matter voxels. Normalization algorithms find their transformations by trying to minimize the voxel-by-voxel intensity differences between images, and white matter, CSF and gray matter all have notably different intensity profiles. So if you have leftover fringes of white matter or CSF, or occasional speckles of white matter mistakenly included in your gray-matter image, they'll be treated as error voxels even if they're in the right place. The algorithm may still converge to the best gray-matter solution, but you can greatly increase your chances of getting a good gray-matter normalization by making sure your segmentation is clean and only includes gray-matter voxels.
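Here's a rough sketch of the kind of cleanup that helps, in Python (scipy.ndimage, with a synthetic 'segmentation' and arbitrary thresholds - a sketch, not a recommended recipe): strip small speckles with a morphological opening and keep only the largest connected component.

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(4)
    # A smoothed random volume thresholded at an arbitrary cutoff stands in
    # for a crude gray-matter segmentation.
    gray_prob = ndimage.gaussian_filter(rng.random((32, 32, 32)), sigma=2)
    mask = gray_prob > 0.5

    opened = ndimage.binary_opening(mask)   # strip small isolated speckles
    labels, n = ndimage.label(opened)       # find connected components
    if n > 0:
        sizes = ndimage.sum(opened, labels, index=range(1, n + 1))
        mask_clean = labels == (np.argmax(sizes) + 1)  # keep the biggest blob
        print("voxels kept:", int(mask_clean.sum()), "of", int(mask.sum()))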

7. Should you use the inplane anatomy or the high-res anatomy to determine parameters?

There's not a perfect answer, but if you have a high-res anatomy, you should probably use it. In theory, the high-res anatomy should provide a better match, because it has more detail. However, if you have significant head movement between the high-res scan and the functionals, there will be an additional source of error in the high-res (even after realignment) that may not be there in the inplane if there's less movement between the inplane and the functionals. In general, though, the increased resolution of the high-res will probably provide better precision for your normalization parameters. In practice, the difference will probably be small, but every little bit helps...

8. When in my analysis stream should I normalize?

There are two obvious points when you can normalize - a) in the individual subject analysis, before you estimate your model / do your stats, or b) after you've done your stats and calculated contrast images for each subject, but before you do your group analysis. In case a), you'll normalize all your functional images (usually after estimating parameters from an anatomical image); in case b), you'll normalize only your contrast images (always after estimating parameters from an anatomical image). In general, the standard is a), but I'm not sure exactly why. One problem with b) might be that interpolation errors are introduced directly into your summary statistics, rather than into the functional images they're derived from. To the extent that contrast images are less smooth than functional images, this will tend to disadvantage b). As well, those interpolation errors are then going to be averaged over far fewer observations in b) - when you're combining only one contrast image for each person - than in a) - when you're often combining several hundred functional images for each person. Not sure whether this will make much difference, though... This is a test that should be run.
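Here's a toy simulation of that averaging argument (emphatically not the real test - it just models interpolation error as independent noise with an arbitrary magnitude): error added to a few hundred functional images largely averages out, while error added to a single contrast image does not.

    import numpy as np

    rng = np.random.default_rng(5)
    n_scans, err_sd = 300, 0.1  # arbitrary run length and error size

    # Case a: error enters each of many functional images, then averages out
    # (expected size shrinks roughly like err_sd / sqrt(n_scans)).
    case_a = rng.normal(0, err_sd, size=n_scans).mean()

    # Case b: the same-sized error lands in the one contrast image directly.
    case_b = rng.normal(0, err_sd)

    print(f"residual error, case a: {case_a:+.4f}  case b: {case_b:+.4f}")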

9. How can you tell how good your normalization is?

There are automated ways to quantify how close your normalization has gotten - see Salmond et al. (NormalizationPapers) for one - but, in general, the easiest way to go is just to compare the template image and your normalized image by looking at them side-by-side (or overlaid). Check to make sure that the gross structures and lobes line up reasonably well, and if you have any particular area of interest - hippocampus, V1, etc. - check those in detail to make sure they line up okay. If they don't, you may want to align your source image differently before normalizing, or try normalizing just your gray matter.
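If you want a single rough number to go with the eyeball check, the voxelwise correlation between your normalized image and the template is easy to compute - no substitute for looking at the overlay, but a quick sanity metric. A sketch in Python with synthetic volumes:

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(6)
    template = ndimage.gaussian_filter(rng.random((32, 32, 32)), sigma=3)
    # Stand-in for a well-normalized image: the template plus a little noise.
    normalized = template + rng.normal(0, 0.01, size=template.shape)

    # Correlate intensities across all voxels; low values flag a bad fit.
    r = np.corrcoef(template.ravel(), normalized.ravel())[0, 1]
    print(f"template/normalized correlation: r = {r:.3f}")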

10. What’s the difference between linear and nonlinear transformations?

Roughly, linear transformations are those that apply uniformly to the whole head: one global matrix covering translations, rotations, scaling, and shears (the affine transformations). The rigid-body transformations used in realignment - translations and rotations only - are the subset of these that don't affect the head's shape or the shape of anything inside it. Nonlinear transformations are any transformations that don't respect those constraints; these include any transformations that squeeze parts of the image, stretch parts of the image, or generally distort the shape of the head, with the distortion varying from place to place.
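In code, the distinction is that a linear transformation is one matrix applied identically to every voxel, while a nonlinear warp gives each voxel its own displacement. A small 2-D sketch in Python (scipy, with synthetic data and an arbitrary random warp field):

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(7)
    vol = ndimage.gaussian_filter(rng.random((32, 32)), sigma=2)

    # Linear: one 2x2 matrix (here a 10-degree rotation) for every voxel.
    theta = np.deg2rad(10)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    linear = ndimage.affine_transform(vol, rot, order=1)

    # Nonlinear: a smooth displacement field that differs voxel to voxel.
    coords = np.indices(vol.shape).astype(float)
    displacement = ndimage.gaussian_filter(
        rng.normal(0, 2, size=coords.shape), sigma=(0, 4, 4))
    nonlinear = ndimage.map_coordinates(vol, coords + displacement, order=1)
    print(linear.shape, nonlinear.shape)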

11. How do I normalize children’s brains? How about aging brains?

There is a monster literature out there on normalizing various non-standard brains, too large to survey easily; Wilke et al. is a good start for children and contains some good citations into that literature, and Brett et al. contains some similarly nice citations for aging brains (both on NormalizationPapers). Anyone who knows this literature well is invited to contribute links and/or citations...


CategoryFaq