Coregistration FAQ

1. What is coregistration?

Remember realignment? It's just like that. It's a way of correcting for motion between images. But coregistration focuses on correcting for motion between your anatomical scans and your functional scans. The slightly trickier thing is that your anatomical scans might be T1-weighted, while your functionals are T2-weighted. Or maybe you have an SPGR (spoiled GRASS) anatomical and PET functional images. Intensity-based motion correction algorithms kind of choke on those combinations. So coregistration aims for the same result as realignment - lining up two neuroimages - but uses different strategies to get there.

2. What are the different ways to coregister images?

These days there are a few, but one de facto standard: coregistration by mutual information (MI). In some ways, coregistration is always the same problem - it's just like realignment, except it can be between different modalities (PET, MRI, CAT, etc.), and it usually doesn't have to run as fast. The problem boils down to finding some function that measures the difference between your two images and then minimizing (or maximizing) it. The minimization/maximization machinery is pretty standard these days; the real question is what sort of cost function you use.

In realignment, we just used the sum of the squared differences in intensity between the images, measured voxel by voxel. The trouble with this scheme in coregistration is that your images might be different modalities, and hence a tissue type that's very dark in one (say, ventricle in PET) might be very bright in another (say, proton-density-weighted MRI). In that case, trying to minimize the intensity differences between the images will give you a horrible registration.
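
To make that concrete, here's a minimal sketch (not SPM's actual code) of what a sum-of-squared-differences cost looks like, assuming the two images have already been resampled onto the same voxel grid as NumPy arrays:

    import numpy as np

    def ssd_cost(img_a, img_b):
        # Sum of squared intensity differences, voxel by voxel.
        # Fine within one modality, but a perfectly aligned PET/MRI pair
        # can still have a huge SSD, so minimizing it misleads the fit.
        diff = img_a.astype(float) - img_b.astype(float)
        return np.sum(diff ** 2)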

So there are a couple strategies. SPM99 and older used templates within each modality that were already coregistered by hand with each other; that way, you could just realign your images to their modality-specific templates and automatically put them in register. These days, though, almost all automated coregistration schemes (including SPM2) use MI or some derivative of it.

3. What is mutual information, exactly?

Maes et al., below, give a pretty good summary, but here's the nutshell version. If two variables A and B are completely independent, their joint probability Pab (the probability that A comes up a at the same time that B comes up b) is just the product of their separate probabilities Pa and Pb; so Pab = Pa*Pb if A and B are independent. On the other hand, if A and B are completely dependent - that is, knowing what value A takes tells you exactly what value B will have - then the joint probability of each matching pair of values is just the probability of either one alone; so Pab = Pa for those pairs (and if the dependence runs both ways, Pab = Pa = Pb). If A and B are somewhat dependent, but not entirely, their joint probability lands somewhere in between those extremes - knowing what value A has tells you a little bit about B, so you can make a good informed guess at what B will be, but not know it exactly.

Mutual information is a way of measuring to what extent A and B are dependent on each other. Essentially, if you can estimate the true joint probability Pab for all a and b, and you know the individual probability distributions Pa and Pb, you can measure how far the joint distribution Pab is from the product Pa*Pb, using a Kullback-Leibler statistic that measures the distance between probability distributions. If Pab is very different from Pa*Pb, then you know A and B are dependent to some degree.
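
Written out with the same names as above (this is just the standard definition, not anything specific to one package), mutual information is the Kullback-Leibler distance between the joint distribution and the product of the marginals:

    I(A;B) = \sum_{a,b} P_{ab} \, \log\left( \frac{P_{ab}}{P_a P_b} \right)

It's zero exactly when Pab = Pa*Pb everywhere (independence) and grows as the joint distribution pulls away from that product.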

Alternatively, you can frame MI in terms of uncertainty: MI is the reduction in uncertainty about one variable that you get by looking at the other. If you're much more certain about B after looking at A, then A and B have high MI - they're quite dependent. If you don't know anything more about B after looking at A, then they have low MI and are pretty independent.
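
The same quantity in entropy terms (H below is Shannon entropy, i.e. how uncertain a variable is) is:

    I(A;B) = H(A) - H(A \mid B) = H(A) + H(B) - H(A,B)

so MI is literally how much your uncertainty about one variable drops once you've observed the other.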

4. So how does mutual information help coregistration?

MI coregistration methods work by considering the intensity in one image to be A and the intensity in the other image to be B. The algorithm computes the MI between those two variables - finds the mutual information between the intensity in one image and the intensity in the other - and then attempts to maximize it.

The idea is that, instead of using squared-intensity-difference methods, which assume that a bright voxel in one image must be bright in the other, you let the images themselves tell you how they're related. If looking at a bright voxel in one image tells you almost infallibly that the corresponding voxel in the other image is dark, then the images have very high MI, and they're probably close to registered. You can leave the relationship between intensities in the two modalities unspecified and let the algorithm figure out how they're related - it automatically maximizes whatever relationship they have. This makes MI ideal for coregistering a wide variety of medical images.
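
To make the recipe concrete, here's a minimal sketch in Python/NumPy - not the code SPM or any other package actually uses - of estimating MI from a joint intensity histogram and "registering" by trying a handful of integer shifts. The function names, the 32-bin histogram, and the shift-only search are illustrative assumptions; real coregistration optimizes full rigid-body transforms with interpolation and a proper optimizer.

    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        # Joint histogram of intensities -> estimate of Pab.
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        p_ab = joint / joint.sum()
        p_a = p_ab.sum(axis=1, keepdims=True)   # marginal for img_a
        p_b = p_ab.sum(axis=0, keepdims=True)   # marginal for img_b
        nz = p_ab > 0                           # skip empty bins (avoid log 0)
        return np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz]))

    def best_shift(fixed, moving, max_shift=5):
        # Toy search: slide 'moving' along one axis and keep the shift
        # with the highest MI; real packages search all six rigid-body
        # parameters instead of one integer translation.
        best, best_mi = 0, -np.inf
        for s in range(-max_shift, max_shift + 1):
            mi = mutual_information(fixed, np.roll(moving, s, axis=0))
            if mi > best_mi:
                best, best_mi = s, mi
        return best, best_mi

Calling best_shift(anatomical_array, functional_array) would then return whichever shift makes the two images' intensities most predictive of each other - which is exactly the "maximize MI" step described above.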

5. How are coregistration and segmentation related?

Fischl et al. (SegmentationPapers) make the point that the two processes are two sides of the same coin - each one can help solve the other. With a perfect coregistration algorithm, you could confidently line up a huge number of brains and create a very accurate probability atlas - giving you the best possible prior probabilities with which to do your segmentation. So to do a good segmentation, you need a good coregistration. But if you had a perfect segmentation, you could vastly improve your coregistration algorithm, because you could coregister each tissue type separately and greatly sharpen the edges in your image, which increases mutual information.

Fortunately, MI thus far appears to do a pretty good job with coregistration even in unsegmented images, breaking us out of a chicken-and-egg loop. But future research on each of these processes will probably include, to a greater and greater extent, the other process as well.

Check out SegmentationFaq for more info on segmentation...


CategoryFaq