Connectivity FAQ

1. What is functional connectivity? What is effective connectivity?

The concept of "brain connectivity" is, as Horwitz points out on ConnectivityPapers, rather a tricky one to define. Ideally, you'd like to be able to measure the spatial (and temporal) path that information follows, from one point to another, millisecond by millisecond, and neuron to neuron (or at least region to region), in a directed fashion, such that you could say, "Ah, yes, activation starts in V1, moves to V2, gets shuttled from there to these other three visual areas and parietal cortex, and from parietal there to this other bit." Then you'd know something about what was being calculated and what calculations were being done where (and when). But, of course, you can't do that (yet). In fact, most neuroscience recording methods, be they single-cell recording or fMRI or anything else, deal with isolated units of analysis - single cells or single voxels. You can't, in general, measure one neuron's connection to another very well in a living, behaving animal, much less noninvasively in a person.

What you can do is sample several sites at once and try to see how the patterns of activity you get are connected to each other. An obvious pattern to look for would be if two sites/voxels have intensities that are highly correlated. If the timeseries from one voxel looks exactly like the timeseries from another voxel, it might be a good bet they're doing similar things. If they're right next to each other, you call it a cluster; if they're far away from each other - say, in visual cortex and PFC - you might guess they're connected to each other somehow.

Trouble is, of course, you run into the old adage that correlation doesn't imply causation. High correlation between remote sampling sites might imply some direct connection, or it might imply some third site driving their joint activation, or it might imply them jointly driving some third site. And even if they are connected, it's difficult to tell the direction of the connection, even if there is a "direction" to it.

Hence the two different terms used to describe connectivity in neuroimaging, a split introduced by Friston in 1993. Functional connectivity is the correlation concept - it's a descriptive concept, simply defined as the temporal correlation between remote timeseries or samples or what have you. Finding functional connectivity essentially reduces, as Lee et al. point out, to finding whether activity in two regions shares any mutual information or not. Effective connectivity, by contrast, is the causation concept. It's defined as "the influence one neural system exerts over another either directly or indirectly." It doesn't imply a direct physical connection - simply a causative influence. It's a concept meant to support explanation and inference, more than just description, and it requires some account of causative direction (or why there isn't any). It's also generally a lot trickier to figure out than functional connectivity. You'll hear both terms tossed around a fair amount, but remember: functional is simply correlation, whereas effective requires causation somewhere.

2. How do I measure connectivity in the brain?

Good question. Almost every method for functional neuroimaging has ways to measure connectivity, and almost all of them boil down to the same concept: measuring the connection between timeseries at different points in the brain. In other words, almost every method out there measures functional connectivity, rather than directly measuring effective connectivity. Whether you're doing EEG and correlating timeseries from different electrodes, or using the fanciest dynamic causal modeling mathematical strategy with fast-TR fMRI, you're restricted generally to the data you can measure, which are samples from voxels that are treated independently. You can rule out some possible directions of influence by rules like temporal precedence (if a spike in one area precedes one from another area, the latter area can't have caused the earlier spike), but in general, most connectivity measures work on this simple foundation: Sample timeseries of activity from many different areas (voxels, electrodes, etc.) and then mathematically derive some measure of the mutual information between selected timeseries.
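
To make that foundation concrete, here's a minimal sketch of the simplest possible version - seed-based correlation - in Python with numpy. Everything here (the data, the choice of seed) is a made-up placeholder, just to show the shape of the computation:

    import numpy as np

    # Hypothetical data: 1000 voxels x 200 timepoints (random, for illustration only)
    rng = np.random.default_rng(0)
    brain_data = rng.standard_normal((1000, 200))
    seed_ts = brain_data[0]            # pretend voxel 0 is our seed

    # z-score every timeseries; Pearson r is then just a dot product / T
    z = (brain_data - brain_data.mean(axis=1, keepdims=True)) / brain_data.std(axis=1, keepdims=True)
    seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    corr_map = z @ seed_z / seed_ts.size    # r of every voxel with the seed

    # high |r| = candidate functional connectivity with the seed
    print(corr_map.max(), corr_map.argmax())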

There are a couple of obvious exceptions to this foundation. Measuring anatomical connectivity is a different type of procedure, and it's not clear how much anatomical connectivity (as we can measure it) and functional connectivity have to do with each other - or should have to do with each other in analysis. Lee et al. (ConnectivityPapers) have an intriguing discussion on this point (and many others). At some level, of course, if we're interested in finding out whether information is flowing from one neuron to another, it's useful to know if they're directly connected or not. But we're a long way off from those sorts of measures on a large scale, and it's not clear that coarser measures are all that useful in learning about functional connectivity. Anatomical connectivity on its own can be incredibly interesting, though, which is why diffusion tensor imaging (DTI) is becoming increasingly popular as an imaging modality. The idea of DTI is that it can extract a measure of the directionality of the white-matter tracts in a given voxel, giving you a picture of where white matter is pointing in the brain. This can be used to infer which areas are strongly connected to each other and which less so. And, of course, many older techniques for measuring connectivity in animals - staining, tracing, etc. - are still widely used.
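
For flavor: that per-voxel "measure of directionality" comes from the eigensystem of the diffusion tensor, and fractional anisotropy (FA) is the standard scalar summary of how directional the diffusion is. A minimal sketch, with a made-up tensor:

    import numpy as np

    # D: a made-up 3x3 diffusion tensor for one voxel (symmetric, positive-definite)
    D = np.array([[1.7, 0.1, 0.0],
                  [0.1, 0.4, 0.0],
                  [0.0, 0.0, 0.3]])   # units ~ 10^-3 mm^2/s

    evals, evecs = np.linalg.eigh(D)

    # fractional anisotropy: 0 = isotropic diffusion, near 1 = strongly directional
    fa = np.sqrt(1.5 * np.sum((evals - evals.mean())**2) / np.sum(evals**2))

    # the principal eigenvector is the dominant fiber direction in this voxel
    principal_dir = evecs[:, np.argmax(evals)]
    print(f"FA = {fa:.2f}, direction = {principal_dir.round(2)}")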

The other big exception to the rule of measuring correlation is techniques that can directly measure causality by disrupting some part of the system. If you can knock out part of the system and cause a part hypothesized to depend on it to fail, while knocking out the latter part doesn't affect the former, you can start to make some inferences about directionality of influence. In living humans, the latest way to do this is with transcranial magnetic stimulation (TMS), which seems to offer some ways to disrupt selected cortical areas temporarily, reversibly and on command. Although the technique is relatively new, it holds high promise as an additional tool in the connectivity toolbox. Other methods for disruption - cortical cooling, induced lesions in animals, even lesion case studies in humans - can provide valuable information on this front as well.

3. What are the different methods to analyze connectivity in fMRI? How do they differ from each other?

In a field burdened with a heavy load of meaningless acronyms and technical jargon, connectivity analyses stand out as a particular offender. There's what seems like a dizzying array of ways to model connectivity in fMRI, each with its own acronym, fancy-sounding concepts, and a great number of equations underlying it. The important thing to remember is that the data input is essentially the same for all of them: it's just timeseries data. And the underlying computations are all essentially doing the same thing - looking for patterns in the data that are similar between regions. Some methods literally attempt to do the same thing, but for the most part, the different methods proliferate because they examine slightly different aspects of connectivity. So the important things to think about when faced with interpreting or performing any connectivity analysis are its goals: what is the point of this analysis? What does its output measure? What alternative explanations has this analysis ruled out, or failed to rule out?

There's a broad distinction you can make in connectivity analyses between model-driven and non-model-driven analyses. Non-model-driven analyses are those that don't "know" anything about the details of your experiment - some of them are called "blind" algorithms because they search for patterns in your data without knowing what the structure of your experiment was. These types of analyses can be used to get at activation in general, but they're probably more widely used in connectivity analyses. Principal components analysis (PCA) and independent component analysis (ICA) are non-model-driven types of analyses. I won't talk much about them here yet, because I don't know much about them right now. Anyone else out there, feel free to contribute some info...
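
For a taste of what "non-model-driven" means in practice, though, here's a minimal PCA sketch in Python/numpy. Notice that the design matrix never appears anywhere - the components come purely from structure in the data (which is random here, purely for illustration):

    import numpy as np

    # Purely illustrative data: 500 voxels x 200 timepoints of noise
    rng = np.random.default_rng(1)
    data = rng.standard_normal((500, 200))

    # Center each voxel's timeseries, then SVD - no design matrix anywhere
    centered = data - data.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)

    spatial_maps = U[:, :5]    # first five components: where each pattern lives
    time_courses = Vt[:5]      # ...and how each pattern evolves over time
    var_explained = s[:5]**2 / np.sum(s**2)
    print(var_explained)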

The popular forms of model-driven analysis are those embedded into the popular neuroimaging programs, and SPM2 has recently added a couple to the field which are getting wide use. Psychophysiological interactions (PPIs) have been used in SPM and other programs for a while, but SPM2 has automated this analysis to make it a lot easier to perform. PPIs start with a seed ROI and look for other regions whose connection strength to the seed changes as the experiment proceeds. Dynamic causal modeling (DCM) is Karl Friston's latest addition to the modeling tradition, and it's a much more all-encompassing form of connectivity analysis; he claims it subsumes all earlier forms of analysis, as well as the standard general linear model activation analysis. DCM starts with a set of ROIs and attempts to determine the influence of each on the others and the experiment's influence on the connection strengths. BrainVoyager is soon to release a connectivity package based on the concepts of autoregressive modeling and Granger causality - a way of ruling out some directions of causality. This isn't out yet, due to a patent dispute, so I don't know much about it. Structural equation modeling (SEM) is used when you have a set of ROIs you'd like to investigate but aren't sure what the links between them may be; it's a way of searching among the possible graphs that connect your ROIs, ruling out some connections and including others. Which of these you use will depend on your experimental goals and what, exactly, you want the analysis to show.

4. What is a psychophysiological interaction (PPI) analysis? How do I do it? Why would I want to?

A PPI analysis starts with an ROI and a design matrix. It's a way of searching among all other voxels in the brain (outside the seed ROI) for regions that are highly connected to that seed. One of the most straightforward ways of doing a connectivity analysis would be to start with one ROI and simply measure the correlation of all other voxels in the brain with that ROI's timeseries, looking for high correlation values. As Friston and others pointed out a while ago, though, it's not quite as interesting if the correlation between two regions is totally static across the experiment - or if it's driven by the fact that they're both totally non-active during rest conditions, say. What might be more interesting is if the connection strength between a voxel and your seed ROI varied with the experiment - i.e., there was a much tighter connection during condition A between these regions than there was during condition B. That may tell you something about how connectivity influences your actual task (and vice versa).

PPIs are relatively simple to perform: you extract the timeseries from a seed voxel or ROI and multiply it, element by element, with a vector representing a contrast in your design matrix (say, A vs. B). You then put this new PPI regressor into a general linear model analysis, along with the seed timeseries itself and the vector representing your contrast; those two soak up the variance from the main effects, which you'll ignore in favor of the PPI interaction term. When you estimate the parameters of this new GLM, the voxels where the PPI regressor has a very high parameter are those that showed a significant change in connectivity with your experimental manipulation.
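
Here's a minimal sketch of that recipe in Python/numpy, on toy data. This is the simple multiply-at-the-BOLD-level version, without the SPM2 deconvolution wrinkle mentioned below; all the variables are made up for illustration:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    seed_ts = rng.standard_normal(n)        # stand-in for the extracted seed timeseries
    psych = np.where(np.arange(n) % 40 < 20, 1.0, -1.0)   # contrast: condition A = +1, B = -1

    ppi = seed_ts * psych                   # the interaction regressor

    # GLM per voxel: [PPI, seed (physiological), contrast (psychological), intercept]
    X = np.column_stack([ppi, seed_ts, psych, np.ones(n)])
    voxel_ts = rng.standard_normal(n)       # one made-up voxel to test
    betas, _, _, _ = np.linalg.lstsq(X, voxel_ts, rcond=None)
    print(f"PPI beta = {betas[0]:.3f}")     # big |beta| = connectivity changed with condition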

This is do-able in SPM99, or indeed any program; SPM2 makes it more automated, and adds some mathematical wrinkles, like deconvolving the HRF from your PPI regressor so as to look for interactions at the deconvolved (and hopefully neural) level, rather than at the HRF level.

PPIs are good to do if you have one ROI of interest and want to see what's connected with it. They're tricky to interpret, and they can take a really long time to re-estimate if you have several ROIs to explore and many subjects.

5. What is structural equation modeling (SEM)? How do I do it? Why would I want to?

Structural equation modeling analyses begin with a set of ROIs and nothing else. The idea in SEM is to estimate the connection strengths between those ROIs that make up the best possible model of the connections between them. The connection strengths are correlational (not directional), representing the degree of covariance between the timeseries of those regions. This strategy (and variants of it) also falls under the title "path analysis," although that's a broader term that can describe analyses of non-timeseries data. SEM procedures can vary, but they're all kind of like the GLM: they search through the space of possible connection strengths until they find the set that best fits the data.

The measure of "best fit" is an important choice in SEM, and there's not wide agreement on which measure you should use, beyond a common suggestion that you use more than one and combine their results. Other bells and whistles on SEM analyses can include bootstrapping the data (see PthresholdFaq for information on permutation tests and bootstrapping) to get a confidence interval on how good the model could possibly be (Bullmore et al. (2000), NeuroImage 11, describe this strategy).
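
Here's a toy sketch of the basic machinery in Python/numpy/scipy, assuming a hypothesized A -> B -> C path structure (all data and names made up): compute the covariance the path model implies, then tune the path strengths to minimize the standard maximum-likelihood discrepancy between implied and observed covariances. Real SEM packages do essentially this, plus the fit measures just discussed:

    import numpy as np
    from scipy.optimize import minimize

    # Made-up ROI timeseries (200 timepoints x 3 ROIs) with A -> B -> C built in
    rng = np.random.default_rng(3)
    ts = rng.standard_normal((200, 3))
    ts[:, 1] += 0.6 * ts[:, 0]
    ts[:, 2] += 0.5 * ts[:, 1]
    S = np.cov(ts, rowvar=False)            # observed covariance
    p = 3

    def implied_cov(theta):
        b_ab, b_bc = theta[:2]              # hypothesized paths A->B and B->C
        psi = np.diag(np.exp(theta[2:]))    # residual variances, kept positive
        B = np.zeros((3, 3))
        B[1, 0], B[2, 1] = b_ab, b_bc
        inv = np.linalg.inv(np.eye(3) - B)
        return inv @ psi @ inv.T            # covariance the path model implies

    def ml_fit(theta):                      # standard maximum-likelihood discrepancy
        sigma = implied_cov(theta)
        return (np.log(np.linalg.det(sigma))
                + np.trace(S @ np.linalg.inv(sigma))
                - np.log(np.linalg.det(S)) - p)

    result = minimize(ml_fit, x0=np.zeros(5))
    print("estimated path strengths A->B, B->C:", result.x[:2].round(2))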

SEM isn't built in to any of the major neuroimaging programs that I know of, but several statistics packages support it (it's widely used in the social sciences as well as neuroscience).

SEM is good to do when you have a set of ROIs - either functional or anatomical - and you're interested in knowing how strong the connections are between them (or whether connections between a particular pair exist at all) across the whole experiment (or part of it). It's a pretty straightforward style of analysis, but because of that, it doesn't take into account a lot of the details of fMRI - temporal variations in the connection strengths, for example.

6. What is Granger causality? How does it relate to brain connectivity?

Granger causality is a concept imported from economics, where it was developed to do timeseries modeling of economic data (weird that that's the kind of data economists would want to look at - economic data, you know. Strange guys, those economists). It's an attempt to impose some directionality on connections between timeseries, or at least rule out some directions, by leveraging the rule of temporal precedence. The core of the Granger causality idea is that events can't cause events that already happened - so if a particular pattern happens in one timeseries, and then happens later in another timeseries, the latter one can't have caused the former one. Granger causation is a very limited form of causality, because it doesn't rule out the possibility that some third factor has induced the change in both of the timeseries, or any of the other problems common to ascribing causality to correlation data, but it's a start in the direction of blocking off certain directions of arrows.
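
In practice, the generic Granger test (not the BrainVoyager package specifically) is autoregressive: predict y from its own past, then ask whether adding x's past significantly improves the prediction. A minimal sketch in Python/numpy/scipy, on made-up data:

    import numpy as np
    from scipy.stats import f

    # Made-up pair of timeseries in which y follows x by one timepoint
    rng = np.random.default_rng(4)
    n = 200
    x = rng.standard_normal(n)
    y = np.zeros(n)
    y[1:] = 0.7 * x[:-1] + 0.3 * rng.standard_normal(n - 1)

    def rss(X, target):                     # residual sum of squares from least squares
        beta, _, _, _ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ beta
        return resid @ resid

    target = y[1:]
    ones = np.ones(n - 1)
    restricted = np.column_stack([ones, y[:-1]])          # predict y from its own past
    full = np.column_stack([ones, y[:-1], x[:-1]])        # ...plus x's past

    # F-test: does x's past improve the prediction of y beyond y's own past?
    rss_r, rss_f = rss(restricted, target), rss(full, target)
    df = (n - 1) - full.shape[1]
    F = (rss_r - rss_f) / (rss_f / df)
    print(f"F = {F:.1f}, p = {f.sf(F, 1, df):.2g}")       # large F: x "Granger-causes" y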

The most explicit use of Granger causality has been in the connectivity package being developed for BrainVoyager, detailed in Goebel et al. (ConnectivityPapers). The package is also based on vector autoregressive modeling - roughly, modeling each timeseries as a weighted sum of the recent past of all the timeseries, which is the standard framework in which Granger causality is tested. Unfortunately, the package has been held up in patent disputes, so it's not clear when we'll get to evaluate it up front. I'm not aware of other packages or programs currently using these methods to evaluate connectivity.

7. What is Dynamic Causal Modeling (DCM)? How do I do it? Why would I want to?

Well, this is another one that I'm undoubtedly going to botch the explanation for. But I'll take a very limited stab at it. If you're interested in this analysis, I highly recommend reading Friston et al.'s paper on it at ConnectivityPapers.

DCM analyses are highly model-driven. You start with a set of ROIs and a guess at how they're connected with each other. That guess can be "fully connected," with every ROI attached to every other, or you can eliminate some connections off the bat. DCM then takes as its input your design matrix and the timeseries from those regions, and attempts a sort of hyper-advanced general linear model estimation. Instead of a general linear model, though, DCM explicitly considers some nonlinear aspects to the experiment: specifically, the connections between your ROIs and how they might change with the experimental manipulation. It goes through a huge set of Bayesian estimations and deconvolutions and every other fancy thing you can think of, and what you get on the way out is a big set of parameters. That set will include: HRFs for each of your regions, "resting" connection strengths between each of your regions, beta weights describing how the experiment affected each of your regions (just like regular beta weights), and "connection beta weights," indicating how the experimental manipulation affected your connection strengths. It'll also spit out some estimation of the statistical significance of each of these.
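
To make those "nonlinear aspects" slightly more concrete: the neural model at the heart of DCM (per Friston et al.'s paper) is a bilinear differential equation, dz/dt = (A + sum_j u_j B_j) z + C u, where z is the activity in your ROIs, u is the experimental input, A holds the resting connections, the B matrices hold the input-induced changes in those connections, and C holds the direct inputs. DCM estimates all of those from your data (along with an HRF model per region); the sketch below just simulates the forward neural model with made-up values, to show what the parameters mean:

    import numpy as np

    # Two regions, one experimental input u(t); all parameter values made up
    A = np.array([[-1.0, 0.0],
                  [ 0.4, -1.0]])   # resting connections: region 1 drives region 2
    B = np.array([[ 0.0, 0.0],
                  [ 0.5, 0.0]])    # input strengthens the 1 -> 2 connection
    C = np.array([1.0, 0.0])       # input drives region 1 directly

    dt, n_steps = 0.01, 2000
    u = lambda t: 1.0 if (t % 10.0) < 5.0 else 0.0   # boxcar experimental input
    z = np.zeros(2)

    trace = np.zeros((n_steps, 2))
    for step in range(n_steps):    # crude Euler integration of dz/dt
        t = step * dt
        z = z + dt * ((A + u(t) * B) @ z + C * u(t))
        trace[step] = z
    print(trace[-1].round(3))      # neural activity; DCM convolves this with an HRF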

Friston et al. are hyped on this analysis; they believe that all the other analyses out there (SEM, PPI, etc.) are special cases of DCM. Even the standard general linear model analysis of activation is a special case, they say, in which you assume there are no connections between ROIs and your ROIs are your voxels. A few papers using DCM in big analyses have been put out thus far - Mechelli et al. (ConnectivityPapers) is one - with fairly promising results.

DCM is built into SPM2, and requires you to have SPM2 results to use it. It's not available yet for any other neuroimaging program.

DCM is great if you've got a set of ROIs, a hypothesis about how they might work together, and a particular interest in how some areas or conditions might influence the connections between other areas. Mechelli et al. (ConnectivityPapers) is a good example of this - they looked at whether differences in visual activations due to the category of the stimuli were mediated from the bottom up or from the top down. It's also kind of insanely complicated right now, and clearly in a sort of feeling-out phase in the community. Results may be difficult to interpret. But it's definitely the cutting edge of fMRI connectivity research for model-driven analyses right now.

8. How do I measure connectivity across a group?

Almost all of these methods measure correlations between timeseries, and so they're only appropriate to do at the individual level. The best way to run a group analysis is in the standard hierarchical fashion - take the output of the individual analysis and toss it into a group analysis. The output from all of them won't be the same - for PPIs, for example, you'll get an activation image, which works in SPM for a standard group-level analysis, whereas for SEM you'll get a set of connection weights, which you can then run a standard statistical test on in SPSS - but the hierarchical approach should work fine in general.
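
For instance, with SEM output, the group test can be as simple as a one-sample t-test on each connection weight across subjects. A minimal sketch in Python/scipy (the weights below are made-up numbers; SPSS or any stats package does the same thing):

    import numpy as np
    from scipy.stats import ttest_1samp

    # Hypothetical: one A -> B path weight per subject, from eight individual SEM fits
    weights = np.array([0.42, 0.55, 0.31, 0.60, 0.48, 0.39, 0.52, 0.45])

    # Is the connection reliably nonzero across the group?
    t, p = ttest_1samp(weights, popmean=0.0)
    print(f"t = {t:.2f}, p = {p:.4f}")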


CategoryFaq