Some of the researchers under fire from the recent ‘Voodoo Correlations in Social Neuroscience’ article have responded to the accusations of misleading data analysis by suggesting that the accusers have misunderstood the finer points of brain imaging, leading them to falsely infer errors where none exist.
In an academic reply, available online as a pdf, and in an article on the controversy published in this week’s Nature, some of the researchers responsible for the ‘red list’ studies set out their case.
As you might expect, the responses make fairly technical points about statistical analysis in neuroimaging research, but they are generally well made: the accusers don't fully grasp which measures are related or unrelated; they don't account for tests that reduce spurious findings; and they didn't ask in sufficient detail about the methods used, so their analysis rests on incomplete information.
However, one in particular seems a little hopeful and relates to a central point made by Vul and his colleagues.
Vul suggested that the correlations shouldn't exceed the maximum set by the reliability of the two measures. As we discussed previously, if you have two measures that are each 90% reliable (that is, 90% of the variation in the scores is consistent signal rather than noise), you wouldn't expect correlations higher than .9, because the remaining 10% of each measurement is random noise that can't correlate with anything.
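The arithmetic here is the classical test theory attenuation bound: the observed correlation between two noisy measures can't exceed the square root of the product of their reliabilities. A minimal sketch (the function name is mine, not Vul's):

```python
import math

def max_expected_correlation(rel_x, rel_y):
    """Classical test theory upper bound: even if the underlying
    traits correlate perfectly, measurement noise attenuates the
    observed correlation to sqrt(rel_x * rel_y)."""
    return math.sqrt(rel_x * rel_y)

# Two measures that are each 90% reliable:
print(round(max_expected_correlation(0.9, 0.9), 2))   # 0.9

# Plugging in reliabilities of about .7 for fMRI and .8 for
# behavioural measures gives roughly .748 -- in the ballpark of
# the .74 ceiling Vul et al. cite.
print(round(max_expected_correlation(0.7, 0.8), 3))   # 0.748
```

Note that this is a ceiling on what a *true* correlation can produce in noisy data, not a hard limit on what a sample correlation can come out as, which is exactly where the two sides disagree.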
However, the response from neuroscientist Mbemba Jabbi and colleagues suggests that this should be based on the maximum reliability ever found:
Vul et al. argue that many of the brain-behavior correlations published in social neuroscience articles are "impossibly high" and that "the highest possible meaningful correlation that could be obtained would be .74". This categorical claim is based on a statistical upper bound argument which relies on the questionable assumption that "fMRI measures will not often have reliabilities greater than about .7". However, logically, any theoretical upper bound argument would have to be based on the highest reliability values ever reported for behavioural and fMRI data, respectively (e.g. for fMRI, near-perfect reliabilities of 0.98 have been reported in Fernandez et al. 2003).
I think they’ve caricatured the argument a little here. Vul’s point was that most studies suggest an average reliability of about .7; the further correlations climb past that limit, the less likely it is that they reflect genuine relationships.
It’s not a ‘this is strictly impossible’ argument, it’s a ‘it’s too unlikely to believe’ argument.
However, the majority of the ripostes, which argue that Vul and his colleagues have misunderstood the analysis process, are quite a counterpunch to the heavyweight criticisms.
As an aside, there’s an interesting comment from neuroscientist Tania Singer on how the study has been discussed:
“I first heard about this when I got a call from a journalist,” comments neuroscientist Tania Singer of the University of Zurich, Switzerland, whose papers on empathy are listed as examples of bad analytical practice. “I was shocked, this is not the way that scientific discourse should take place.”
Since when? The paper was accepted by a peer-reviewed journal before it was released to the public. The idea that something actually has to appear in print before anyone is allowed to discuss it seems to be a little outdated (in fact, was this ever the case?).
It’s interesting that Vul’s reply essentially makes the counter-claim that the ‘red list’ researchers have misunderstood the analysis process.
This really highlights the point that neuroimaging analysis is not only at the forefront of the understanding of neurophysiology, but also at the forefront of the development of statistical methods.
In other words, the maths ain’t obvious, because the data sets are large, complex, and inter-related in ways we don’t fully understand. We’re still developing the methods to make sense of them. This controversy is part of that process.