Some of the researchers under fire from the recent ‘Voodoo Correlations in Social Neuroscience’ article have responded to the accusations of misleading data analysis by suggesting that the accusers have misunderstood the finer points of brain imaging, leading them to falsely infer errors where none exist.
In an academic reply, available online as a pdf, and in an article on the controversy published in this week’s Nature, some of the researchers responsible for the ‘red list’ studies set out their case.
As you might expect, the responses are fairly technical points about statistical analysis in neuroimaging research but are generally well made, suggesting that the accusers don’t fully grasp which measures are related or unrelated, that they don’t account for tests which reduce spurious findings, and that they didn’t ask in sufficient detail about the methods used and so have based their analysis on incomplete information.
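For readers wondering what the underlying fuss is about, the core of Vul et al.'s critique is a non-independence (or "double dipping") worry: if you search many voxels for the one that correlates best with a behavioural score and then report that same correlation, the figure is inflated by the selection step. Here is a toy simulation of that effect (purely illustrative; the numbers are assumptions and this is not any lab's actual pipeline):

```python
import numpy as np

# Illustrative only: with enough voxels of pure noise, picking the voxel
# that best correlates with a behavioural score and reporting that same
# correlation yields an impressive-looking value even though the true
# correlation is exactly zero.
rng = np.random.default_rng(0)

n_subjects = 16      # small sample, typical of the criticised studies
n_voxels = 10_000    # candidate voxels searched over

behaviour = rng.standard_normal(n_subjects)
voxels = rng.standard_normal((n_voxels, n_subjects))  # pure-noise "brain data"

# Pearson correlation of every voxel with behaviour, in one pass:
# r = mean of the product of z-scores.
bz = (behaviour - behaviour.mean()) / behaviour.std()
vz = (voxels - voxels.mean(axis=1, keepdims=True)) / voxels.std(axis=1, keepdims=True)
corrs = vz @ bz / n_subjects

print(f"mean |r| across voxels: {np.abs(corrs).mean():.2f}")  # near the chance level
print(f"best 'voodoo' r:        {np.abs(corrs).max():.2f}")   # spuriously large
```

Part of the researchers' rebuttal, note, is that proper multiple-comparison corrections and independent selection criteria guard against exactly this inflation.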
However, one in particular seems a little hopeful and relates to a central point made by Vul and his colleagues.
Vul suggested that observed correlations shouldn’t exceed a ceiling set by the reliability of the two measures. As we discussed previously, if you have two measures that are each 90% reliable (that is, consistent on repeated measurement), on average you wouldn’t expect correlations higher than .9, because the remaining 10% of each measurement is likely to be random noise.
However, the response from neuroscientist Mbemba Jabbi and colleagues suggests that the ceiling should instead be based on the maximum reliability ever found:
Vul et al. argue that many of the brain-behavior correlations published in social neuroscience articles are “impossibly high” and that “the highest possible meaningful correlation that could be obtained would be .74”. This categorical claim is based on a statistical upper bound argument which relies on the questionable assumption that “fMRI measures will not often have reliabilities greater than about .7”. However, logically, any theoretical upper bound argument would have to be based on the highest reliability values ever reported for behavioural and fMRI data, respectively (e.g. for fMRI, near-perfect reliabilities of 0.98 have been reported in Fernandez et al. 2003).
I think they’ve caricatured the argument a little here. Vul’s point was that most studies suggest an average reliability of about .7, so as correlations climb past that limit it becomes increasingly unlikely that they reflect genuine relationships.
It’s not a ‘this is strictly impossible’ argument; it’s an ‘it’s too unlikely to believe’ argument.
However, the majority of the ripostes, which argue that Vul and his colleagues have misunderstood the analysis process, land as quite a counterpunch to the heavyweight criticisms.
As an aside, there’s an interesting comment from neuroscientist Tania Singer on how the study has been discussed:
“I first heard about this when I got a call from a journalist,” comments neuroscientist Tania Singer of the University of Zurich, Switzerland, whose papers on empathy are listed as examples of bad analytical practice. “I was shocked, this is not the way that scientific discourse should take place.”
Since when? The paper was accepted by a peer-reviewed journal before it was released to the public. The idea that something actually has to appear in print before anyone is allowed to discuss it seems to be a little outdated (in fact, was this ever the case?).
UPDATE: Ed Vul has replied to the rebuttal online. You can read his responses here (via the BPSRD which also has a good piece on the controversy).
It’s interesting that Vul’s reply essentially makes the counter-claim that the ‘red list’ researchers have misunderstood the analysis process.
This really highlights the point that neuroimaging analysis is not only at the forefront of our understanding of neurophysiology, but also at the forefront of the development of statistical methods.
In other words, the maths ain’t obvious, because the data sets are large, complex, and inter-related in ways we don’t fully understand. We’re still developing methods to make sense of them, and this controversy is part of that process.
pdf of academic reply to ‘Voodoo correlations’ paper (thanks Alex!)
Link to excellent Nature article on the controversy.
5 thoughts on “Voodoo accusations false, reply ‘red list’ researchers”
I rather enjoyed the Vul et al article and rejoinder, but I cringe now to hear that they did not bother to contact the authors listed in their report, even after they’d clearly already been in touch with every author via their survey.
While there was no official onus on the Vul et al team to notify the authors listed in their critique that the paper had been accepted, doing so would have been a polite way of avoiding the type of reaction experienced by Dr. Singer, and would have reduced the likelihood of serious animosity in an already critical context.
We were invited by the same journal to write an official reply to the Vul et al. paper. Here is a link.
Click to access LiebermanBerkmanWager(invitedreply).pdf