The Neurocritic has an excellent post explaining the science of why some of the most widely reported brain scanning studies on social interaction are flawed.
The new analysis has been led by neuroscientist Edward Vul and we reported on this bombshell last week, but this new post clearly explains the problems for those not wanting to plough through the original academic text.
The paper stems from the observation that the correlations between brain activity and psychological states reported in some of these headline studies are remarkably high – one as high as .88.
A correlation is a test of how much two measures are related. A correlation of 1 means that the two measures are perfectly in sync: every change in one is mirrored by a change in the other. A correlation of 0 means there is no syncing at all, and any number in between gives a sliding scale of how much ‘syncing’ there is.
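As a quick illustration of the sliding scale (this is just a toy simulation, not anything from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=1000)

# Perfectly in sync: one measure is just a rescaled copy of the other.
perfect = np.corrcoef(x, 2 * x + 3)[0, 1]

# No syncing: the second measure has nothing to do with the first.
unrelated = np.corrcoef(x, rng.normal(size=1000))[0, 1]

print(f"perfectly synced: r = {perfect:.2f}")   # exactly 1
print(f"unrelated:        r = {unrelated:.2f}")  # close to 0
```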
So a correlation of .88 is pretty impressive and suggests near-perfect syncing. Except that it’s higher than would be possible given how accurate the two measures are.
Imagine that you have a 10cm ruler that can only measure to the nearest centimetre. The accuracy of your ruler is only 90% because it fudges any part-centimetre length down to the nearest centimetre.
It would be almost impossible to get a perfect correlation using this ruler, because there’s 10% randomness – or 10% ‘out-of-syncness’ – in every measurement.
And once you know how much randomness there is, you can estimate the maximum correlation you can get because you know the randomness is not going to reliably sync with anything else.
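This ceiling can be seen in a simple simulation (my own sketch, not the paper’s analysis): two measures that are perfectly in sync underneath, but each read through a noisy instrument, can never show their true correlation of 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# The true underlying quantity, shared by both measures (perfect syncing).
truth = rng.normal(size=n)

# Each instrument adds its own random error. With this noise level each
# measure's reliability (how well it agrees with itself on a repeat
# measurement) works out to 1 / (1 + 0.5**2) = 0.8.
noise_sd = 0.5
measure_a = truth + rng.normal(scale=noise_sd, size=n)
measure_b = truth + rng.normal(scale=noise_sd, size=n)

observed_r = np.corrcoef(measure_a, measure_b)[0, 1]

# The classic attenuation formula: the maximum observable correlation is
# the square root of the product of the two reliabilities.
reliability = 1 / (1 + noise_sd**2)
ceiling = np.sqrt(reliability * reliability)

print(f"observed r ≈ {observed_r:.2f}, theoretical ceiling = {ceiling:.2f}")
```

The observed correlation lands at the ceiling of about .8, not 1 – the randomness in each instrument caps it, which is exactly the logic Vul’s team applied to the brain scanning and psychological measures.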
Edward Vul and his team did this with these headline social brain imaging studies and found that some produced correlations higher than would be possible from what we know of how accurate the brain scanning and psychological measures are. So something must be up.
It turns out that some studies deliberately picked out brain areas based on which voxels [micro areas] already had high correlations, while others only reported correlations from a spot in an area that was already the most active.
In other words, they were only selecting the cream of the crop but were reporting it as if it was the general picture.
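The cream-of-the-crop problem is easy to demonstrate with made-up data (again, a hypothetical simulation of the selection step, not the studies’ actual pipelines): even when brain “activity” is pure noise with a true correlation of zero, picking out the voxels that already correlate strongly and then reporting the correlation within that selection produces impressively large numbers.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects = 16      # small samples, typical of these fMRI studies
n_voxels = 10_000    # candidate brain locations

behaviour = rng.normal(size=n_subjects)

# Simulated activity that is pure noise: true correlation with behaviour is 0.
activity = rng.normal(size=(n_voxels, n_subjects))

# Correlate every voxel with the behavioural score.
b = (behaviour - behaviour.mean()) / behaviour.std()
a = (activity - activity.mean(axis=1, keepdims=True)) / activity.std(
    axis=1, keepdims=True)
rs = (a @ b) / n_subjects

# Non-independent analysis: select voxels that already correlate strongly,
# then report the correlation among only those voxels.
selected = rs[np.abs(rs) > 0.6]
print(f"{selected.size} voxels pass the cut; "
      f"mean |r| among them = {np.abs(selected).mean():.2f}")
```

Some voxels pass the cut by chance alone, and the correlation reported from them is over .6 despite there being no real relationship anywhere in the data.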
Neurocritic goes into this in more detail in relation to specific studies, and it’s well worth checking out for the gory details.
Importantly, the researchers behind the flawed studies weren’t trying to ‘fake’ results; they were using a common method which Vul has discovered is flawed.
He has called for the researchers to use a more representative form of analysis and correct their findings. We’ll see what happens.