A balanced look at brain scanning

Bioethics think tank The Hastings Center have published an excellent open-access report on ‘Interpreting Neuroimages: The Technology and its Limits’ that takes a critical but balanced look at the use of brain scans for understanding the mind.

They’ve commissioned leading cognitive neuroscientists to write chapters including Geoffrey Aguirre, Martha Farah and Helen Mayberg, as well as having a chapter by some legal folks who discuss whether neuroimaging can teach us anything about moral and legal responsibility.

The chapter by the brilliant Martha Farah is particularly good: it takes a level-headed look at the critiques of fMRI and is essential reading if you want to get up to speed on what brain scans are likely to tell us about the mind and brain.

The report is written in an academic style, but if you’re a dedicated neuroscience fan it probably won’t pose too much of a problem.


Link to ‘Interpreting Neuroimages: The Technology and its Limits’.

4 Comments

  1. Posted March 23, 2014 at 2:34 pm | Permalink

    A good article overall but it gives a pretty rose-tinted view on the state of neuroimaging, in my humble opinion.

    Three quick points.

    1) The article perpetuates an incorrect definition of statistical significance. The author states (p S25): “statistical tests yield a “significance level,” which is the probability that the observed difference between two conditions was due to chance variation alone.” No, it isn’t. The p value is not the posterior probability of the null. One of the reasons people chase p values in various dodgy ways (in fMRI and beyond) is because they don’t understand what a p value is and the severe limitations it places on inference. A p value is not a Bayes factor.

    2) Many sections of the paper end by saying something like “but this problem isn’t unique to fMRI”, as though implying that this somehow mitigates the concern for fMRI research. I’ve never understood the logic of this position. It’s like saying “we have a problem with poverty in the UK but poverty is a worldwide problem so…[it's not our fault] [it's not for us to find a solution] [this means we shouldn't care as much].”

    3) The article overlooks the elephant in the room: known problems with researcher degrees of freedom in fMRI studies, highlighted by Josh Carp and others, e.g. http://journal.frontiersin.org/Journal/10.3389/fnins.2012.00149/full
    This is a huge problem that invalidates p values across vast swathes of fMRI research, yet it warrants not a single mention. Why?

  2. DS
    Posted March 24, 2014 at 6:44 pm | Permalink

    In this series of papers only one mentions problems with the data itself, and it does so in a manner that suggests there is no problem. See Geoffrey K. Aguirre’s article and his discussion of motion artefacts. Note to all who could possibly be misled by this simplistic critique: the application of algorithms ostensibly designed to correct motion artefact is not known to produce images sufficiently free of motion artefact.

  3. Posted March 27, 2014 at 4:02 pm | Permalink

    Geoff Aguirre here. Motion is certainly a challenge for many neuroimaging studies, particularly for “resting state” measures between groups. But does DS wish to claim that the possibility of motion artifacts renders the entire neuroimaging enterprise suspect?

    Regarding Chris’ points: p-hacking is absolutely a problem (and mentioned in my piece). My impression is that its prevalence varies by scientific discipline, related to how mature the neuroimaging technique is within the group, but you may have other opinions. The point of indicating that these problems are not unique to neuroimaging is not to dismiss the urgent need to correct these flaws but instead to indicate that these problems do not invalidate neuroimaging per se as a scientific enterprise, just as p-hacking in genomics does not invalidate genetics as a whole.

  4. rich
    Posted March 27, 2014 at 10:08 pm | Permalink

    I don’t understand Chris Chambers’ first criticism above. The p-value is the probability of obtaining the result under the null hypothesis. Generally, the null can be described as ‘chance variation’. I don’t think the original author is perpetuating an incorrect definition here. But I’m not a statistician, so feel free to correct me.
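The distinction at issue in the first criticism and rich’s reply can be made concrete with a quick simulation, a sketch rather than anything from the report: the p value is the probability of data this extreme *given* the null, not the probability that the null is true given a significant result. The 10% prior on real effects, the effect size, and the sample size below are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sim, n = 5000, 30          # number of simulated studies, samples per group
prior_true_effect = 0.1      # assumption: only 10% of tested effects are real
effect_size = 0.5            # assumption: real effects shift the mean by 0.5 SD

null_is_true = rng.random(n_sim) > prior_true_effect
pvals = np.empty(n_sim)
for i in range(n_sim):
    shift = 0.0 if null_is_true[i] else effect_size
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(shift, 1.0, n)
    pvals[i] = stats.ttest_ind(a, b).pvalue

# Among studies where the null is true, ~5% of p values fall below 0.05,
# exactly as the definition promises.
print((pvals[null_is_true] < 0.05).mean())

# But among "significant" results, the fraction that are actually null
# (an estimate of P(null | p < 0.05)) is far higher than 0.05.
sig = pvals < 0.05
print(null_is_true[sig].mean())
```

Both statements in the thread are compatible: the quoted definition of the p value is fine as a conditional probability under the null, yet reading it as the probability that a significant finding is a fluke, as the report’s phrasing invites, can be off by an order of magnitude once the base rate of true effects is low.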

