I predict a riot (based on a single study)

A group of black bloc researchers, fed up with the lack of interest in replicating psychology studies, has set up a strike force called The Reproducibility Project that will recreate all the studies published in 2008 in three major psychology journals.

That sound you can hear? That’s shit hitting the fan.

The Chronicle of Higher Education covers the project, which will check out the replicability of well-known studies.

So why not check? Well, for a lot of reasons. It’s time-consuming and doesn’t do much for your career to replicate other researchers’ findings. Journal editors aren’t exactly jazzed about publishing replications. And potentially undermining someone else’s research is not a good way to make friends.

Brian Nosek knows all that and he’s doing it anyway. Nosek, a professor of psychology at the University of Virginia, is one of the coordinators of the project. He’s careful not to make it sound as if he’s attacking his own field. “The project does not aim to single out anybody,” he says. He notes that being unable to replicate a finding is not the same as discovering that the finding is false. It’s not always possible to match research methods precisely, and researchers performing replications can make mistakes, too.

But still. If it turns out that a sizable percentage (a quarter? half?) of the results published in these three top psychology journals can’t be replicated, it’s not going to reflect well on the field or on the researchers whose papers didn’t pass the test. In the long run, coming to grips with the scope of the problem is almost certainly beneficial for everyone. In the short run, it might get ugly.

Unfortunately, psychology and science in general still see a non-replication as a failure (in fact, we even use the term ‘failed replication’).

This is clearly nonsense: checking the original finding is equally valuable whether the new data agree or disagree with the original study.

Sadly, we’ll have to change the attitude of several generations of scientists to reset this rusty conceptual switch.

The Reproducibility Project have simply got frustrated with the entrenched attitudes and manned the barricades. And who can blame them?

Link to Chronicle article on The Reproducibility Project.


  1. ANG Guy
    Posted April 19, 2012 at 10:14 pm | Permalink

    “Rusty”??? I would think the headshrinkers would jump up and down in joy that someone is going to the trouble of adding legitimacy to their field… Unless there actually are significant problems…

    Almost all the “hard” science types I have ever met consider all social sciences a joke. There has been a lot of progress in the last 15 years, especially in new technologies to aid in cognitive research. This is GOOD news.

    • Posted April 20, 2012 at 12:06 am | Permalink

      I don’t think there is any dispute that, as a matter of principle, this is a good thing. The point made above, as I understand it, is that it may take a long time before this takes off because of old-fashioned attitudes to the value of replications.

      Also, if you don’t think this is going to cause a stir check out the recent Bargh affair.

  2. Posted April 20, 2012 at 6:37 am | Permalink

    “Unfortunately, psychology and science still see a non-replication as a failure” Srsly? That’s called pseudoscience.

    • Posted April 21, 2012 at 10:32 pm | Permalink

      It is important to distinguish failures-to-replicate that occur because there is a real problem (pseudoscience, fabrication, etc.) from failures-to-replicate that are inevitable in a statistics-driven science. So, if this effort finds 1 out of 100 studies fail to replicate, that will actually beat expectations. On the other hand, I suspect it will be much worse than that.

      If psychology was acting like a real science, and the rate of replication failures was low, we would not consider non-replication as a failure. We would consider the author of the original study to be a good, hard working member of the field, and the person who did the replication a good, hard working member of the field. We would also appreciate it if, assuming the issue was deemed important, a third researcher did a more comprehensive study to settle the matter. If the third party had more than enough power for the non-replication to be conclusive, that wouldn’t be a failure either.

  3. Posted April 20, 2012 at 1:15 pm | Permalink

    “Unfortunately, psychology and science in general still see a non-replication as a failure (in fact, we even use the term ‘failed replication’).

    This is clearly nonsense and checking the original finding is equally as valuable if the new data agree with, or disagree with, the original study.”

    Would you please expand on this point? My understanding is that reproducibility is a foundation of science, so I don’t understand the ‘unfortunately’ in the first paragraph, nor the ‘this is clearly nonsense’ in the second.

    • Posted April 21, 2012 at 7:58 am | Permalink

      Discovering that an effect does not hold by not replicating the study is valued less in scientific and career terms than the first reporting of the data.

      This is what drives the ‘file drawer effect’ and is why replications (or non-replications) are hard to publish.

      So non-replication is a success for science (because we have more data about the world) even if it is a failure for the original theory.

      Unfortunately, these two aspects are not considered sufficiently separately at the moment.

      • Posted April 22, 2012 at 10:06 am | Permalink

        Maybe you should revise the wording in the article itself to say this for the benefit of future readers who may not read the comments. I was as confused as ficial and started wondering if you had turned quack.

  4. Posted April 20, 2012 at 5:53 pm | Permalink

    It’s a superb idea. I worry when single studies get widely disseminated, because if something seems popular and widely used it’s easy to assume that it’s well supported.

    Regardless of whether people are happy, I think we have to do it… I know it’s hard, but researchers must be less attached to the outcome of their work than they are to the process. Put all your effort and pride into the process and let the chips fall where they may.

  5. Posted April 21, 2012 at 6:40 pm | Permalink

    Reblogged this on PHILOSOPHY & POLITY and commented:
    Courtesy of Mind Hacks.
