Who could possibly be against replication of research results? Jason Mitchell of Harvard University is, under some conditions, for reasons described in his essay On the emptiness of failed replications.
I wrote something for the Center for Open Science which tries to draw out the sensible points in Mitchell’s essay – something I thought worth doing, since for many people being against replication in science is like being against motherhood and apple pie. It’s worth noting that I was invited to do this by Brian Nosek, who is co-founder of the Center for Open Science and instrumental in the Many Labs projects. As such, Brian is implicitly one of the targets of Mitchell’s criticisms, so kudos to him for encouraging this discussion.
Here’s my commentary: What Jason Mitchell’s ‘On the emptiness of failed replications’ gets right
3 thoughts on “Motherhood, apple pie and replication”
Another critique, by Professor of Mathematical Statistics Olle Häggström: http://haggstrom.blogspot.se/2014/07/on-value-of-replications-jason-mitchell.html
I understand that it may be difficult to replicate an experiment, but if positive results are truly hard-won rather than random, then the experimenter ought to be able to explain the “sources of experimental success.” That should NOT be the job of the skeptic.
In Mitchell’s example, the explorer needs at least to say that the black swan was found in Australia.
I also see nothing wrong with specifically targeting surprising positive results, especially when the experiments are not robust. If the effect is random, replication is needed to prove it so. If the effect is real, replication is needed to get at those sources of success.
This is particularly true if the “honest” experimenter’s positive result is fragile. Consider: Mitchell argues that researchers have strong ex-ante biases toward the positive results they report – meaning most of the USEFUL science they do lies in uncovering the “bungled” “details” of their experiments, NOT in obtaining the final result.
So right about weak observations. If I understand behavior studies at all, testing a hypothesis seems less useful than simply observing. How could research with so many potential interpretations of its findings possibly accept or reject a statement of relationship? I’m speaking from zero experience here, by the way.
I’ve taken part in bird field studies though (relative abundance, not behavior) and the number of field seasons is, ideally, as many as possible.
Furthermore, while my thesis turned up no significant results (with more field seasons I’m sure I’d get something), we did analyze a completely different set of variables and found an unexpected significant result. That’s the kind of surprise finding that makes science so cool, and that should be abundant in behavior studies.
What boggles the mind is when researchers choose to do “a study” and it’s somehow treated as revelatory. If it contributes to theory, great, but as an isolated finding without context it doesn’t seem to have much credibility.