There’s an interesting article in Wired about how scientists deal with data that conflicts with their expectations and whether biases in how the brain handles contradictory information might influence scientific reasoning.
The piece is based on the work of Kevin Dunbar, who combines the sociology of science with the cognitive neuroscience of scientific reasoning.
In other words, he’s trying to understand what scientists actually do to make their discoveries (rather than what they say they do, or what they say they should do) and whether there are specific features of the way the brain handles reasoning that might encourage these practices.
One of his main findings is that when experimental results appear that can’t be explained, they’re often discounted as useless. The researchers might say that the experiment was badly designed, the equipment was faulty, and so on.
It may indeed be the case that such faults occurred, but the same faults could equally be present when the data is consistent with expectations. These possibilities are rarely investigated when the data agrees with pre-existing assumptions, however, leading to a possible bias in how data is interpreted.
Dunbar is not the first to tackle this issue. In fact, the first to do so was probably one of the most important but unrecognised philosophers of science, Charles Fort, who is typically associated with ‘Fortean’ or anomalous phenomena – such as fish falling from the sky.
Fort did indeed collect reports of all types of anomalous phenomena (interestingly, almost all from scientific journals) and used them as a critique of the scientific method – noting that while scientists say they reason from the data to theories about the world, what they actually do is filter the data in light of their theories and frequently ignore information that contradicts existing assumptions – hence, ‘damning’ some data as unacceptable.
This was later echoed by the philosophers and sociologists who began studying the scientific community in the 20th century and noted that the scientific method was not a clear-cut procedure but more of a tool in a wider consensus-forming toolbox.
Probably the most important thinker in this regard, not mentioned in the Wired article, was the philosopher Paul Feyerabend, who noted that researchers regularly violate the ‘rules’ of science and that this actually promotes progress rather than impeding it.
The article goes on to discuss research suggesting that part of this bias towards information consistent with our assumptions may be due to differences in the way the brain handles confirming and contradicting information.
Curiously, the piece mentions a 2003 study in which students were apparently asked to select the more accurate representation of gravity while in an fMRI scanner, but unfortunately, I can find no trace of it.
However, a 2005 study by the same team, in which participants were asked to evaluate theories supported to different degrees by the data they’d seen (to do with how drugs relieve depression), came to similar conclusions. Namely, that brain activity is markedly different when we receive information that confirms our theories compared to when we receive information that challenges them.
In particular, contradictory information seems to activate an area deep in the frontal lobe (the anterior cingulate cortex, or ACC) often associated with ‘conflict monitoring’, along with an outer area of the frontal lobe (the dorsolateral prefrontal cortex, or DLPFC) associated with sorting out conflicting information, likely by filtering out some of the incompatible data so it is less likely to be registered or remembered.
There is clearly much more to scientific reasoning than this; it is a vast and complex process, both within individual researchers and between groups of people. I was particularly interested to read that breakthroughs were most likely to come from group discussions:
While the scientific process is typically seen as a lonely pursuit — researchers solve problems by themselves — Dunbar found that most new scientific ideas emerged from lab meetings, those weekly sessions in which people publicly present their data. Interestingly, the most important element of the lab meeting wasn’t the presentation — it was the debate that followed. Dunbar observed that the skeptical (and sometimes heated) questions asked during a group session frequently triggered breakthroughs, as the scientists were forced to reconsider data they’d previously ignored. The new theory was a product of spontaneous conversation, not solitude; a single bracing query was enough to turn scientists into temporary outsiders, able to look anew at their own work.
It turns out, though, that discussion among people from a diverse range of backgrounds is most important – a room full of people who share the same assumptions and expertise tends not to lead to creative scientific insights.
Link to Wired article on scientific reasoning.