Believing everyone else is wrong is a danger sign

I have a guest post for the Research Digest, snappily titled ‘People who think their opinions are superior to others are most prone to overestimating their relevant knowledge and ignoring chances to learn more’. The paper I review is about the so-called “belief superiority” effect, which is defined by thinking that your views are better than other people’s (i.e. not just that you are right, but that other people are wrong). The finding that people high in belief superiority are more likely to overestimate their knowledge is a twist on the famous Dunning-Kruger phenomenon, showing that it isn’t just ignorance that predicts overconfidence, but also the specific belief that everyone else has mistaken beliefs.

Here are the first lines of the Research Digest piece:

We all know someone who is convinced their opinion is better than everyone else’s on a topic – perhaps, even, that it is the only correct opinion to have. Maybe, on some topics, you are that person. No psychologist would be surprised that people who are convinced their beliefs are superior think they are better informed than others, but this fact leads to a follow-on question: are people actually better informed on the topics for which they are convinced their opinion is superior? This is what Michael Hall and Kaitlin Raimi set out to check in a series of experiments in the Journal of Experimental Social Psychology.

Read more here: ‘People who think their opinions are superior to others are most prone to overestimating their relevant knowledge and ignoring chances to learn more’

 

How To Become A Centaur

Nicky Case (of Explorable Explanations and Parable of the Polygons internet fame) has a fantastic essay which picks up on the theme of my last Cyberselves post – technology as companion, not competitor.

In How To Become A Centaur, Case gives a blitz history of AI, and of its lesser-known cousin IA – Intelligence Augmentation. The insight that digital technology could be a ‘bicycle for the mind’ (Steve Jobs’ phrase) gave us the modern computer, as shown in the 1968 Mother of All Demos, which introduced the world to the mouse, hypertext, video conferencing and collaborative working. (1968, people! 1968! As Case notes, 44 years before Google Docs, 35 years before Skype.)

We’re living in the world made possible by Engelbart’s demo: digital tools, from mere phones to the remote presence they enable, to the remote action that robots are surely going to make more common. As Case says:

a tool doesn’t “just” make something easier — it allows for new, previously-impossible ways of thinking, of living, of being.

And the vital insight is that the future will rely on identifying the strengths and weaknesses of natural and artificial cognition, and figuring out how to harness them together. Case again:

When you create a Human+AI team, the hard part isn’t the “AI”. It isn’t even the “Human”.

It’s the “+”.

The article is too good to try to summarise. Read the full text here.

Cross-posted at the Cyberselves blog.

Previously: Tools, substitutes or companions: three metaphors for thinking about technology, Cyberselves: How Immersive Technologies Will Impact Our Future Selves

The backfire effect is elusive

The backfire effect is when correcting misinformation hardens, rather than corrects, someone’s mistaken belief. It’s a relative of so-called ‘attitude polarisation’, whereby people’s views on politically controversial topics can get more, not less, extreme when they are exposed to counter-arguments.

The finding that misperceptions are hard to correct is not new – it fits with research on the tenacity of beliefs and the difficulty of debunking.

The backfire effect appears to give an extra spin on this. If backfire effects hold, then correcting fake news can be worse than useless – the correction could reinforce the misinformation in people’s minds. This is what Brendan Nyhan and Jason Reifler warned about in a 2010 paper ‘When Corrections Fail: The Persistence of Political Misperceptions’.

Now, work by Tom Wood and Ethan Porter suggests that backfire effects may not be common or reliable. Reporting in ‘The Elusive Backfire Effect: Mass Attitudes’ Steadfast Factual Adherence’, they exposed over 10,000 Mechanical Turk participants, across 5 experiments and 52 different topics, to misleading statements from American politicians from both main parties. Across all statements, and all experiments, they found that showing people corrections moved their beliefs away from the false information. There was an effect of the match between the ideology of the participant and that of the politician, but it wasn’t large:

Among liberals, 85% of issues saw a significant factual response to correction, among moderates, 96% of issues, and among conservatives, 83% of issues. No backfire was observed for any issue, among any ideological cohort

All in all, this suggests, in their words, that ‘The backfire effect is far less prevalent than existing research would indicate’. Far from being counter-productive, corrections work. Part of the power of this new study is that it uses the same kind of materials and participants as the 2010 paper reporting backfire effects – statements about US politics and US citizens. Although the numbers make the new study convincing, it doesn’t show the backfire effect will never occur, especially for different attitudes in different contexts or nations.

So, don’t give up on fact checking just yet – people are more reasonable about their beliefs than the backfire effect suggests.

Original paper: Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303-330.

New studies: Wood, T., & Porter, E. (in press). The elusive backfire effect: Mass attitudes’ steadfast factual adherence. Political Behavior.

The news is also good in a related experiment on fake news by the same team: Sex Trafficking, Russian Infiltration, Birth Certificates, and Pedophilia: A Survey Experiment Correcting Fake News. Regardless of ideology or content of fake news, people were responsive to corrections.

Read more about the psychology of responsiveness to argument in my ‘For argument’s sake: evidence that reason can change minds’.

Conspiracy theories as maladaptive coping

A review called ‘The Psychology of Conspiracy Theories‘ sets out a theory of why individuals end up believing Elvis is alive, NASA faked the moon landings or 9/11 was an inside job. Karen Douglas and colleagues suggest:

Belief in conspiracy theories appears to be driven by motives that can be characterized as epistemic (understanding one’s environment), existential (being safe and in control of one’s environment), and social (maintaining a positive image of the self and the social group).

In their review they cover evidence showing that factors like uncertainty about the world, lack of control or social exclusion (factors affecting epistemic, existential and social motives respectively) are all associated with increased susceptibility to conspiracy theory beliefs.

But they also show, paradoxically, that exposure to conspiracy theories doesn’t salve these needs. People presented with pro-conspiracy-theory information about vaccines or climate change felt a reduced sense of control and increased disillusionment with politics and distrust of government. Douglas’ argument is that although individuals might find conspiracy theories attractive because they promise to make sense of the world, they actually increase uncertainty and decrease the chance people will take effective collective action.

My take would be that, viewed like this, conspiracy theories are a form of maladaptive coping. The account makes sense of why we are all vulnerable to conspiracy theories – and we are all vulnerable: many individual conspiracy theories have very widespread subscription. For example, half of Americans believe Lee Harvey Oswald did not act alone in the assassination of JFK – and of course polling about individual beliefs must underestimate the proportion of individuals who subscribe to at least one conspiracy theory. The account also makes sense of why some people are more susceptible than others – people who have less education, are more excluded or powerless, and have a heightened need to see patterns which aren’t necessarily there.

There are a few areas where this account isn’t fully satisfying.
– it doesn’t really offer a psychologically grounded definition of conspiracy theories. Douglas’s working definition is ‘explanations for important events that involve secret plots by powerful and malevolent groups’, which seems to include some cases of conspiracy beliefs which aren’t ‘conspiracy theories’ (sometimes it is reasonable to believe in secret plots by the powerful, because sometimes the powerful really are involved in secret plots), and it seems to miss some cases of conspiracy-theory-type reasoning (for example, paranoid beliefs about other people in your immediate social world).
– one aspect of conspiracy theories is that they are hard to disprove, with, for example, people presenting contrary evidence seen as confirming the existence of the conspiracy. But the common psychological tendency to resist persuasion is well known. Are conspiracy theories especially hard to shift, any more than other beliefs (or the beliefs of non-conspiracy theorists)? Would it be easier to persuade you that the earth is flat than it would be to persuade a flat-earther that the earth is round? If not, then the identifying mark of conspiracy theories may be the factors that lead you to get into them, rather than their dynamics once you’ve got them.
– and how you get into them seems crucially unaddressed by the experimental psychology methods Douglas and colleagues deploy. We have correlational data on the kinds of people who subscribe to conspiracy theories, and experimental data on presenting people with conspiracy theories, but no rich ethnographic account of how individuals find themselves pulled into the world of a conspiracy theory (or how they eventually get out of it).

Further research is, as they say, needed.

Reference: Douglas, K., Sutton, R. M., & Cichocka, A. (2017). The psychology of conspiracy theories. Current Directions in Psychological Science, 26 (6), 538-542.

Karen Douglas’ homepage

Previously on mindhacks.com: Conspiracy theory as character flaw, That’s what they want you to believe. Conspiracy theory page on mindhacks wiki.

I saw Karen Douglas present this work at a talk to Sheffield Skeptics in the Pub. Thanks to them for organising.

The Social Priming Studies in “Thinking Fast and Slow” are not very replicable

In Daniel Kahneman’s “Thinking Fast and Slow” he introduces research on social priming – the idea that subtle cues in the environment may have significant, reliable effects on behaviour. In that book, published in 2011, Kahneman writes “disbelief is not an option” about these results. Since then, the evidence against the reliability of social priming research has been mounting.

In a new analysis, ‘Reconstruction of a Train Wreck: How Priming Research Went off the Rails’, Ulrich Schimmack, Moritz Heene, and Kamini Kesavan review chapter 4 of Thinking Fast and Slow, picking out the references which provide evidence for social priming and calculating how statistically reliable they are.

Their conclusion:

The results are eye-opening and jaw-dropping.  The chapter cites 12 articles and 11 of the 12 articles have an R-Index below 50.  The combined analysis of 31 studies reported in the 12 articles shows 100% significant results with average (median) observed power of 57% and an inflation rate of 43%.  …readers of… “Thinking Fast and Slow” should not consider the presented studies as scientific evidence that subtle cues in their environment can have strong effects on their behavior outside their awareness.

The argument is that the pattern of 100% significant results is next to impossible, even if the effects being studied were real, given the weak statistical power of the studies to detect true effects.
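To get a rough feel for that argument, here is a minimal back-of-the-envelope sketch (mine, not Schimmack and colleagues’ own calculation), using only the figures quoted above: 31 studies, all significant, median observed power of 57%. It assumes, purely for illustration, that the studies are independent and that each genuinely had a 57% chance of reaching significance.

```python
# Back-of-the-envelope check of the "too many significant results" argument.
# Figures from the quoted analysis: 31 studies, 100% significant,
# median observed power 57%. Treating the studies as independent is an
# assumption made here purely for illustration.

n_studies = 31
power = 0.57

p_all_significant = power ** n_studies
inflation = 1.00 - power  # the reported 43% "inflation rate": 100% significant minus 57% power

print(f"P(all {n_studies} studies significant) ~= {p_all_significant:.1e}")
print(f"Inflation rate: {inflation:.0%}")
```

This prints a probability of roughly 2.7e-08 – odds of about 1 in 37 million – which is why a run of 31 out of 31 significant results looks so implausible at that level of power.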

Remarkably, Kahneman responds in the comments:

What the blog gets absolutely right is that I placed too much faith in underpowered studies. …I have changed my views about the size of behavioral priming effects – they cannot be as large and as robust as my chapter suggested.

The original analysis and Kahneman’s response are worth reading in full. Together they give a potted history of the replication crisis and a summary of some of its prime causes (e.g. file drawer effects), as well as showing how mature psychological scientists can make, and respond to, critique.

Original analysis: ‘Reconstruction of a Train Wreck: How Priming Research Went off the Rails‘, Ulrich Schimmack, Moritz Heene, and Kamini Kesavan. (Is it a paper? Is it a blogpost? Who knows?!)

Kahneman’s response

How to overcome bias

How do you persuade somebody of the facts? Asking them to be fair, impartial and unbiased is not enough. To explain why, psychologist Tom Stafford analyses a classic scientific study.

One of the tricks our mind plays is to highlight evidence which confirms what we already believe. If we hear gossip about a rival we tend to think “I knew he was a nasty piece of work”; if we hear the same about our best friend we’re more likely to say “that’s just a rumour”. If you don’t trust the government then a change of policy is evidence of their weakness; if you do trust them the same change of policy can be evidence of their inherent reasonableness.

Once you learn about this mental habit – called confirmation bias – you start seeing it everywhere.

This matters when we want to make better decisions. Confirmation bias is OK as long as we’re right, but all too often we’re wrong, and we only pay attention to the deciding evidence when it’s too late.

How we should protect our decisions from confirmation bias depends on why, psychologically, confirmation bias happens. There are, broadly, two possible accounts and a classic experiment from researchers at Princeton University pits the two against each other, revealing in the process a method for overcoming bias.

The first theory of confirmation bias is the most common. It’s the one you can detect in expressions like “You just believe what you want to believe”, or “He would say that, wouldn’t he?” or when someone is accused of seeing things a particular way because of who they are, what their job is or which friends they have. Let’s call this the motivational theory of confirmation bias. It has a clear prescription for correcting the bias: change people’s motivations and they’ll stop being biased.

The alternative theory of confirmation bias is more subtle. The bias doesn’t exist because we only believe what we want to believe, but instead because we fail to ask the correct questions about new information and our own beliefs. This is a less neat theory, because there could be one hundred reasons why we reason incorrectly – everything from limitations of memory to inherent faults of logic. One possibility is that we simply have a blindspot in our imagination for the ways the world could be different from how we first assume it is. Under this account the way to correct confirmation bias is to give people a strategy to adjust their thinking. We assume people are already motivated to find out the truth, they just need a better method. Let’s call this the cognition theory of confirmation bias.

Thirty years ago, Charles Lord and colleagues published a classic experiment which pitted these two methods against each other. Their study used a persuasion experiment which previously had shown a kind of confirmation bias they called ‘biased assimilation’. Here, participants were recruited who had strong pro- or anti-death penalty views and were presented with evidence that seemed to support the continuation or abolition of the death penalty. Obviously, depending on what you already believe, this evidence is either confirmatory or disconfirmatory. Their original finding showed that the nature of the evidence didn’t matter as much as what people started out believing. Confirmatory evidence strengthened people’s views, as you’d expect, but so did disconfirmatory evidence. That’s right, anti-death penalty people became more anti-death penalty when shown pro-death penalty evidence (and vice versa). A clear example of biased reasoning.

For their follow-up study, Lord and colleagues re-ran the biased assimilation experiment, but testing two types of instructions for assimilating evidence about the effectiveness of the death penalty as a deterrent for murder. The motivational instructions told participants to be “as objective and unbiased as possible”, to consider themselves “as a judge or juror asked to weigh all of the evidence in a fair and impartial manner”. The alternative, cognition-focused, instructions were silent on the desired outcome of the participants’ consideration, instead focusing only on the strategy to employ: “Ask yourself at each step whether you would have made the same high or low evaluations had exactly the same study produced results on the other side of the issue.” So, for example, if presented with a piece of research that suggested the death penalty lowered murder rates, the participants were asked to analyse the study’s methodology and imagine the results pointed the opposite way.

They called this the “consider the opposite” strategy, and the results were striking. Instructed to be fair and impartial, participants showed the exact same biases when weighing the evidence as in the original experiment. Pro-death penalty participants thought the evidence supported the death penalty. Anti-death penalty participants thought it supported abolition. Wanting to make unbiased decisions wasn’t enough. The “consider the opposite” participants, on the other hand, completely overcame the biased assimilation effect – they weren’t driven to rate the studies which agreed with their preconceptions as better than the ones that disagreed, and didn’t become more extreme in their views regardless of which evidence they read.

The finding is good news for our faith in human nature. It isn’t that we don’t want to discover the truth, at least in the microcosm of reasoning tested in the experiment. All people needed was a strategy which helped them overcome the natural human short-sightedness to alternatives.

The moral for making better decisions is clear: wanting to be fair and objective alone isn’t enough. What’s needed are practical methods for correcting our limited reasoning – and a major limitation is our imagination for how else things might be. If we’re lucky, someone else will point out these alternatives, but if we’re on our own we can still take advantage of crutches for the mind like the “consider the opposite” strategy.

This is my BBC Future column from last week. You can read the original here. My ebook For argument’s sake: Evidence that reason can change minds is out now.

Echo chambers: old psych, new tech

If you were surprised by the result of the Brexit vote in the UK or by the Trump victory in the US, you might live in an echo chamber – a self-reinforcing world of people who share the same opinions as you. Echo chambers are a problem, and not just because it means some people make incorrect predictions about political events. They threaten our democratic conversation, splitting up the common ground of assumption and fact that is needed for diverse people to talk to each other.

Echo chambers aren’t just a product of the internet and social media, however, but of how those things interact with fundamental features of human nature. If we understand these features of human nature, maybe we can think creatively about ways to escape them.

Built-in bias

One thing that drives echo chambers is our tendency to associate with people like us. Sociologists call this homophily. We’re more likely to make connections with people who are similar to us. That’s true for ethnicity, age, gender, education and occupation (and, of course, geography), as well as a range of other dimensions. We’re also more likely to lose touch with people who aren’t like us, further strengthening the niches we find ourselves in. Homophily is one reason obesity can seem contagious – people who are at risk of gaining weight are disproportionately more likely to hang out with each other and share an environment that encourages obesity.

Another factor that drives the echo chamber is our psychological tendency to seek information that confirms what we already know – often called confirmation bias. Worse, even when presented with evidence to the contrary, we show a tendency to dismiss it and even harden our convictions. This means that even if you break into someone’s echo chamber armed with facts that contradict their view, you’re unlikely to persuade them with those facts alone.

News as information and identity

More and more of us get our news primarily from social media and use that same social media to discuss the news.

Social media takes our natural tendencies to associate with like-minded people and to seek information that confirms our convictions, and amplifies them. Dan Kahan, professor of law and psychology at Yale, describes each of us as switching between two modes of information processing – identity-affirming and truth-seeking. The result is that, for issues which, for whatever reason, become associated with a group identity, even the most informed or well-educated can believe radically different things, because believing those things is tied up with signalling group identity more than with a pursuit of evidence.

Mitigating human foibles

Where we go from here isn’t clear. The fundamentals of human psychology won’t just go away, but they do change depending on the environment we’re in. If technology and the technological economy reinforce the echo chamber, we can work to reshape these forces so as to mitigate it.

We can recognise that a diverse and truth-seeking media is a public good. That means it is worth supporting – both in established forms like the BBC, and in new forms like Wikipedia and The Conversation.

We can support alternative funding models for non-public media. Paying for news may seem old-fashioned, but there are long-term benefits. New ways of doing it are popping up. Services such as Blendle let you access news stories that are behind a paywall by offering a pay-per-article model.

Technology can also help with individual solutions to the echo chamber, if you’re so minded. For Twitter users, otherside.site lets you view the feed of any other Twitter user, so if you want to know what Nigel Farage or Donald Trump reads on Twitter, you can. (I wouldn’t bother with Trump. He only follows 41 people – mostly family and his own businesses. Now that’s an echo chamber.)

For Facebook users, politecho.org is a browser extension that shows the political biases of your friends and Facebook newsfeed. If you want a shortcut, this Wall Street Journal article puts Republican and Democratic Facebook feeds side-by-side.

Of course, these things don’t remove the echo chamber, but they do highlight the extent to which you’re in one, and – as with other addictions – recognising that you have a problem is the first step to recovery.

This article was originally published on The Conversation. Read the original article.