The best way to win an argument

How do you change someone’s mind if you think you are right and they are wrong? Psychology reveals that the tactic we usually resort to is the last thing we should try.

You are, I’m afraid to say, mistaken. The position you are taking makes no logical sense. Just listen up and I’ll be more than happy to elaborate on the many, many reasons why I’m right and you are wrong. Are you feeling ready to be convinced?

Whether the subject is climate change, the Middle East or forthcoming holiday plans, this is the approach many of us adopt when we try to convince others to change their minds. It’s also an approach that, more often than not, leads to the person on the receiving end hardening their existing position. Fortunately research suggests there is a better way – one that involves more listening, and less trying to bludgeon your opponent into submission.

A little over a decade ago Leonid Rozenblit and Frank Keil from Yale University suggested that in many instances people believe they understand how something works when in fact their understanding is superficial at best. They called this phenomenon “the illusion of explanatory depth”. They began by asking their study participants to rate how well they understood how things like flushing toilets, car speedometers and sewing machines worked, before asking them to explain what they understood and then answer questions on it. The result was that, on average, people in the experiment rated their understanding as much worse after it had been put to the test.

What happens, argued the researchers, is that we mistake our familiarity with these things for the belief that we have a detailed understanding of how they work. Usually, nobody tests us, and if we have any questions about them we can just take a look. Psychologists call this human tendency to take mental short cuts when making decisions or assessments the “cognitive miser” theory.

Why would we bother expending the effort to really understand things when we can get by without doing so? The interesting thing is that we manage to hide from ourselves exactly how shallow our understanding is.

It’s a phenomenon that will be familiar to anyone who has ever had to teach something. Usually, it only takes the first moments when you start to rehearse what you’ll say to explain a topic, or worse, the first student question, for you to realise that you don’t truly understand it. All over the world, teachers say to each other “I didn’t really understand this until I had to teach it”. Or as researcher and inventor Mark Changizi quipped: “I find that no matter how badly I teach I still learn something”.

Explain yourself

Research published last year on this illusion of understanding shows how the effect might be used to convince others they are wrong. The research team, led by Philip Fernbach, of the University of Colorado, reasoned that the phenomenon might hold as much for political understanding as for things like how toilets work. Perhaps, they figured, people who have strong political opinions would be more open to other viewpoints, if asked to explain exactly how they thought the policy they were advocating would bring about the effects they claimed it would.

Recruiting a sample of Americans via the internet, they polled participants on a set of contentious US policy issues, such as imposing sanctions on Iran, healthcare and approaches to carbon emissions. One group was asked to give their opinion and then provide reasons for why they held that view. This group got the opportunity to put their side of the issue, in the same way anyone in an argument or debate has a chance to argue their case.

Those in the second group did something subtly different. Rather than provide reasons, they were asked to explain how the policy they were advocating would work. They were asked to trace, step by step, from start to finish, the causal path from the policy to the effects it was supposed to have.

The results were clear. People who provided reasons remained as convinced of their positions as they had been before the experiment. Those who were asked to provide explanations softened their views, and reported a correspondingly larger drop in how they rated their understanding of the issues. People who had previously been strongly for or against carbon emissions trading, for example, tended to become more moderate – ranking themselves as less certain in their support or opposition to the policy.

So this is something worth bearing in mind next time you’re trying to convince a friend that we should build more nuclear power stations, that the collapse of capitalism is inevitable, or that dinosaurs co-existed with humans 10,000 years ago. Just remember, however, there’s a chance you might need to be able to explain precisely why you think you are correct. Otherwise you might end up being the one who changes their mind.

This is my BBC Future column from last week. The original is here.

Research Digest #3: Getting to grips with implicit bias

My third and final post at the BPS Research Digest is now up: Getting to grips with implicit bias. Here’s the intro:

Implicit attitudes are one of the hottest topics in social psychology. Now a massive new study directly compares methods for changing them. The results are both good and bad for those who believe that some part of prejudice is our automatic, uncontrollable, reactions to different social groups.

All three studies I covered (#1, #2, #3) use large behavioural datasets, something I’m particularly keen on in my own work.

Link:  Getting to grips with implicit bias

Lou Reed has left the building

Chronicler of the wild side, Lou Reed, has died. Reed was particularly notable for students of human nature for his descriptions of drugs, madness and his own experience of psychiatry.

We’ve touched on his outrageous performance to the New York Society for Clinical Psychiatry before and his songs about or featuring drug use are legendary.

But there was one song that was particularly notable – not least for describing, from his own experience, being ‘treated’ for homosexuality with electroshock therapy when he was a teenager.

Kill Your Sons, released in 1974 (audio), is just a straight-out attack on the psychiatrists that treated him:

All your two-bit psychiatrists
are giving you electroshock
They said, they’d let you live at home with mom and dad
instead of mental hospitals
But every time you tried to read a book
you couldn’t get to page 17
‘Cause you forgot where you were
so you couldn’t even read

Here Reed describes the effects on memory that are common just after electroconvulsive therapy. In this case, forgetting what you’ve just read.

The last verse also describes some of his other contacts with psychiatry, mentioning specific psychiatric clinics and medications:

Creedmore treated me very good
but Paine Whitney was even better
And when I flipped out on PHC
I was so sad, I didn’t even get a letter
All of the drugs, that we took
it really was lots of fun
But when they shoot you up with Thorazine on crystal smoke
you choke like a son of a gun

The last line seems to refer to the effect of being given a dopamine-inhibiting antipsychotic when you’re on a dopamine-boosting amphetamine – presumably after being taken to a psychiatric clinic while still high. Not a pleasant comedown, I would imagine.

I have no idea what ‘PHC’ refers to, though. I’m guessing it’s a psychiatric treatment from the 60s.

It’s interesting that the song was released the year after homosexuality was removed from the DSM in 1973, although it’s never been clear whether this was intentional on Reed’s part or not.

Link to YouTube audio of Kill Your Sons.

Race perception isn’t automatic

Last week’s column for BBC Future describes a neat social psychology experiment from an unlikely source. Three evolutionary psychologists reasoned that claims that we automatically categorise people by ethnicity must be wrong. Here’s how they set out to prove it. The original column is here.

For years, psychologists thought we instantly label each other by ethnicity. But one intriguing study proposes this is far from inevitable, with obvious implications for tackling racism.

When we meet someone we tend to label them in certain ways. “Tall guy” you might think, or “Ugly kid”. Lots of work in social psychology suggests that some categorisations spring faster to mind. So fast, in fact, that they can be automatic. Sex is an example: we tend to notice whether someone is a man or a woman, and remember that fact, without any deliberate effort. Age is another example. You can see this in the way people talk about others. If you said you went to a party and met someone, most people wouldn’t let you continue with your story until you said whether it was a man or a woman, and there’s a good chance they’d also want to know how old they were.

Unfortunately, a swathe of evidence from the 1980s and 1990s also seemed to suggest that race is an automatic categorisation, in that people effortlessly and rapidly identified and remembered which ethnic group an individual appeared to belong to. “Unfortunate”, because if perceiving race is automatic then it lays a foundation for racism, and appears to put a limit on efforts to educate people to be “colourblind”, or put aside prejudices in other ways.

Over a decade of research failed to uncover experimental conditions that could prevent people instinctively categorising by race, until a trio of evolutionary psychologists came along with a very different take on the subject. Now, it seems only fair to say that evolutionary psychologists have a mixed reputation among psychologists. As a flavour of psychology it has been associated with political opinions that tend towards the conservative. Often, scientific racists claim to base their views on some jumbled version of evolutionary psychology (scientific racism is racism dressed up as science, not racisms based on science, in case you wondered). So it was a delightful surprise when researchers from one of the world centres for evolutionary psychology intervened in the debate on social categorisation, by conducting an experiment they claimed showed that labelling people by race was far less automatic and inevitable than all previous research seemed to show.

Powerful force

The research used something called a “memory confusion protocol”. This works by asking experiment participants to remember a series of pictures of individuals, who vary along various dimensions – for example, some have black hair and some blond, some are men, some women, etc. When participants’ memories are tested, the errors they make reveal something about how they judged the pictures of individuals – what sticks in their mind most and least. If a participant more often confuses a black-haired man with a blond-haired man, it suggests that the category of hair colour is less important than the category of gender (and similarly, if people rarely confuse a man for a woman, that also shows that gender is the stronger category).
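The logic of this error analysis can be sketched in a few lines of code. This is a hypothetical illustration of the reasoning, not the researchers’ actual analysis, and the data here are invented:

```python
def category_strength(errors, dimension):
    """Given misattribution errors as (actual_person, person_credited) pairs,
    where each person is a dict of attributes, count how often an error
    stays within vs crosses a category boundary on one dimension.
    A strongly encoded category shows mostly within-category errors:
    people mix up individuals inside a category, but rarely across it."""
    within = sum(1 for actual, credited in errors
                 if actual[dimension] == credited[dimension])
    between = len(errors) - within
    return within, between

# Invented error data: each pictured individual described by two attributes.
errors = [
    ({"gender": "m", "hair": "black"}, {"gender": "m", "hair": "blond"}),
    ({"gender": "m", "hair": "blond"}, {"gender": "m", "hair": "black"}),
    ({"gender": "f", "hair": "black"}, {"gender": "f", "hair": "blond"}),
    ({"gender": "m", "hair": "black"}, {"gender": "f", "hair": "black"}),
]

# Errors here rarely cross the gender boundary (3 within vs 1 between),
# but freely cross hair colour (1 within vs 3 between): for these
# imaginary participants, gender is the stronger category.
print(category_strength(errors, "gender"))  # (3, 1)
print(category_strength(errors, "hair"))    # (1, 3)
```

The same counting logic, applied to race versus team membership, is what let the researchers infer which category dominated participants’ perception.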

Using this protocol, the researchers tested the strength of categorisation by race, something all previous efforts had shown was automatic. The twist they added was to throw in another powerful psychological force – group membership. People had to remember individuals who wore either yellow or grey basketball shirts, and whose pictures were presented alongside statements indicating which team they were in. Without the shirts, the pattern of errors were clear: participants automatically categorised the individuals by their race (in this case: African American or Euro American). But with the coloured shirts, this automatic categorisation didn’t happen: people’s errors revealed that team membership had become the dominant category, not the race of the players.

It’s important to understand that the memory test was both a surprise – participants didn’t know it was coming up – and an unobtrusive measure of racial categorising. Participants couldn’t guess that the researchers were going to make inferences about how they categorised people in the pictures – so if they didn’t want to appear to perceive people on the basis of race, it wouldn’t be clear how they should change their behaviour to do this. Because of this we can assume we have a fairly direct measure of their real categorisation, unbiased by any desire to monitor how they appear.

So despite what dozens of experiments had appeared to show, this experiment created a situation where categorisation by race faded into the background. The explanation, according to the researchers, is that race is only important when it might indicate coalitional information – that is, whose team you are on. In situations where race isn’t correlated with coalition, it ceases to be important. This, they claim, makes sense from an evolutionary perspective. For most of our ancestors, age and gender would be important predictors of another person’s behaviour, but race wouldn’t – since most people lived in areas with no differences as large as the ones we associate with “race” today (a concept, incidentally, which has little currency among human biologists).

Since the experiment was published, the response from social psychologists has been muted. But supporting evidence is beginning to be reported, suggesting that the finding will hold. It’s an unfortunate fact of human psychology that we are quick to lump people into groups, even on the slimmest evidence. And once we’ve identified a group, it also seems automatic to jump to conclusions about what they are like. But this experiment suggests that although perceiving groups on the basis of race might be easy, it is far from inevitable.

What does it take to spark prejudice?

Short answer: surprisingly little. Continuing the theme of revisiting classic experiments in psychology, last week’s BBC Future column was on Tajfel’s Minimal Group Paradigm. The original is here. Next week we’re going to take this foundation and look at some evolutionary psychology of racism (hint: it won’t be what you’d expect).

How easy is it for the average fair-minded person to form biased, preconceived views within groups? Surprisingly easy, according to psychology studies.

One of the least charming but most persistent aspects of human nature is our capacity to hate people who are different. Racism, sexism, ageism, it seems like all the major social categories come with their own “-ism”, each fuelled by regrettable prejudice and bigotry.

Our tendency for groupness appears to be so strong there seems little more for psychology to teach us. It’s not as if we need it proven that favouring our group over others is a common part of how people think – history provides all the examples we need. But one psychologist, Henri Tajfel, taught us something important. He showed exactly how little encouragement we need to treat people in a biased way because of the group they are in.

Any phenomenon like this in the real world comes entangled with a bunch of other, complicating phenomena. When we see prejudice in the everyday world it is hard to separate out psychological biases from the effects of history, culture and even pragmatism (sometimes people from other groups really are out to get you).

As a social psychologist, Tajfel was interested in the essential conditions of group prejudice. He wanted to know what it took to turn the average fair-minded human into their prejudiced cousin.

He wanted to create a microscope for looking at how we think when we’re part of a group, even when that group has none of the history, culture or practical importance that groups normally do. To look at this, he devised what has become known as the “minimal group paradigm”.

The minimal group paradigm works like this: participants in the experiment are divided into groups on some arbitrary basis. Maybe eye-colour, maybe what kind of paintings they like, or even by tossing a coin. It doesn’t matter what the basis for group membership is, as long as everyone gets a group and knows what it is. After being told they are in a group, participants are divided up so that they are alone when they make a series of choices about how rewards will be shared among other people in the groups. From this point on, group membership is entirely abstract. Nobody else can be seen, and other group members are referred to by an anonymous number. Participants make choices such as “Member Number 74 (group A) to get 10 points and Member 44 (group B) to get 8 points”, versus “Member Number 74 (group A) to get 2 points and Member 44 (group B) to get 6 points”, where the numbers are points which translate into real money.

You won’t be surprised to learn that participants show favouritism towards their own group when dividing the money. People in group A were more likely to choose the first option I gave above, rather than the second. What is more surprising is that people show some of this group favouritism even when it ends up costing them points – so people in group B sometimes choose the second option, or options like it, even though it provides fewer points than the first option. People tend to opt for the maximum total reward (as you’d expect from the fair-minded citizen), but they also show a tendency to maximise the difference between the groups (what you’d expect from the prejudiced cousin).
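The arithmetic of those two options makes the conflict between fairness and favouritism concrete. A minimal sketch, using the point values from the example above:

```python
# The two allocation options from the column, seen from the perspective
# of a participant who happens to be in group B. Each option assigns
# points to an anonymous group A member and an anonymous group B member.
options = {
    "option 1": {"A": 10, "B": 8},  # larger total, but A comes out ahead
    "option 2": {"A": 2, "B": 6},   # smaller total, but B comes out ahead
}

for name, alloc in options.items():
    total = alloc["A"] + alloc["B"]
    ingroup_advantage = alloc["B"] - alloc["A"]  # from B's point of view
    print(f"{name}: total = {total}, B's relative advantage = {ingroup_advantage}")

# A purely fair-minded chooser maximises the total reward: option 1
# (18 points vs 8). But Tajfel's participants often leaned towards
# maximising their group's relative advantage: option 2 puts group B
# 4 points ahead, where option 1 leaves it 2 points behind - even
# though option 2 hands out fewer points overall.
```

The tension between the two strategies is exactly what the paradigm was designed to expose: the "prejudiced cousin" shows up in the willingness to sacrifice absolute points for relative advantage.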

The effect may be small, but this is a situation where the groups have been plucked out of the air by the experimenters. Every participant knows which group he or she is in, but they also know that they weren’t in this group before they started the experiment, that their assignment was arbitrary or completely random, and that the groups aren’t going to exist in any meaningful way after the experiment. They also know that their choices won’t directly affect them (they are explicitly told that they won’t be given any choices to make about themselves). Even so, this situation is enough to evoke favouritism.

So, it seems we’ll take the most minimal of signs as a cue to treat people differently according to which group they are in. Tajfel’s work suggests that in-group bias is as fundamental to thinking as the act of categorisation itself. If we want to contribute to a fairer world we need to be perpetually on guard to avoid letting this instinct run away with itself.

BBC Column: Why cyclists enrage car drivers

Here is my latest BBC Future column. The original is here. This one proved to be more than usually controversial, not least because of some poorly chosen phrasing from yours truly. This is an updated version which makes what I’m trying to say clearer. If you think that I hate cyclists, or my argument relies on the facts of actual law breaking (by cyclists or drivers), or that I am making a claim about the way the world ought to be (rather than how people see it), then please check out this clarification I published on my personal blog after a few days of feedback from the column. One thing the experience has convinced me of is that cycling is a very emotional issue, and one people often interpret in very moral terms.

It’s not simply because they are annoying, argues Tom Stafford, it’s because they trigger a deep-seated rage within us by breaking the moral order of the road.


Something about cyclists seems to provoke fury in other road users. If you doubt this, try a search for the word “cyclist” on Twitter. As I write this one of the latest tweets is this: “Had enough of cyclists today! Just wanna ram them with my car.” This kind of sentiment would get people locked up if directed against an ethnic minority or religion, but it seems to be fair game, in many people’s minds, when directed against cyclists. Why all the rage?

I’ve got a theory, of course. It’s not because cyclists are annoying. It isn’t even because we have a selective memory for that one stand-out annoying cyclist over the hundreds of boring, non-annoying ones (although that probably is a factor). No, my theory is that motorists hate cyclists because they offend the moral order.

Driving is a very moral activity – there are rules of the road, both legal and informal, and there are good and bad drivers. The whole intricate dance of the rush-hour junction only works because everybody knows the rules and follows them: keeping in lane; indicating properly; first her turn, now mine, now yours. Then along come cyclists, innocently following what they see as the rules of the road, but doing things that drivers aren’t allowed to: overtaking queues of cars, moving at well below the speed limit or undertaking on the inside.

You could argue that driving is like so much of social life: it’s a game of coordination where we have to rely on each other to do the right thing. And like all games, there’s an incentive to cheat. If everyone else is taking their turn, you can jump the queue. If everyone else is paying their taxes you can dodge them, and you’ll still get all the benefits of roads and police.

In economics and evolution this is known as the “free rider problem”; if you create a common benefit  – like taxes or orderly roads – what’s to stop some people reaping the benefit without paying their dues? The free rider problem creates a paradox for those who study evolution, because in a world of selfish genes it appears to make cooperation unlikely. Even if a bunch of selfish individuals (or genes) recognise the benefit of coming together to co-operate with each other, once the collective good has been created it is rational, in a sense, for everyone to start trying to freeload off the collective. This makes any cooperation prone to collapse. In small societies you can rely on cooperating with your friends, or kin, but as a society grows the problem of free-riding looms larger and larger.

Social collapse

Humans seem to have evolved one way of enforcing order onto potentially chaotic social arrangements. This is known as “altruistic punishment”, a term used by Ernst Fehr and Simon Gachter in a landmark paper published in 2002 [4]. An altruistic punishment is a punishment that costs you as an individual, but doesn’t bring any direct benefit. As an example, imagine I’m at a football match and I see someone climb in without buying a ticket. I could sit and enjoy the game (at no cost to myself), or I could try to find security to have the guy thrown out (at the cost of missing some of the game). That would be altruistic punishment.

Altruistic punishment, Fehr and Gachter reasoned, might just be the spark that makes groups of unrelated strangers co-operate. To test this they created a co-operation game played by constantly shifting groups of volunteers, who never meet – they played the game from a computer in a private booth. The volunteers played for real money, which they knew they would take away at the end of the experiment. On each round of the game each player received 20 credits, and could choose to contribute up to this amount to a group project. After everyone had chipped in (or not), everybody (regardless of investment) got 40% of the collective pot.

Under the rules of the game, the best collective outcome would be if everyone put in all their credits, and then each player would get back more than they put in. But the best outcome for each individual was to free ride – to keep their original 20 credits, and also get the 40% of what everybody else put in. Of course, if everybody did this then that would be 40% of nothing.
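The payoff arithmetic is easy to check. Here is a minimal sketch, assuming a group of four players (the column doesn’t state the group size, so that is an assumption on my part):

```python
def payoffs(contributions, endowment=20, multiplier=0.4):
    """Each player keeps whatever they didn't contribute, and every
    player - contributor or free rider alike - receives `multiplier`
    times the collective pot."""
    pot = sum(contributions)
    return [endowment - c + multiplier * pot for c in contributions]

# If all four players contribute everything, everyone profits:
# pot = 80, each gets 0 + 0.4 * 80 = 32, up from an endowment of 20.
print(payoffs([20, 20, 20, 20]))  # [32.0, 32.0, 32.0, 32.0]

# A lone free rider does even better, at the co-operators' expense:
# they keep their 20 and still collect 0.4 * 60 = 24 from the pot.
print(payoffs([0, 20, 20, 20]))   # [44.0, 24.0, 24.0, 24.0]

# But if everybody reasons that way, the collective benefit vanishes:
print(payoffs([0, 0, 0, 0]))      # [20.0, 20.0, 20.0, 20.0]
```

The middle case is the whole free rider problem in miniature: whatever the others do, contributing nothing is individually best, yet universal free riding leaves everyone with just their original endowment.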

In this scenario what happened looked like a textbook case of the kind of social collapse the free rider problem warns of. On each successive turn of the game, the average amount contributed by players went down and down. Everybody realised that they could get the benefit of the collective pot without the cost of contributing. Even those who started out contributing a large proportion of their credits soon found out that not everybody else was doing the same. And once you see this it’s easy to stop chipping in yourself – nobody wants to be the sucker.

Rage against the machine

A simple addition to the rules reversed this collapse of co-operation, and that was the introduction of altruistic punishment. Fehr and Gachter allowed players to fine other players credits, at a cost to themselves. This is true altruistic punishment because the groups change after each round, and the players are anonymous. There may have been no direct benefit to fining other players, but players fined often and they fined hard – and, as you’d expect, they chose to fine other players who hadn’t chipped in on that round. The effect on cooperation was electric. With altruistic punishment, the average amount each player contributed rose and rose, instead of declining. The fine system allowed cooperation between groups of strangers who wouldn’t meet again, overcoming the challenge of the free rider problem.

How does this relate to why motorists hate cyclists? The key is in a detail from that classic 2002 paper. Did the players in this game sit there calmly calculating the odds, running game theory scenarios in their heads and reasoning about cost/benefit ratios? No, that wasn’t the immediate reason people fined players. They dished out fines because they were mad as hell. Fehr and Gachter, like the good behavioural experimenters they are, made sure to measure exactly how mad that was, by asking players to rate their anger on a scale of one to seven in reaction to various scenarios. When players were confronted with a free-rider, almost everyone put themselves at the upper end of the anger scale. Fehr and Gachter describe these emotions as a “proximate mechanism”. This means that evolution has built into the human mind a hatred of free-riders and cheaters, which activates anger when we confront people acting like this – and it is this anger which prompts altruistic punishment. In this way, the emotion is evolution’s way of getting us to overcome our short-term self-interest and encourage collective social life.

So now we can see why there is an evolutionary pressure pushing motorists towards hatred of cyclists. Deep within the human psyche, fostered there because it helps us co-ordinate with strangers and so build the global society that is a hallmark of our species, is an anger at people who break the rules, who take the benefits without contributing to the cost. And cyclists trigger this anger when they use the roads but don’t follow the same rules as cars.

Now cyclists reading this might think “but the rules aren’t made for us – we’re more vulnerable, discriminated against, we shouldn’t have to follow the rules.” Perhaps true, but irrelevant when other road-users see you breaking rules they have to keep. Maybe the solution is to educate drivers that cyclists are playing an important role in a wider game of reducing traffic and pollution. Or maybe we should just all take it out on a more important class of free-riders, the tax-dodgers.

BBC Column: Are we naturally good or bad?

My BBC Future column from last week. The original is here. I started out trying to write about research using economic games with apes and monkeys but I got so bogged down in the literature I switched to this neat experiment instead. Ed Yong is a better man than me and wrote a brilliant piece about that research, which you can find here.

It’s a question humanity has repeatedly asked itself, and one way to find out is to take a closer look at the behaviour of babies… and use puppets.

Fundamentally speaking, are humans good or bad? It’s a question that has been asked repeatedly throughout human history. For thousands of years, philosophers have debated whether we have a basically good nature that is corrupted by society, or a basically bad nature that is kept in check by society. Psychology has uncovered some evidence which might give the old debate a twist.

One way of asking about our most fundamental characteristics is to look at babies. Babies’ minds are a wonderful showcase for human nature. Babies are humans with the absolute minimum of cultural influence – they don’t have many friends, have never been to school and haven’t read any books. They can’t even control their own bowels, let alone speak the language, so their minds are as close to innocent as a human mind can get.

The only problem is that the lack of language makes it tricky to gauge their opinions. Normally we ask people to take part in experiments, giving them instructions or asking them to answer questions, both of which require language. Babies may be cuter to work with, but they are not known for their obedience. What’s a curious psychologist to do?

Fortunately, you don’t necessarily have to speak to reveal your opinions. Babies will reach for things they want or like, and they will tend to look longer at things that surprise them. Ingenious experiments carried out at Yale University in the US used these measures to look at babies’ minds. Their results suggest that even the youngest humans have a sense of right and wrong, and, furthermore, an instinct to prefer good over evil.

How could the experiments tell this? Imagine you are a baby. Since you have a short attention span, the experiment will be shorter and loads more fun than most psychology experiments. It was basically a kind of puppet show; the stage was a scene featuring a bright green hill, and the puppets were cut-out shapes with stick-on wobbly eyes: a triangle, a square and a circle, each in its own bright colour. What happened next was a short play, as one of the shapes tried to climb the hill, struggling up and falling back down again. Next, the other two shapes got involved, with either one helping the climber up the hill, by pushing up from behind, or the other hindering the climber, by pushing back from above.

Already something amazing, psychologically, is going on here. All humans are able to interpret the events in the play in terms of the story I’ve described. The puppets are just shapes. They don’t make human sounds or display human emotions. They just move about, and yet everyone reads these movements as purposeful, and revealing of their characters. You can argue that this “mind reading”, even in infants, shows that it is part of our human nature to believe in other minds.

Great expectations

What happened next tells us even more about human nature. After the show, infants were given the choice of reaching for either the helping or the hindering shape, and it turned out they were much more likely to reach for the helper. This can be explained if they are reading the events of the show in terms of motivations – the shapes aren’t just moving at random; they show the infant that the shape pushing uphill “wants” to help out (and so is nice) and the shape pushing downhill “wants” to cause problems (and so is nasty).

The researchers used an encore to confirm these results. Infants saw a second scene in which the climber shape made a choice to move towards either the helper shape or the hinderer shape. The time infants spent looking in each of the two cases revealed what they thought of the outcome. If the climber moved towards the hinderer the infants looked significantly longer than if the climber moved towards the helper. This makes sense if the infants were surprised when the climber approached the hinderer. Moving towards the helper shape would be the happy ending, and obviously it was what the infant expected. If the climber moved towards the hinderer it was a surprise, as much as you or I would be surprised if we saw someone give a hug to a man who had just knocked him over.

The way to make sense of this result is that infants, with their pre-cultural brains, had expectations about how people should act. Not only do they interpret the movement of the shapes as resulting from motivations, but they prefer helping motivations over hindering ones.

This doesn’t settle the debate over human nature. A cynic would say that it just shows that infants are self-interested and expect others to be the same way. At a minimum though, it shows that tightly bound into the nature of our developing minds is the ability to make sense of the world in terms of motivations, and a basic instinct to prefer friendly intentions over malicious ones. It is on this foundation that adult morality is built.