What’s the evidence on using rational argument to change people’s minds?

Contributoria is an experiment in community-funded, collaborative journalism. What that means is that you can propose an article you’d like to write, and back proposals by others that you’d like to see written. There’s an article I’d like to write: What’s the evidence on using rational argument to change people’s minds? Here’s something from the proposal:

Is it true that “you can’t tell anybody anything”? From pub arguments to ideology-driven party political disputes it can sometimes seem like people have their minds all made up, that there’s no point trying to persuade anybody of anything. Popular psychology books reinforce the idea that we’re emotional, irrational creatures (Dan Ariely’s Predictably Irrational, David McRaney’s You Are Not So Smart). This piece will be 2000 words on the evidence from psychological science about persuasion by rational argument.

If the proposal is backed it will give me a chance to look at the evidence on questions like whether political extremism is supported by an illusion of explanatory depth (and how that can be corrected), and on how we should treat all those social psychology priming experiments which suggest that our opinions can be pushed about by irrelevant factors, such as the weight of a clipboard we’re holding.

All you need to do to back proposals, currently, is sign up for the site. You can see all current proposals here. Written articles are Creative Commons licensed.

Back the proposal: What’s the evidence on using rational argument to change people’s minds?

Full disclosure: I’ll be paid by Contributoria if the proposal is backed.

Update: Backed! That was quick! Many thanks, mindhacks.com readers! I’d better get reading and writing now…

Why Christmas rituals make tasty food

All of us carry out rituals in our daily lives, whether it is shaking hands or clinking glasses before we drink. At this time of year, the performance of customs and traditions is widespread – from sharing crackers, to pulling the wishbone on the turkey and lighting the Christmas pudding.

These rituals might seem like light-hearted traditions, but I’m going to try to persuade you that they are echoes of our evolutionary history, something which can tell us about how humans came to relate to each other before we had language. And the story starts by exploring how rituals can make our food much tastier.

In recent years, studies have suggested that performing small rituals can influence people’s enjoyment of what they eat. In one experiment, Kathleen Vohs from the University of Minnesota and colleagues explored how ritual affected people’s experience of eating a chocolate bar. Half of the people in the study were instructed to relax for a moment and then eat the chocolate bar as they normally would. The other half were given a simple ritual to perform, which involved breaking the chocolate bar in half while it was still inside its wrapper, and then unwrapping each half and eating it in turn.

Something about carefully following these instructions before eating the chocolate bar had a dramatic effect. People who had focused on the ritual said they enjoyed eating the chocolate more, rating the experience 15% higher than the control group. They also spent longer eating the chocolate, savouring the flavour for 50% longer than the control group. Perhaps most persuasively, they also said they would pay almost twice as much for such a chocolate.

This experiment shows that a small act can significantly increase the value we get from a simple food experience. Vohs and colleagues went on to test the next obvious question – how exactly do rituals work this magic? Repeating the experiment, they asked participants to describe and rate the act of eating the chocolate bar. Was it fun? Boring? Interesting? This seemed to be a critical variable – those participants who were made to perform the ritual rated the experience as more fun, less boring and more interesting. Statistical analysis showed that this was the reason they enjoyed the chocolate more, and were more willing to pay extra.

So, rituals appear to make people pay attention to what they are doing, allowing them to concentrate their minds on the positives of a simple pleasure. But could there be more to rituals? Given that they appear in many realms of life that have nothing to do with food – from religious services to presidential inaugurations – could their performance have deeper roots in our evolutionary history? Attempting to answer the question takes us beyond the research I’ve been discussing so far and into the complex and controversial debate about the evolution of human nature.

In his book, The Symbolic Species, Terrence Deacon claims that ritual played a special role in human evolution, in particular, at the transition point where we began to acquire the building blocks of language. Deacon’s argument is that the very first “symbols” we used to communicate, the things that became the roots of human language, can’t have been anything like the words we use so easily and thoughtlessly today. He argues that these first symbols would have been made up of extended, effortful and complex sequences of behaviours performed in a group – in other words, rituals. These symbols were needed because of the way early humans arranged their family groups and, in particular, shared the products of hunting. Early humans needed a way to tell each other who had what responsibilities and which privileges; who was part of the family, and who could share the food, for instance. These ideas are particularly hard to refer to by pointing. Rituals, says Deacon, were the evolutionary answer to the conundrum of connecting human groups and checking they had a shared understanding of how the group worked.

If you buy this evolutionary story – and plenty don’t – it gives you a way to understand why exactly our minds might have a weakness for ritual. A small ritual makes food more enjoyable, but why does it have that effect? Deacon’s answer is that our love of rituals evolved alongside our need to share food: early humans who enjoyed rituals had more offspring. I speculate that an easy shortcut for evolution to find, if it needed to make us enjoy rituals, was to wire our minds so that performing a ritual makes the food itself more enjoyable.

So, for those sitting down with family this holiday, don’t skip the traditional rituals – sing the songs, pull the crackers, clink the glasses and listen to Uncle Vinnie repeat his funny anecdotes for the hundredth time. The rituals will help you enjoy the food more, and carry with them an echo of our long history as a species, and all the feasts the tribe shared before there even was Christmas.

This is my latest column for BBC Future. You can see the original here. Merry Christmas y’all!

Why the stupid think they’re smart

Psychologists have shown humans are poor judges of their own abilities, from sense of humour to grammar. Those worst at a task tend to be the worst judges of their own performance.

You’re pretty smart right? Clever, and funny too. Of course you are, just like me. But wouldn’t it be terrible if we were mistaken? Psychologists have shown that we are more likely to be blind to our own failings than perhaps we realise. This could explain why some incompetent people are so annoying, and also inject a healthy dose of humility into our own sense of self-regard.

In 1999, Justin Kruger and David Dunning, from Cornell University, New York, tested whether people who lack the skills or abilities for something are also more likely to lack awareness of their lack of ability. At the start of their research paper they cite a Pittsburgh bank robber called McArthur Wheeler as an example, who was arrested in 1995 shortly after robbing two banks in broad daylight without wearing a mask or any other kind of disguise. When police showed him the security camera footage, he protested “But I wore the juice”. The hapless criminal believed that if you rubbed your face with lemon juice you would be invisible to security cameras.

Kruger and Dunning were interested in testing another kind of laughing matter. They asked professional comedians to rate 30 jokes for funniness. Then, 65 undergraduates were asked to rate the jokes too, and then ranked according to how well their judgements matched those of the professionals. They were also asked how well they thought they had done compared to the average person.

As you might expect, most people thought their ability to tell what was funny was above average. The results were, however, most interesting when split according to how well participants performed. Those slightly above average in their ability to rate jokes were highly accurate in their self-assessment, while those who actually did the best tended to think they were only slightly above average. Participants who were least able to judge what was funny (at least according to the professional comics) were also least able to accurately assess their own ability.

This finding was not a quirk of trying to measure subjective sense of humour. The researchers repeated the experiment, only this time with tests of logical reasoning and grammar. These disciplines have defined answers, and in each case they found the same pattern: those people who performed the worst were also the worst in estimating their own aptitude. In all three studies, those whose performance put them in the lowest quarter massively overestimated their own abilities by rating themselves as above average.
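The quartile analysis behind these findings is easy to sketch. The data below are invented for illustration (generated so that self-estimates hover around “above average” and track actual skill only weakly) and are not Kruger and Dunning’s; but they show how, under those assumptions, the bottom quarter overestimates massively while the top quarter underestimates:

```python
import random
random.seed(1)

# Illustrative sketch of a quartile analysis (all numbers invented).
# Assume everyone guesses they are somewhere around the 65th percentile,
# nudged only weakly by their actual skill, plus some noise.
N = 1000
actual = [random.random() for _ in range(N)]          # true percentile, 0-1
guess = [min(1.0, max(0.0, 0.65 + 0.2 * (a - 0.5) + random.gauss(0, 0.1)))
         for a in actual]

# Split participants into quartiles by actual performance.
ranked = sorted(range(N), key=lambda i: actual[i])
quartiles = [ranked[i * N // 4:(i + 1) * N // 4] for i in range(4)]

for q, idx in enumerate(quartiles, start=1):
    mean_actual = sum(actual[i] for i in idx) / len(idx)
    mean_guess = sum(guess[i] for i in idx) / len(idx)
    print(f"quartile {q}: actual {mean_actual:.2f}, self-estimate {mean_guess:.2f}")

# Bottom quartile: actual around 0.12, self-estimate around 0.58 ("above average").
# Top quartile: actual around 0.88, self-estimate only around 0.72.
```

The point of the sketch is only to make the analysis concrete; whether weak self-insight of this kind is the right model of the data is exactly what the subsequent studies set out to test.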

It didn’t even help the poor performers to be given a benchmark. In a later study, the most incompetent participants still failed to realise they were bottom of the pack even when given feedback on the performance of others.

Kruger and Dunning’s interpretation is that accurately assessing skill level relies on some of the same core abilities as actually performing that skill, so the least competent suffer a double deficit. Not only are they incompetent, but they lack the mental tools to judge their own incompetence.

In a key final test, Kruger and Dunning trained a group of poor performers in logical reasoning. The training improved not only their performance but also the accuracy of their self-assessments, suggesting that ability levels really do underpin self-awareness.

Other research has shown that this “unskilled and unaware of it” effect holds in real-life situations, not just in abstract laboratory tests. For example, hunters who know the least about firearms also have the most inaccurate view of their firearm knowledge, and doctors with the worst patient-interviewing skills are the least likely to recognise their inadequacies.

What has become known as the Dunning-Kruger effect is an example of what psychologists call metacognition – thinking about thinking. It’s also something that should give us all pause for thought. The effect might just explain the apparently baffling self-belief of some of your friends and colleagues. But before you start getting too smug, just remember one thing. As unlikely as you might think it is, you too could be walking around blissfully ignorant of your ignorance.

This is my BBC Future column from last week. The original is here.

Does studying economics make you more selfish?

When economics students learn about what makes fellow humans tick it affects the way they treat others. Not necessarily in a good way, as Tom Stafford explains.

Studying human behaviour can be like a dog trying to catch its own tail. As we learn more about ourselves, our new beliefs change how we behave. Research on economics students showed this in action: textbooks describing facts and theories about human behaviour can affect the people studying them.

Economic models are often based on an imaginary character called the rational actor, who, with no messy and complex inner world, relentlessly pursues a set of desires ranked according to the costs and benefits. Rational actors help create simple models of economies and societies. According to rational choice theory, some of the predictions governing these hypothetical worlds are common sense: people should prefer more to less, firms should only do things that make a profit and, if the price is right, you should be prepared to give up anything you own.

Another tool used to help us understand our motivations and actions is game theory, which examines how you make choices when their outcomes are affected by the choices of others. To determine which of a number of options to go for, you need a theory about what the other person will do (and your theory needs to encompass the other person’s theory about what you will do, and so on). Rational actor theory says other players in the game all want the best outcome for themselves, and that they will assume the same about you.

The most famous game in game theory is the “prisoner’s dilemma”, in which you are one of a pair of criminals arrested and held in separate cells. The police make you this offer: you can inform on your partner, in which case you either get off scot free (if your partner keeps quiet), or you both get a few years in prison (if he informs on you too). Alternatively you can keep quiet, in which case you either get a short sentence (if your partner also keeps quiet), or you get a long sentence (if he informs on you, leading to him getting off scot free). Your partner, of course, faces exactly the same choice.

If you’re a rational actor, it’s an easy decision. You should inform on your partner in crime: if he keeps quiet, you go free rather than serving a short sentence, and if he informs on you, you get a few years rather than a long stretch. Whatever he does, informing leaves you better off.
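Those payoffs can be written down directly. The sentence lengths below are illustrative numbers chosen to fit the story (the column gives no exact figures), but they make the logic mechanical: whatever your partner does, informing never lengthens your sentence:

```python
# A minimal sketch of the prisoner's dilemma payoffs described above.
# Years in prison are illustrative: 0 = scot free, 1 = short sentence,
# 3 = a few years, 10 = a long sentence. Tuples are (your years, partner's years).
SENTENCE = {
    ("inform", "quiet"):  (0, 10),   # you go free, partner gets a long stretch
    ("quiet",  "quiet"):  (1, 1),    # pact of silence: both get short sentences
    ("inform", "inform"): (3, 3),    # mutual betrayal: both get a few years
    ("quiet",  "inform"): (10, 0),   # you take the long sentence
}

def best_reply(partner_choice):
    """Return the choice that minimises your own sentence, given the partner's choice."""
    return min(["inform", "quiet"],
               key=lambda me: SENTENCE[(me, partner_choice)][0])

# Informing is the dominant strategy: it is the best reply to either choice.
assert best_reply("quiet") == "inform"   # 0 years beats 1 year
assert best_reply("inform") == "inform"  # 3 years beats 10 years
```

The dilemma, of course, is that if both players follow this impeccable reasoning they each serve a few years, when mutual silence would have served them both better.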

Weirdly, and thankfully, this isn’t what happens if you ask real people to play the prisoner’s dilemma. Around the world, in most societies, most people maintain the criminals’ pact of silence. The exceptions who opt to act solely in their own interests are known in economics as “free riders” – individuals who take benefits without paying costs.

Self(ish)-selecting group

The prisoner’s dilemma is a theoretical tool, but there are plenty of parallel choices – and free riders – in the real world. People who are always late for appointments never have to hurry or wait for anyone else. Some use roads and hospitals without paying their taxes. There are lots of interesting reasons why most of us turn up on time and don’t avoid paying taxes, even though these might be the selfish “rational” choices according to most economic models.

Crucially, rational actor theory appears more useful for predicting the actions of certain groups of people. One group who have been found to free ride more than others in repeated studies is people who have studied economics. In a study published in 1993, Robert Frank and colleagues from Cornell University, in Ithaca, New York State, tested this idea with a version of the prisoner’s dilemma game. Economics students “informed on” other players 60% of the time, while those studying other subjects did so 39% of the time. Men have previously been found to be more self-interested in such tests, and more men study economics than women. However even after controlling for this sex difference, Frank found economics students were 17% more likely to take the selfish route when playing the prisoner’s dilemma.

In good news for educators everywhere, the team found that the longer students had been at university, the higher their rates of cooperation. In other words, higher education (or simply growing up) seemed to make people more likely to put their faith in human co-operation. The economists again proved to be the exception. For them, extra years of study did nothing to undermine their selfish rationality.

Frank’s group then went on to carry out surveys on whether students would return money they had found or report being undercharged, both at the start and end of their courses. Economics students were more likely to see themselves and others as more self-interested following their studies than a control group studying astronomy. This was especially true among those studying under a tutor who taught game theory and focused on notions of survival imperatives militating against co-operation.

Subsequent work has questioned these findings, suggesting that selfish people are just more likely to study economics, and that Frank’s surveys and games tell us little about real-world moral behaviour. It is true that what individuals do in the highly artificial situation of being presented with the prisoner’s dilemma doesn’t necessarily tell us how they will behave in more complex real-world situations.

In related work, Eric Schwitzgebel has shown that students and teachers of ethical philosophy don’t seem to behave more ethically when their behaviour is assessed using a range of real-world variables. Perhaps, says Schwitzgebel, we shouldn’t be surprised that economics students who have been taught about the prisoner’s dilemma act in line with what they’ve been taught when tested in a classroom. Again, this is a long way from showing any influence on real-world behaviour, some argue.

The lessons of what people do in tests and games are limited because of the additional complexities involved in real-world moral choices with real and important consequences. Yet I hesitate to dismiss the results of these experiments. We shouldn’t leap to conclusions based on the few simple experiments that have been done, but if we tell students that it makes sense to see the world through the eyes of the selfish rational actor, my suspicion is that they are more likely to do so.

Multiple factors influence our behaviour, of which formal education is just one. Economics and economic opinions are also prominent throughout the news media, for instance. But what the experiments above demonstrate, in one small way at least, is that what we are taught about human behaviour can alter it.

This is my column from BBC Future last week. You can see the original here. Thanks to Eric for some references and comments on this topic.

The effect of diminished belief in free will

Studies have shown that people who are led to believe their actions are determined by forces beyond their control often behave worse than those who believe they choose freely.

Are you reading this because you chose to? Or are you doing so as a result of forces beyond your control?

After thousands of years of philosophy, theology, argument and meditation on the riddle of free will, I’m not about to solve it for you in this column (sorry). But what I can do is tell you about some thought-provoking experiments by psychologists, which suggest that, regardless of whether we have free will or not, whether we believe we do can have a profound impact on how we behave.

The issue is simple: we all make choices, but could those choices be made otherwise? From a religious perspective it might seem as if a divine being knows all, including knowing in advance what you will choose (so your choices could not be otherwise). Or we can take a physics-based perspective. Everything in the universe has physical causes, and as you are part of the universe, your choices must be caused (so your choices could not be otherwise). In either case, our experience of choosing collides with our faith in a world which makes sense because things have causes.

Consider for a moment how you would research whether a belief in free will affects our behaviour. There’s no point comparing the behaviour of people with different fixed philosophical perspectives. You might find that determinists, who believe free will is an illusion and that we are all cogs in a godless universe, behave worse than those who believe we are free to make choices. But you wouldn’t know whether this was simply because people who like to cheat and lie become determinists (the “Yes, I lied, but I couldn’t help it” excuse).

What we really need is a way of changing people’s beliefs about free will, so that we can track the effects of doing so on their behaviour. Fortunately, in recent years researchers have developed a standard method of doing this. It involves asking subjects to read sections from Francis Crick’s book The Astonishing Hypothesis. Crick was one of the co-discoverers of DNA’s double-helix structure, for which he was awarded the Nobel prize. Later in his career he left molecular biology and devoted himself to neuroscience. The hypothesis in question is his belief that our mental life is entirely generated by the physical stuff of the brain. One passage states that neuroscience has killed the idea of free will, an idea that most rational people, including most scientists, now believe is an illusion.

Psychologists have used this section of the book, or sentences taken from it or inspired by it, to induce feelings of determinism in experimental subjects. A typical study asks people to read and think about a series of sentences such as “Science has demonstrated that free will is an illusion”, or “Like everything else in the universe, all human actions follow from prior events and ultimately can be understood in terms of the movement of molecules”.

The effects on study participants are generally compared with those of other people asked to read sentences that assert the existence of free will, such as “I have feelings of regret when I make bad decisions because I know that ultimately I am responsible for my actions”, or texts on topics unrelated to free will.

And the results are striking. One study reported that participants who had their belief in free will diminished were more likely to cheat in a maths test. In another, US psychologists reported that people who read Crick’s thoughts on free will said they were less likely to help others.

Bad taste

A follow-up to this study used an ingenious method to test this via aggression to strangers. Participants were told a cover story about helping the experimenter prepare food for a taste test to be taken by a stranger. They were given the results of a supposed food preference questionnaire which indicated that the stranger liked most foods but hated hot food. Participants were also given a jar of hot sauce. The critical measure was how much of the sauce they put into the taste-test food. Putting in less sauce, when they knew that the taster didn’t like hot food, meant they scored more highly for what psychologists call “prosociality”, or what everyone else calls being nice.

You’ve guessed it: Participants who had been reading about how they didn’t have any free will chose to give more hot sauce to the poor fictional taster – twice as much, in fact, as those who read sentences supporting the idea of freedom of choice and responsibility.

In a recent study carried out at the University of Padova, Italy, researchers recorded the brain activity of participants who had been told to press a button whenever they wanted. This showed that people whose belief in free will had taken a battering thanks to reading Crick’s views showed a weaker signal in areas of the brain involved in preparing to move. In another study by the same team, volunteers carried out a series of on-screen tasks designed to test their reaction times, self control and judgement. Those told free will didn’t exist were slower, and more likely to go for easier and more automatic courses of action.

This is a young research area. We still need to check that individual results hold up, but taken all together these studies show that our belief in free will isn’t just a philosophical abstraction. We are less likely to behave ethically and kindly if our belief in free will is diminished.

This puts an extra burden of responsibility on philosophers, scientists, pundits and journalists who use evidence from psychology or neuroscience experiments to argue that free will is an illusion. We need to be careful about what stories we tell, given what we know about the likely consequences.

Fortunately, the evidence shows that most people have a sense of their individual freedom and responsibility that is resistant to being overturned by neuroscience. Those sentences from Crick’s book claim that most scientists believe free will to be an illusion. My guess is that most scientists would want to define what exactly is meant by free will, and to examine the various versions of free will on offer, before they agree whether it is an illusion or not.

If the last few thousand years are any guide, the debate about free will may rumble on and on. But whether the outcome is inevitable or not, these results show that how we think about the way we think can have a profound effect on us, and on others.

This was published on BBC Future last week. See the original, ‘Does non-belief in free will make us better or worse?‘ (it is identical apart from the title, and there’s a nice picture on that site). If neuroscience and the free will debate float your boat, you can check out this video of the Sheffield Salon on the topic “‘My Brain Made Me Do It’ – have neuroscience and evolutionary psychology put free will on the slab?“. I’m the one on the left.

A war of biases

Here’s an interesting take on terrorism as a fundamentally audience-focused activity – one that relies on causing fear to achieve political ends – and on whether citizen-led community monitoring schemes actually amplify those effects rather than make us feel safer.

It’s from an article just published in the Journal of Police and Criminal Psychology by political scientist Alex Braithwaite:

A long-held premise in the literature on terrorism is that the provocation of a sense of fear within a mass population is the mechanism linking motivations for the use of violence with the anticipated outcome of policy change. This assumption is the pivot point upon and around which most theories of terrorism rest and revolve. Martha Crenshaw, for instance, claims, the ‘political effectiveness of terrorism is importantly determined by the psychological effects of violence on audiences’…

Terrorists prioritize communication of an exaggerated sense of their ability to do harm. They do this by attempting to convince the population that their government is unable to protect them. It follows, then, that any attempt at improving security policy ought to center upon gaining a better understanding of the factors that affect public perceptions of security.

States with at least minimal historical experience of terrorism typically implore their citizens to participate actively in the task of monitoring streets, buildings, transportation, and task them with reporting suspicious activities and behaviors… I argue that if there is evidence to suggest that such approaches meaningfully improve state security this evidence is not widely available and that, moreover, such approaches are likely to exacerbate rather than alleviate public fear.

In the article, Braithwaite presents evidence, drawn from opinion polls taken shortly after terrorist attacks, that the attacks genuinely do inflate our sense of danger.

For example, after 9/11 a Gallup poll found that 66% of Americans reported believing that “further acts of terrorism are somewhat or very likely in the coming weeks” while 56% “worried that they or a member of their family will become victim of a terrorist attack”.

With regard to community monitoring and reporting schemes (‘Call us if you see anything suspicious in your neighbourhood’) Braithwaite notes that there is no solid evidence that they make us physically safer. But equally, there isn’t any hard evidence that they make us more fearful either.

In fact, you could just as easily argue that even if they are useless, they might build confidence due to the illusion of control where we feel like we are having an effect on external events simply because we are participating.

It may be, of course, that authorities don’t publish effectiveness figures for community monitoring schemes because, even if the schemes make no real difference, terrorists might share the public’s difficulty in judging them and over-estimate their effectiveness.

Perhaps the war on terror is being fought with cognitive biases.

Link to locked academic article on fear and terrorism.

Why the other queue always seems to move faster than yours

Whether it is supermarkets or traffic, there are two possible explanations for why you feel the world is against you, explains Tom Stafford.

Sometimes I feel like the whole world is against me. The other lanes of traffic always move faster than mine. The same goes for the supermarket queues. While I’m at it, why does it always rain on those occasions I don’t carry an umbrella, and why do wasps always want to eat my sandwiches at a picnic and not other people’s?

It feels like there are only two reasonable explanations. Either the universe itself has a vendetta against me, or some kind of psychological bias is creating a powerful – but mistaken – impression that I get more bad luck than I should. I know this second option sounds crazy, but let’s just explore this for a moment before we get back to the universe-victim theory.

My impressions of victimisation are based on judgements of probability. Either I am making a judgement of causality (forgetting an umbrella makes it rain) or a judgement of association (wasps prefer the taste of my sandwiches to other people’s sandwiches). Fortunately, psychologists know a lot about how we form impressions of causality and association, and it isn’t all good news.

Our ability to think about causes and associations is fundamentally important, and always has been for our evolutionary ancestors – we needed to know if a particular berry makes us sick, or if a particular cloud pattern predicts bad weather. So it isn’t surprising that we automatically make judgements of this kind. We don’t have to mentally count events, tally correlations and systematically discount alternative explanations. We have strong intuitions about what things go together, intuitions that just spring to mind, often after very little experience. This is good for making decisions in a world where you often don’t have enough time to think before you act, but with the side-effect that these intuitions contain some predictable errors.

One such error is what’s called “illusory correlation”, a phenomenon whereby two things that are individually salient seem to be associated when they are not. In a classic experiment volunteers were asked to look through psychiatrists’ fabricated case reports of patients who had responded to the Rorschach ink blot test. Some of the case reports noted that the patients were homosexual, and some noted that they saw things such as women’s clothes, or buttocks in the ink blots. The case reports had been prepared so that there was no reliable association between the patient notes and the ink blot responses, but experiment participants – whether trained or untrained in psychiatry – reported strong (but incorrect) associations between some ink blot signs and patient homosexuality.

One explanation is that things that are relatively uncommon, such as homosexuality in this case, and the ink blot responses which contain mention of women’s clothes, are more vivid (because of their rarity). This, and an effect of existing stereotypes, creates a mistaken impression that the two things are associated when they are not. This is a side effect of an intuitive mental machinery for reasoning about the world. Most of the time it is quick and delivers reliable answers – but it seems to be susceptible to error when dealing with rare but vivid events, particularly where preconceived biases operate. Associating bad traffic behaviour with ethnic minority drivers, or cyclists, is another case where people report correlations that just aren’t there. Both the minority (either an ethnic minority, or the cyclists) and bad behaviour stand out. Our quick-but-dirty inferential machinery leaps to the conclusion that the events are commonly associated, when they aren’t.
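A toy simulation makes this mechanism concrete. Every number here is an invented assumption – a minority group behaving badly at exactly the same rate as everyone else, and a memory that over-weights doubly distinctive events – but a spurious association still appears in what gets “remembered”:

```python
import random
random.seed(0)

# Toy model of illusory correlation (all rates are illustrative assumptions).
# A minority group (10% of drivers) behaves badly at exactly the same
# rate (5%) as the majority: there is no real association at all.
events = []
for _ in range(100_000):
    minority = random.random() < 0.10
    bad = random.random() < 0.05
    events.append((minority, bad))

# A memory that over-weights doubly distinctive events: a rare group AND
# rare behaviour together are assumed three times as likely to be recalled.
def recalled(minority, bad):
    weight = 3.0 if (minority and bad) else 1.0
    return random.random() < 0.2 * weight  # base recall chance of 20%

remembered = [(m, b) for (m, b) in events if recalled(m, b)]

def bad_rate(group):
    group = list(group)
    return sum(b for _, b in group) / len(group)

true_minority_rate = bad_rate(e for e in events if e[0])
recalled_minority_rate = bad_rate(e for e in remembered if e[0])

# In reality the rates match; in memory the minority looks worse.
assert recalled_minority_rate > true_minority_rate
```

Nothing in the simulated world links the group to the behaviour; the bias lives entirely in the selective recall step.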

So here we have a mechanism which might explain my queuing woes. The other lanes or queues moving faster is one salient event, and my intuition wrongly associates it with the most salient thing in my environment – me. What, after all, is more important to my world than me? Which brings me back to the universe-victim theory. When my lane is moving along I’m focusing on where I’m going, ignoring the traffic I’m overtaking. When my lane is stuck I’m thinking about me and my hard luck, looking at the other lane. No wonder the association between me and being overtaken sticks in memory more.

This distorting influence of memory on our judgements lies behind a good chunk of my feelings of victimisation. In some situations there is a real bias. You really do spend more time being overtaken in traffic than you do overtaking, for example, because the overtaking happens faster. And the smoke really does tend to follow you around the campfire, because wherever you sit creates a warm up-draught that the smoke fills. But on top of all of these is a mind that exaggerates our own importance, giving each of us the false impression that we are more important in how events work out than we really are.
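The traffic asymmetry is simple arithmetic. In the illustrative model below (the speeds and distances are my assumptions, not figures from the column), two lanes take turns being the fast one and each covers the same stretch of road in every phase – the slow phases simply last longer:

```python
# Toy model of the overtaking asymmetry (illustrative numbers).
# Two lanes take turns: while one crawls through a jam, the other flows
# freely, and both cover the same stretch of road in each phase.
FAST = 30.0       # m/s, free-flowing lane
SLOW = 5.0        # m/s, jammed lane
STRETCH = 1000.0  # metres covered in each phase

# Time you spend overtaking: your lane is fast, the other is slow.
time_overtaking = STRETCH / FAST   # about 33 seconds
# Time you spend being overtaken: your lane is slow, the other is fast.
time_overtaken = STRETCH / SLOW    # 200 seconds

# Both drivers cover the same total distance, yet each spends far longer
# being overtaken than overtaking, because overtaking happens faster.
assert time_overtaken > time_overtaking
print(time_overtaken / time_overtaking)  # a ratio of about 6
```

So even before memory distorts anything, a symmetric road already hands every driver more minutes of being overtaken than of overtaking.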

This is my BBC Future post from last Tuesday. The original is here.