Why the stupid think they’re smart

Psychologists have shown that humans are poor judges of their own abilities, from sense of humour to grammar. Those who perform worst are the worst judges of all.

You’re pretty smart, right? Clever, and funny too. Of course you are, just like me. But wouldn’t it be terrible if we were mistaken? Psychologists have shown that we are more likely to be blind to our own failings than perhaps we realise. This could explain why some incompetent people are so annoying, and also inject a healthy dose of humility into our own sense of self-regard.

In 1999, Justin Kruger and David Dunning, from Cornell University, New York, tested whether people who lack the skills or abilities for something are also more likely to lack awareness of their lack of ability. At the start of their research paper they cite, as an example, a Pittsburgh bank robber called McArthur Wheeler, who was arrested in 1995 shortly after robbing two banks in broad daylight without wearing a mask or any other kind of disguise. When police showed him the security camera footage, he protested “But I wore the juice”. The hapless criminal believed that if you rubbed your face with lemon juice you would be invisible to security cameras.

Kruger and Dunning were interested in testing another kind of laughing matter. They asked professional comedians to rate 30 jokes for funniness. Then, 65 undergraduates were asked to rate the jokes too, and were then ranked according to how well their judgements matched those of the professionals. They were also asked how well they thought they had done compared to the average person.

As you might expect, most people thought their ability to tell what was funny was above average. The results were, however, most interesting when split according to how well participants performed. Those slightly above average in their ability to rate jokes were highly accurate in their self-assessment, while those who actually did the best tended to think they were only slightly above average. Participants who were least able to judge what was funny (at least according to the professional comics) were also least able to accurately assess their own ability.

This finding was not a quirk of trying to measure subjective sense of humour. The researchers repeated the experiment, only this time with tests of logical reasoning and grammar. These disciplines have defined answers, and in each case they found the same pattern: those who performed the worst were also the worst at estimating their own aptitude. In all three studies, those whose performance put them in the lowest quarter massively overestimated their own abilities by rating themselves as above average.

It didn’t even help the poor performers to be given a benchmark. In a later study, the most incompetent participants still failed to realise they were bottom of the pack even when given feedback on the performance of others.

Kruger and Dunning’s interpretation is that accurately assessing skill level relies on some of the same core abilities as actually performing that skill, so the least competent suffer a double deficit. Not only are they incompetent, but they lack the mental tools to judge their own incompetence.

In a key final test, Kruger and Dunning trained a group of poor performers in logical reasoning tasks. This improved participants’ self-assessments, suggesting that ability levels really did influence self-awareness.

Other research has shown that this “unskilled and unaware of it” effect holds in real-life situations, not just in abstract laboratory tests. For example, hunters who know the least about firearms also have the most inaccurate view of their firearm knowledge, and doctors with the worst patient-interviewing skills are the least likely to recognise their inadequacies.

What has become known as the Dunning-Kruger effect is a failure of what psychologists call metacognition – thinking about thinking. It’s also something that should give us all pause for thought. The effect might just explain the apparently baffling self-belief of some of your friends and colleagues. But before you start getting too smug, just remember one thing. As unlikely as you might think it is, you too could be walking around blissfully ignorant of your ignorance.

This is my BBC Future column from last week. The original is here.

Does studying economics make you more selfish?

When economics students learn about what makes fellow humans tick it affects the way they treat others. Not necessarily in a good way, as Tom Stafford explains.

Studying human behaviour can be like a dog trying to catch its own tail. As we learn more about ourselves, our new beliefs change how we behave. Research on economics students showed this in action: textbooks describing facts and theories about human behaviour can affect the people studying them.

Economic models are often based on an imaginary character called the rational actor, who, with no messy and complex inner world, relentlessly pursues a set of desires ranked according to the costs and benefits. Rational actors help create simple models of economies and societies. According to rational choice theory, some of the predictions governing these hypothetical worlds are common sense: people should prefer more to less, firms should only do things that make a profit and, if the price is right, you should be prepared to give up anything you own.

Another tool used to help us understand our motivations and actions is game theory, which examines how you make choices when their outcomes are affected by the choices of others. To determine which of a number of options to go for, you need a theory about what the other person will do (and your theory needs to encompass the other person’s theory about what you will do, and so on). Rational actor theory says other players in the game all want the best outcome for themselves, and that they will assume the same about you.

The most famous game in game theory is the “prisoner’s dilemma”, in which you are one of a pair of criminals arrested and held in separate cells. The police make you this offer: you can inform on your partner, in which case you either get off scot free (if your partner keeps quiet), or you both get a few years in prison (if he informs on you too). Alternatively you can keep quiet, in which case you either get a few years (if your partner also keeps quiet), or you get a long sentence (if he informs on you, leading to him getting off scot free). Your partner, of course, faces exactly the same choice.

If you’re a rational actor, it’s an easy decision. You should inform on your partner in crime: if he keeps quiet, informing sets you free rather than costing you a few years; and if he informs on you, informing gets you a few years rather than the long sentence you would face by keeping quiet. Whatever he does, you do no worse by informing.
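
To see the reasoning laid out, here is a minimal sketch in Python. The sentence lengths are invented assumptions chosen to match the description above, not figures from any study.

```python
# Illustrative prisoner's dilemma payoffs: years in prison for YOU, indexed by
# (your choice, partner's choice). The numbers are invented assumptions chosen
# to match the description above, not taken from any study.
SENTENCE = {
    ("inform", "quiet"):  0,   # you go free, your partner takes the long sentence
    ("inform", "inform"): 3,   # you both get a few years
    ("quiet",  "quiet"):  3,   # you both get a few years
    ("quiet",  "inform"): 10,  # you take the long sentence
}

def best_response(partner):
    """Return the choice that minimises your sentence, given your partner's choice."""
    return min(["inform", "quiet"], key=lambda me: SENTENCE[(me, partner)])

for partner in ("quiet", "inform"):
    print(f"Partner plays {partner!r}: informing costs {SENTENCE[('inform', partner)]} years, "
          f"keeping quiet costs {SENTENCE[('quiet', partner)]} -> best response: {best_response(partner)}")
```

With these numbers informing is (weakly) dominant: it never does worse than keeping quiet, and it does strictly better if your partner stays silent, which is all the rational actor needs to know.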

Weirdly, and thankfully, this isn’t what happens if you ask real people to play the prisoner’s dilemma. Around the world, in most societies, most people maintain the criminals’ pact of silence. The exceptions who opt to act solely in their own interests are known in economics as “free riders” – individuals who take benefits without paying costs.

Self(ish)-selecting group

The prisoner’s dilemma is a theoretical tool, but there are plenty of parallel choices – and free riders – in the real world. People who are always late for appointments don’t have to hurry, and never end up waiting for anyone else. Some use roads and hospitals without paying their taxes. There are lots of interesting reasons why most of us turn up on time and don’t avoid paying taxes, even though these might be the selfish “rational” choices according to most economic models.

Crucially, rational actor theory appears more useful for predicting the actions of certain groups of people. One group that has repeatedly been found to free ride more than others is people who have studied economics. In a study published in 1993, Robert Frank and colleagues from Cornell University, in Ithaca, New York State, tested this idea with a version of the prisoner’s dilemma game. Economics students “informed on” other players 60% of the time, while those studying other subjects did so 39% of the time. Men have previously been found to be more self-interested in such tests, and more men study economics than women. However, even after controlling for this sex difference, Frank found economics students were 17 percentage points more likely to take the selfish route when playing the prisoner’s dilemma.

In good news for educators everywhere, the team found that the longer students had been at university, the higher their rates of cooperation. In other words, higher education (or simply growing up) seemed to make people more likely to put their faith in human co-operation. The economists again proved to be the exception. For them extra years of study did nothing to undermine their selfish rationality.

Frank’s group then went on to carry out surveys on whether students would return money they had found or report being undercharged, both at the start and end of their courses. Economics students were more likely to see themselves and others as more self-interested following their studies than a control group studying astronomy. This was especially true among those studying under a tutor who taught game theory and focused on notions of survival imperatives militating against co-operation.

Subsequent work has questioned these findings, suggesting that selfish people are just more likely to study economics, and that Frank’s surveys and games tell us little about real-world moral behaviour. It is true that what individuals do in the highly artificial situation of being presented with the prisoner’s dilemma doesn’t necessarily tell us how they will behave in more complex real-world situations.

In related work, Eric Schwitzgebel has shown that students and teachers of ethical philosophy don’t seem to behave more ethically when their behaviour is assessed using a range of real-world variables. Perhaps, says Schwitzgebel, we shouldn’t be surprised that economics students who have been taught about the prisoner’s dilemma act in line with what they’ve been taught when tested in a classroom. Again, this is a long way from showing any influence on real-world behaviour, some argue.

The lessons of what people do in tests and games are limited because of the additional complexities involved in real-world moral choices with real and important consequences. Yet I hesitate to dismiss the results of these experiments. We shouldn’t leap to conclusions based on the few simple experiments that have been done, but if we tell students that it makes sense to see the world through the eyes of the selfish rational actor, my suspicion is that they are more likely to do so.

Multiple factors influence our behaviour, of which formal education is just one. Economics and economic opinions are also prominent throughout the news media, for instance. But what the experiments above demonstrate, in one small way at least, is that what we are taught about human behaviour can alter it.

This is my column from BBC Future last week. You can see the original here. Thanks to Eric for some references and comments on this topic.

The effect of diminished belief in free will

Studies have shown that people who believe things happen randomly and not through their own choices often behave much worse than those who believe the opposite.

Are you reading this because you chose to? Or are you doing so as a result of forces beyond your control?

After thousands of years of philosophy, theology, argument and meditation on the riddle of free will, I’m not about to solve it for you in this column (sorry). But what I can do is tell you about some thought-provoking experiments by psychologists, which suggest that, regardless of whether we have free will or not, whether we believe we do can have a profound impact on how we behave.

The issue is simple: we all make choices, but could those choices be made otherwise? From a religious perspective it might seem as if a divine being knows all, including knowing in advance what you will choose (so your choices could not be otherwise). Or we can take a physics-based perspective. Everything in the universe has physical causes, and as you are part of the universe, your choices must be caused (so your choices could not be otherwise). In either case, our experience of choosing collides with our faith in a world which makes sense because things have causes.

Consider for a moment how you would research whether a belief in free will affects our behaviour. There’s no point comparing the behaviour of people with different fixed philosophical perspectives. You might find that determinists, who believe free will is an illusion and that we are all cogs in a godless universe, behave worse than those who believe we are free to make choices. But you wouldn’t know whether this was simply because people who like to cheat and lie become determinists (the “Yes, I lied, but I couldn’t help it” excuse).

What we really need is a way of changing people’s beliefs about free will, so that we can track the effects of doing so on their behaviour. Fortunately, in recent years researchers have developed a standard method of doing this. It involves asking subjects to read sections from Francis Crick’s book The Astonishing Hypothesis. Crick was one of the co-discoverers of DNA’s double-helix structure, for which he was awarded the Nobel prize. Later in his career he left molecular biology and devoted himself to neuroscience. The hypothesis in question is his belief that our mental life is entirely generated by the physical stuff of the brain. One passage states that neuroscience has killed the idea of free will, and that most rational people, including most scientists, now believe it is an illusion.

Psychologists have used this section of the book, or sentences taken from it or inspired by it, to induce feelings of determinism in experimental subjects. A typical study asks people to read and think about a series of sentences such as “Science has demonstrated that free will is an illusion”, or “Like everything else in the universe, all human actions follow from prior events and ultimately can be understood in terms of the movement of molecules”.

The effects on study participants are generally compared with those of other people asked to read sentences that assert the existence of free will, such as “I have feelings of regret when I make bad decisions because I know that ultimately I am responsible for my actions”, or texts on topics unrelated to free will.

And the results are striking. One study reported that participants who had their belief in free will diminished were more likely to cheat in a maths test. In another, US psychologists reported that people who read Crick’s thoughts on free will said they were less likely to help others.

Bad taste

A follow-up to this study used an ingenious method to test this via aggression to strangers. Participants were told a cover story about helping the experimenter prepare food for a taste test to be taken by a stranger. They were given the results of a supposed food preference questionnaire which indicated that the stranger liked most foods but hated hot food. Participants were also given a jar of hot sauce. The critical measure was how much of the sauce they put into the taste-test food. Putting in less sauce, when they knew that the taster didn’t like hot food, meant they scored more highly for what psychologists call “prosociality”, or what everyone else calls being nice.

You’ve guessed it: Participants who had been reading about how they didn’t have any free will chose to give more hot sauce to the poor fictional taster – twice as much, in fact, as those who read sentences supporting the idea of freedom of choice and responsibility.

In a recent study carried out at the University of Padova, Italy, researchers recorded the brain activity of participants who had been told to press a button whenever they wanted. This showed that people whose belief in free will had taken a battering thanks to reading Crick’s views showed a weaker signal in areas of the brain involved in preparing to move. In another study by the same team, volunteers carried out a series of on-screen tasks designed to test their reaction times, self control and judgement. Those told free will didn’t exist were slower, and more likely to go for easier and more automatic courses of action.

This is a young research area. We still need to check that individual results hold up, but taken all together these studies show that our belief in free will isn’t just a philosophical abstraction. We are less likely to behave ethically and kindly if our belief in free will is diminished.

This puts an extra burden of responsibility on philosophers, scientists, pundits and journalists who use evidence from psychology or neuroscience experiments to argue that free will is an illusion. We need to be careful about what stories we tell, given what we know about the likely consequences.

Fortunately, the evidence shows that most people have a sense of their individual freedom and responsibility that is resistant to being overturned by neuroscience. Those sentences from Crick’s book claim that most scientists believe free will to be an illusion. My guess is that most scientists would want to define what exactly is meant by free will, and to examine the various versions of free will on offer, before they agree whether it is an illusion or not.

If the last few thousand years have taught us anything, it is that the debate about free will is likely to rumble on and on. But whether the outcome is inevitable or not, these results show that how we think about the way we think could have a profound effect on us, and on others.

This was published on BBC Future last week. See the original, ‘Does non-belief in free will make us better or worse?’ (it is identical apart from the title, and there’s a nice picture on that site). If neuroscience and the free will debate float your boat, you can check out this video of the Sheffield Salon on the topic “‘My Brain Made Me Do It’ – have neuroscience and evolutionary psychology put free will on the slab?”. I’m the one on the left.

A war of biases

Here’s an interesting take on terrorism as a fundamentally audience-focused activity that relies on causing fear to achieve political ends – and on whether citizen-led community monitoring schemes actually serve to amplify those effects rather than make us feel safer.

It’s from an article just published in the Journal of Police and Criminal Psychology by political scientist Alex Braithwaite:

A long-held premise in the literature on terrorism is that the provocation of a sense of fear within a mass population is the mechanism linking motivations for the use of violence with the anticipated outcome of policy change. This assumption is the pivot point upon and around which most theories of terrorism rest and revolve. Martha Crenshaw, for instance, claims, the ‘political effectiveness of terrorism is importantly determined by the psychological effects of violence on audiences’…

Terrorists prioritize communication of an exaggerated sense of their ability to do harm. They do this by attempting to convince the population that their government is unable to protect them. It follows, then, that any attempt at improving security policy ought to center upon gaining a better understanding of the factors that affect public perceptions of security.

States with at least minimal historical experience of terrorism typically implore their citizens to participate actively in the task of monitoring streets, buildings, transportation, and task them with reporting suspicious activities and behaviors… I argue that if there is evidence to suggest that such approaches meaningfully improve state security this evidence is not widely available and that, moreover, such approaches are likely to exacerbate rather than alleviate public fear.

In the article, Braithwaite examines opinion polls taken close in time to terrorist attacks and presents evidence that attacks genuinely do exaggerate our fear of danger.

For example, after 9/11 a Gallup poll found that 66% of Americans reported believing that “further acts of terrorism are somewhat or very likely in the coming weeks” while 56% “worried that they or a member of their family will become victim of a terrorist attack”.

With regard to community monitoring and reporting schemes (‘Call us if you see anything suspicious in your neighbourhood’) Braithwaite notes that there is no solid evidence that they make us physically safer. But unfortunately, there isn’t any hard evidence to suggest that they make us more fearful either.

In fact, you could just as easily argue that even if they are useless, they might build confidence due to the illusion of control where we feel like we are having an effect on external events simply because we are participating.

It may be, of course, that authorities don’t publish the effectiveness figures for community monitoring schemes because even if the schemes don’t genuinely make a difference, terrorists might have the same difficulty as the public and over-estimate their effectiveness.

Perhaps the war on terror is being fought with cognitive biases.
 

Link to locked academic article on fear and terrorism.

Why the other queue always seems to move faster than yours

Whether it is supermarkets or traffic, there are two possible explanations for why you feel the world is against you, explains Tom Stafford.

Sometimes I feel like the whole world is against me. The other lanes of traffic always move faster than mine. The same goes for the supermarket queues. While I’m at it, why does it always rain on those occasions I don’t carry an umbrella, and why do wasps always want to eat my sandwiches at a picnic and not other people’s?

It feels like there are only two reasonable explanations. Either the universe itself has a vendetta against me, or some kind of psychological bias is creating a powerful – but mistaken – impression that I get more bad luck than I should. I know this second option sounds crazy, but let’s just explore this for a moment before we get back to the universe-victim theory.

My impressions of victimisation are based on judgements of probability. Either I am making a judgement of causality (forgetting an umbrella makes it rain) or a judgement of association (wasps prefer the taste of my sandwiches to other people’s sandwiches). Fortunately, psychologists know a lot about how we form impressions of causality and association, and it isn’t all good news.

Our ability to think about causes and associations is fundamentally important, and always has been for our evolutionary ancestors – we needed to know if a particular berry makes us sick, or if a particular cloud pattern predicts bad weather. So it isn’t surprising that we automatically make judgements of this kind. We don’t have to mentally count events, tally correlations and systematically discount alternative explanations. We have strong intuitions about what things go together, intuitions that just spring to mind, often after very little experience. This is good for making decisions in a world where you often don’t have enough time to think before you act, but with the side-effect that these intuitions contain some predictable errors.

One such error is what’s called “illusory correlation”, a phenomenon whereby two things that are individually salient seem to be associated when they are not. In a classic experiment volunteers were asked to look through psychiatrists’ fabricated case reports of patients who had responded to the Rorschach ink blot test. Some of the case reports noted that the patients were homosexual, and some noted that they saw things such as women’s clothes, or buttocks in the ink blots. The case reports had been prepared so that there was no reliable association between the patient notes and the ink blot responses, but experiment participants – whether trained or untrained in psychiatry – reported strong (but incorrect) associations between some ink blot signs and patient homosexuality.

One explanation is that things that are relatively uncommon, such as homosexuality in this case, and the ink blot responses which contain mention of women’s clothes, are more vivid (because of their rarity). This, and an effect of existing stereotypes, creates a mistaken impression that the two things are associated when they are not. This is a side effect of an intuitive mental machinery for reasoning about the world. Most of the time it is quick and delivers reliable answers – but it seems to be susceptible to error when dealing with rare but vivid events, particularly where preconceived biases operate. Associating bad traffic behaviour with ethnic minority drivers, or cyclists, is another case where people report correlations that just aren’t there. Both the minority (either an ethnic minority, or the cyclists) and bad behaviour stand out. Our quick-but-dirty inferential machinery leaps to the conclusion that the events are commonly associated, when they aren’t.
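
To make “no reliable association” concrete, here is a small sketch in Python of the kind of check our intuitions skip: tabulate how often the two salient features actually co-occur and compute a simple association measure such as the phi coefficient, which is zero when the features are unrelated. The counts are invented for illustration, not taken from the original study.

```python
import math

# Hypothetical counts of case reports (invented numbers, not the study's data):
# rows = rare ink-blot response present/absent, columns = rare patient note present/absent.
#                     note present   note absent
# response present        a = 5        b = 20
# response absent         c = 15       d = 60
a, b, c, d = 5, 20, 15, 60

def phi(a, b, c, d):
    """Phi coefficient for a 2x2 contingency table (0 means no association)."""
    return (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

print(f"phi = {phi(a, b, c, d):.3f}")  # 5*60 - 20*15 = 0, so phi = 0.000
```

The five cases where the rare response and the rare note occur together are the memorable ones, but the full table shows the pairing happens no more often than chance would predict.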

So here we have a mechanism which might explain my queuing woes. The other lanes or queues moving faster is one salient event, and my intuition wrongly associates it with the most salient thing in my environment – me. What, after all, is more important to my world than me? Which brings me back to the universe-victim theory. When my lane is moving along I’m focusing on where I’m going, ignoring the traffic I’m overtaking. When my lane is stuck I’m thinking about me and my hard luck, looking at the other lane. No wonder the association between me and being overtaken sticks in memory more.

This distorting influence of memory on our judgements lies behind a good chunk of my feelings of victimisation. In some situations there is a real bias. You really do spend more time being overtaken in traffic than you do overtaking, for example, because the overtaking happens faster. And the smoke really does tend to follow you around the campfire, because wherever you sit creates a warm up-draught that the smoke fills. But on top of all of these is a mind that exaggerates our own importance, giving each of us the false impression that we are more important in how events work out than we really are.

This is my BBC Future post from last Tuesday. The original is here.

Protect your head – the world is complex

The British Medical Journal has a fascinating editorial on the behavioural complexities behind the question of whether cycling helmets prevent head injuries.

You would think that testing whether helmets protect cyclists from head injury would be a fairly straightforward affair. Maybe by putting a bike helmet on a crash-test dummy and throwing rocks at its head, or by counting how many cyclists with head injuries were wearing head protection. But it turns out to be far more complicated.

The piece by epidemiologist Ben Goldacre and risk scientist David Spiegelhalter examines why the social and behavioural effects of wearing a helmet, or being required to wear one by law, can often outweigh the protective effects of having padding around your head.

People who are forced by legislation to wear a bicycle helmet, meanwhile, may be different again. Firstly, they may not wear the helmet correctly, seeking only to comply with the law and avoid a fine. Secondly, their behaviour may change as a consequence of wearing a helmet through “risk compensation,” a phenomenon that has been documented in many fields. One study — albeit with a single author and subject—suggests that drivers give larger clearance to cyclists without a helmet.

Risk compensation is an interesting effect whereby increasing safety measures leads people to engage in riskier behaviour.

For example, sailors wearing life jackets may try more risky manoeuvres as they feel ‘safer’ if they get into trouble. If they weren’t wearing life jackets, they might not even try. So despite the ‘safety measures’ the overall level of risk remains the same due to behavioural change.

Something similar happens in other areas of life. Known as self-licensing, it is the tendency to allow ourselves to indulge in more harmful behaviour after doing something ‘good’.

For example, people who take health supplements are more likely to engage in unhealthy behaviours as a result.

The moral of the story, of course, is to stay in the bunker.
 

Link to BMJ editorial ‘Bicycle helmets and the law’.

When giving reasons leads to worse decisions

We’re taught from childhood how important it is to explain how we feel and to always justify our actions. But does giving reasons always make things clearer, or could it sometimes distract us from our true feelings?

One answer came from a study led by psychology professor Timothy Wilson at the University of Virginia, which asked university students to report their feelings, either with or without being asked to provide reasons. What they found revealed just how difficult it can be to reliably discern our feelings when justifying our decisions.

Participants were asked to evaluate five posters of the kind that students might put up in their bedrooms. Two of the posters were of art – one was Monet’s water lilies, the other Van Gogh’s irises. The other three posters were a cartoon of animals in a balloon and two posters of photographs of cats with funny captions.

All the students had to evaluate the posters, but half the participants were asked to provide reasons for liking or disliking them. (The other half were asked why they chose their degree subject as a control condition.) After they had provided their evaluations the participants were allowed to choose a poster to take home.

So what happened? The control group rated the art posters positively (an average score of around 7 out of 9) and they felt pretty neutral about the humorous posters (an average score of around 4 out of 9). When given a choice of one poster to take home, 95% of them chose one of the art posters. No surprises there, the experimenters had already established that in general most students preferred the art posters.

But the group of students who had to give reasons for their feelings acted differently. This “reasons” group liked the art posters less (averaging about 6 out of 9) and the humorous posters more (about 5 to 6 out of 9). Most of them still chose an art poster to take home, but it was a far lower proportion – 64% – than the control group. That means people in this group were about seven times more likely to take a humorous poster home than the control group (36% chose one, compared with 5%).

Here’s the twist. Some time after the tests, at the end of the semester, the researchers rang each of the participants and asked them questions about the poster they’d chosen: Had they put it up in their room? Did they still have it? How did they feel about it? How much would they be willing to sell it for? The “reasons” group were less likely to have put their poster up, less likely to have kept it up, less satisfied with it on average and were willing to part with it for a smaller average amount than the control group. Over time their reasons and feelings had shifted back in line with those of the control group – they didn’t like the humorous posters they had taken home, and so were less happy about their choice.

Trivial pursuit

The source of this effect, according to the researchers, is that when prompted to give reasons the participants focused on things that were easy to verbalise; they focused on the bright colours, or funny content of the humorous posters. It’s less easy to say exactly what’s pleasing about the more complex art classics. This was out of step with their feelings, so in the heat of the moment participants adjusted their feelings (a process I’ve written about before, called cognitive dissonance). After having the posters on their wall, the participants realised that they really did prefer the art posters all along.

The moral of the story isn’t that intuition is better than reason. We all know that in some situations our feelings are misleading and it is better to think about what we’re doing. But this study shows the reverse – in some situations introspection can interfere with using our feelings as a reliable guide to what we should do.

Whether introspection helps or hinders seems to depend on expertise. The researchers who carried out this study suggest that the distorting effect of reason-giving is most likely to occur in situations where people aren’t experts – most of the students who took part in the study didn’t have a lot of experience of thinking or talking about art. When experts are asked to give reasons for their feelings, research has found that their feelings aren’t distorted in the same way – their intuitions and explicit reasoning are in sync.

You might also see the consequences of this regularly in your line of work. Everybody knows that the average business meeting will spend the most time discussing trivial things, an effect driven by the ease with which each member of the meeting can chip in about something as inconsequential as what colour to paint the bike sheds, or when to plan a meeting to discuss the conclusions of that meeting. When we’re discussing complex issues, it isn’t so easy to make a contribution. The danger, of course, is that in a world which relies on justification and measurement of everything, those things that are most easily justified and measured will get priority over those things which are, in fact, most justified and important.

This is my BBC Future column from last week. The original is here. For what it is worth, I think the headings it received there are very distracting from the real implications of this work. If you’ve got this far, you can work out why for yourself!

Why you might prefer more pain

When is the best treatment for pain more pain? When you’re taking part in an experiment published by a Nobel prize winner and one of the leading lights in behavioural psychology, that is.

The psychologist in question is Daniel Kahneman; the experiment is described by its self-explanatory title: “When More Pain Is Preferred to Less: Adding a Better End”. In the study, Kahneman and colleagues looked at the pain participants felt by asking them to put their hands in painfully cold water twice (one trial for each hand). In one trial, the water was at 14C (59F) for 60 seconds. In the other trial the water was 14C for 60 seconds, but then rose slightly and gradually to about 15C by the end of an additional 30-second period.

Both trials were equally painful for the first sixty seconds, as indicated by a dial participants had to adjust to show how they were feeling. On average, participants’ discomfort started out at the low end of the pain scale and steadily increased. When people experienced an additional thirty seconds of slightly less cold water, discomfort ratings tended to level off or drop.

Next, the experimenters asked participants which kind of trial they would choose to repeat if they had to. You’ve guessed the answer: nearly 70% of participants chose to repeat the 90-second trial, even though it involved 30 extra seconds of pain. Participants also said that the longer trial was less painful overall, less cold, and easier to cope with. Some even reported that it took less time.

In case you think this is a freakish outcome of some artificial lab scenario, Kahneman saw a similar result when he interviewed patients who had undergone a colonoscopy examination – a procedure universally described as being decidedly unpleasant. Patients in Kahneman’s study group had colonoscopies that lasted from four to 69 minutes, but the duration of the procedure did not predict how they felt about it afterwards. Instead, what predicted their feelings was the strength of their discomfort at its most intense, and the level of discomfort they felt towards the end of the procedure.

These studies support what Kahneman called the Peak-End rule – that our perceptions about an experience are determined by how it feels at its most intense, and how it feels at the end. The actual duration is irrelevant. It appears we don’t rationally calculate each moment of pleasure or pain using some kind of mental ledger. Instead, our memories filter how we feel about the things we’ve done and experienced, and our memories are defined more by the moments that seem most characteristic – the peaks and the finish – than by how we actually felt most of the time during the experience.
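
A crude way to state the Peak-End rule is that remembered unpleasantness tracks the average of the worst moment and the final moment, while total duration counts for little. The sketch below, in Python, uses invented discomfort traces shaped like the two cold-water trials described above; the numbers are assumptions for illustration, not Kahneman’s data.

```python
# Invented discomfort ratings sampled every 5 seconds, shaped like the trials
# described above: 60 seconds of cold water, versus the same 60 seconds plus
# 30 seconds of slightly warmer water. Illustrative assumptions, not real data.
short_trial = [4, 5, 6, 7, 8, 9, 9, 10, 10, 11, 11, 12]        # 60 s
long_trial = short_trial + [11, 10, 10, 9, 9, 8]               # 90 s

def peak_end(ratings):
    """Remembered unpleasantness under the Peak-End rule: mean of the worst and final moments."""
    return (max(ratings) + ratings[-1]) / 2

def total_pain(ratings):
    """Total discomfort actually experienced (sum over all sampled moments)."""
    return sum(ratings)

for name, trial in [("60-second trial", short_trial), ("90-second trial", long_trial)]:
    print(f"{name}: total pain = {total_pain(trial)}, peak-end score = {peak_end(trial)}")
```

On these made-up numbers the 90-second trial contains strictly more total pain, but its gentler ending gives it the lower peak-end score, matching the trial most participants said they would rather repeat.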

Kahneman wondered whether this finding meant that surgeons should extend painful operations needlessly to leave patients with happier memories, even though it would mean inflicting more pain overall. Others have asked whether this means that the most important thing about a holiday is that it includes some great times, rather than the length of time you are away for. (It certainly makes you think it would be worth doing if you could avoid the typical end to a holiday – queues, lumping heavy luggage around and jetlag.)

But I think the most important lesson of the Peak-End experiments is something else. Rather than saying that the duration isn’t important, the rule tells me that it is just as important to control how we mentally package our time. What defines an “experience” is somewhat arbitrary. If a weekend break where you forget everything can be as refreshing as a two-week holiday then maybe a secret to a happy life is to organise your time so it is broken up into as many distinct (and enjoyable) experiences as possible, rather than being just an unbroken succession of events which bleed into one another in memory.

All I need to do now is find the time to take a holiday and test my theory.

This is my BBC Future column, originally published last week. The original is here.

Did the eyes really stare down bicycle crime in Newcastle?

This is the first fortnightly column I’ll be writing for The Conversation, a creative commons news and opinion website that launched today. The site has been set up by a number of UK universities and bodies such as the Wellcome Trust, Nuffield Foundation and HEFCE, following the successful model of the Australian version of the site. Their plan is to unlock the massive amount of expertise held by UK academics and inject it into the public discourse. My plan is to give some critical commentary on headlines from the week's news which focus on neuroscience and psychology. If you've any headlines you'd like critiquing, let me know!



The headlines

Staring eyes ‘deter’ Newcastle University bike thieves

The poster that’s deterring bike thieves

The sign that cuts bike theft by 60%

The story

A picture of a large pair of eyes triggers feelings of surveillance in potential thieves, making them less likely to break the rules.

What they actually did

Researchers put signs with a large pair of eyes and the message “Cycle thieves: we are watching you” by the bike racks at Newcastle University.

They then monitored bike thefts for two years and found a 62% drop in thefts at locations with the signs. There was a 65% rise in thefts at locations on campus without signs.
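
Those two percentages can’t be combined without the baseline counts at each kind of location, which aren’t given here. A hedged sketch with invented baselines shows how a 62% fall at signed racks and a 65% rise elsewhere could still leave the campus-wide total almost unchanged, which is the displacement worry picked up below.

```python
# Invented baseline theft counts (not the study's data) to illustrate how a 62%
# fall at signed locations and a 65% rise at unsigned ones can roughly net out.
signed_before, unsigned_before = 40, 30

signed_after = round(signed_before * (1 - 0.62))      # 62% drop -> 15
unsigned_after = round(unsigned_before * (1 + 0.65))  # 65% rise -> 50

total_before = signed_before + unsigned_before        # 70
total_after = signed_after + unsigned_after           # 65

change = 100 * (total_after - total_before) / total_before
print(f"Before: {total_before} thefts, after: {total_after} thefts ({change:+.0f}% overall)")
```

With these made-up baselines the signs mostly move theft around rather than reduce it overall.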

How plausible is it?

A bunch of studies have previously shown that subtle cues which suggest surveillance can alter moral behaviour. The classic example is the amount church-goers might contribute to the collection dish.

This research fits within the broad category of findings which show our decisions can be influenced by aspects of our environment, even those which shouldn’t logically affect them.

The signs are being trialled by Transport for London, and are a good example of the behavioural “nudges” promoted by the Cabinet Office’s (newly privatised) Behavioural Insights Team. Policy makers love these kinds of interventions because they are cheap. They aren’t necessarily the most effective way to change behaviour, but they have a neatness and “light touch” which means we’re going to keep hearing about this kind of policy.

Tom’s take

The problem with this study is that the control condition was not having any sign above bike racks – so we don’t know what it was about the anti-theft sign that had an effect. It could have been the eyes, or it could have been the message “we are watching you”. Previous research, cited in the study, suggests both elements have an effect.

The effect is obviously very strong at a particular location, but it doesn’t seem to persist in time. Thieves moved their thefts to nearby locations without signs – suggesting that any feeling of being watched didn’t linger. We should be careful about assuming that anything was working via the unconscious or irrational part of the mind.

If I were a bike thief and someone was kind enough to warn me that some bikes were being watched, and (by implication) others weren’t, I would rationally choose to do my thieving from an unwatched location.

Another plausible interpretation is that bike owners who were more conscious about security left their bikes at the signed locations. Such owners might have better locks and other security measures. Careless bike owners would ignore the signs, and so be more likely to park at unsigned locations and subsequently have their bikes nicked.

Read more

Nettle, D., Nott, K., & Bateson, M. (2012). “Cycle Thieves, We Are Watching You”: Impact of a Simple Signage Intervention against Bicycle Theft. PLoS ONE, 7(12), e51738.

Tom Stafford does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

The Conversation

This article was originally published at The Conversation.
Read the original article.

Race perception isn’t automatic

Last week’s column for BBC Future describes a neat social psychology experiment from an unlikely source. Three evolutionary psychologists reasoned that claims that we automatically categorise people by ethnicity must be wrong. Here’s how they set out to prove it. The original column is here.

For years, psychologists thought we instantly label each other by ethnicity. But one intriguing study proposes this is far from inevitable, with obvious implications for tackling racism.

When we meet someone we tend to label them in certain ways. “Tall guy” you might think, or “Ugly kid”. Lots of work in social psychology suggests that there are some categorisations that spring faster to mind. So fast, in fact, that they can be automatic. Sex is an example: we tend to notice if someone is a man or a woman, and remember that fact, without any deliberate effort. Age is another example. You can see this in the way people talk about others. If you said you went to a party and met someone, most people wouldn’t let you continue with your story until you said if it was a man or a woman, and there’s a good chance they’d also want to know how old they were too.

Unfortunately, a swathe of evidence from the 1980s and 1990s also seemed to suggest that race is an automatic categorisation, in that people effortlessly and rapidly identified and remembered which ethnic group an individual appeared to belong to. “Unfortunate”, because if perceiving race is automatic then it lays a foundation for racism, and appears to put a limit on efforts to educate people to be “colourblind”, or put aside prejudices in other ways.

Over a decade of research failed to uncover experimental conditions that could prevent people instinctively categorising by race, until a trio of evolutionary psychologists came along with a very different take on the subject. Now, it seems only fair to say that evolutionary psychologists have a mixed reputation among psychologists. As a flavour of psychology it has been associated with political opinions that tend towards the conservative. Often, scientific racists claim to base their views on some jumbled version of evolutionary psychology (scientific racism is racism dressed up as science, not racism based on science, in case you wondered). So it was a delightful surprise when researchers from one of the world centres for evolutionary psychology intervened in the debate on social categorisation, by conducting an experiment they claimed showed that labelling people by race was far less automatic and inevitable than all previous research seemed to show.

Powerful force

The research used something called a “memory confusion protocol”. This works by asking experiment participants to remember a series of pictures of individuals, who vary along various dimensions – for example, some have black hair and some blond, some are men, some women, etc. When participants’ memories are tested, the errors they make reveal something about how they judged the pictures of individuals – what sticks in their mind most and least. If a participant confuses a black-haired man with a blond-haired man more often than he confuses a man with a woman, it suggests that the category of hair colour is less important to him than the category of gender (and similarly, if people rarely confuse a man for a woman, that shows that gender is the stronger category).
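
A minimal sketch of that logic, assuming for simplicity that within- and across-category mix-ups are equally available; the error counts are invented and this is not the researchers’ actual scoring procedure. The idea is just that the more strongly a dimension is encoded, the less often memory errors cross its boundary.

```python
# Invented error counts from a hypothetical memory-confusion experiment. An
# "error" is attributing a statement to the wrong pictured individual; what
# matters is whether the wrongly chosen individual shares the right person's
# category on a given dimension.
errors = {
    "gender":      {"within": 40, "across": 5},   # men rarely confused with women
    "hair_colour": {"within": 22, "across": 23},  # hair colour barely constrains errors
}

def category_strength(within, across):
    """Proportion of errors that stay inside the category boundary.

    Assuming equal opportunities for both kinds of mix-up, 0.5 means the
    dimension is ignored; values near 1.0 mean it is strongly encoded."""
    return within / (within + across)

for dimension, counts in errors.items():
    print(f"{dimension}: {category_strength(**counts):.2f} of errors stay within-category")
```

On these invented numbers gender is strongly encoded (0.89) while hair colour is not (0.49), which is the kind of contrast the researchers used to show that adding team shirts weakened categorisation by race.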

Using this protocol, the researchers tested the strength of categorisation by race, something all previous efforts had shown was automatic. The twist they added was to throw in another powerful psychological force – group membership. People had to remember individuals who wore either yellow or grey basketball shirts, and whose pictures were presented alongside statements indicating which team they were in. Without the shirts, the pattern of errors was clear: participants automatically categorised the individuals by their race (in this case: African American or Euro American). But with the coloured shirts, this automatic categorisation didn’t happen: people’s errors revealed that team membership had become the dominant category, not the race of the players.

It’s important to understand that the memory test was both a surprise – participants didn’t know it was coming up – and an unobtrusive measure of racial categorising. Participants couldn’t guess that the researchers were going to make inferences about how they categorised people in the pictures – so if they didn’t want to appear to perceive people on the basis of race, it wouldn’t be clear how they should change their behaviour to do this. Because of this we can assume we have a fairly direct measure of their real categorisation, unbiased by any desire to monitor how they appear.

So despite what dozens of experiments had appeared to show, this experiment created a situation where categorisation by race faded into the background. The explanation, according to the researchers, is that race is only important when it might indicate coalitional information – that is, whose team you are on. In situations where race isn’t correlated with coalition, it ceases to be important. This, they claim, makes sense from an evolutionary perspective. For most of our ancestors, age and gender would have been important predictors of another person’s behaviour, but race wouldn’t – since most people lived in areas with no differences as large as the ones we associate with “race” today (a concept, incidentally, which has little currency among human biologists).

Since the experiment was published, the response from social psychologists has been muted. But supporting evidence is beginning to be reported, suggesting that the finding will hold. It’s an unfortunate fact of human psychology that we are quick to lump people into groups, even on the slimmest evidence. And once we’ve identified a group, it also seems automatic to jump to conclusions about what they are like. But this experiment suggests that although perceiving groups on the basis of race might be easy, it is far from inevitable.

Why money won’t buy you happiness

Here’s my column for BBC Future from last week. It was originally titled ‘Why money can’t buy you happiness’, but I’ve just realised that it would be more appropriately titled if I used a “won’t” rather than a “can’t”. There’s a saying that people who think money can’t buy happiness don’t know where to shop. This column says, more or less, that knowing where to shop isn’t the problem, it’s shopping itself.

Hope a lottery win will make you happy forever? Think again: evidence suggests a big payout won’t make that much of a difference. Tom Stafford explains why.

 

Think a lottery win would make you happy forever? Many of us do, including a US shopkeeper who just scooped $338 million in the Powerball lottery – the fourth largest prize in the game’s history. Before the last Powerball jackpot in the United States, tickets were being snapped up at a rate of around 130,000 a minute. But before you place all your hopes and dreams on another ticket, here’s something you should know. All the evidence suggests a big payout won’t make that much of a difference in the end.

Winning the lottery isn’t a ticket to true happiness, however enticing it might be to imagine never working again and being able to afford anything you want. One study famously found that people who had big wins on the lottery ended up no happier than those who had bought tickets but didn’t win. It seems that as long as you can afford to avoid the basic miseries of life, having loads of spare cash doesn’t make you very much happier than having very little.

One way of accounting for this is to assume that lottery winners get used to their new level of wealth, and simply adjust back to a baseline level of happiness – something called the “hedonic treadmill”. Another explanation is that our happiness depends on how we feel relative to our peers. If you win the lottery you may feel richer than your neighbours, and think that moving to a mansion in a new neighbourhood would make you happy, but then you look out of the window and realise that all your new friends live in bigger mansions.

Both of these phenomena undoubtedly play a role, but the deeper mystery is why we’re so bad at knowing what will give us true satisfaction in the first place. You might think we should be able to predict this, even if it isn’t straightforward. Lottery winners could take account of hedonic treadmill and social comparison effects when they spend their money. So, why don’t they, in short, spend their winnings in ways that buy happiness?

Picking up points

Part of the problem is that happiness isn’t a quality like height, weight or income that can be easily measured and given a number (whatever psychologists try and pretend). Happiness is a complex, nebulous state that is fed by transient simple pleasures, as well as the more sustained rewards of activities that only make sense from a perspective of years or decades. So, perhaps it isn’t surprising that we sometimes have trouble acting in a way that will bring us the most happiness. Imperfect memories and imaginations mean that our moment-to-moment choices don’t always reflect our long-term interests.

It even seems like the very act of trying to measure it can distract us from what might make us most happy. An important study by Christopher Hsee of the University of Chicago’s business school and colleagues showed how this could happen.

Hsee’s study was based around a simple choice: participants were offered the option of working at a 6-minute task for a gallon of vanilla ice cream reward, or a 7-minute task for a gallon of pistachio ice cream. Under normal conditions, less than 30% of people chose the 7-minute task – mainly those who liked pistachio ice cream more than vanilla. For happiness scholars, this isn’t hard to interpret – those who preferred pistachio ice cream had enough motivation to choose the longer task. But the experiment had a vital extra comparison. Another group of participants were offered the same choice, but with an intervening points system: the choice was between working for 6 minutes to earn 60 points, or 7 minutes to earn 100 points. With 50-99 points, participants were told they could receive a gallon of vanilla ice cream. For 100 points they could receive a gallon of pistachio ice cream. Although the actions and the effects are the same, introducing the points system dramatically affected the choices people made. Now, the majority chose the longer task and earned the 100 points, which they could spend on the pistachio reward – even though the same proportion (about 70%) still said they preferred vanilla.

Based on this and other experiments, Hsee concluded that participants were maximising their points at the expense of maximising their happiness. The points are just a medium – something that allows us to get the thing that will create enjoyment. But because the points are so easy to measure and compare – 100 is obviously much more than 60 – this overshadows our knowledge of what kind of ice cream we enjoy most.

So next time you are buying a lottery ticket because of the amount it is paying out, or choosing wine by looking at the price, or comparing jobs by looking at the salaries, you might do well to remember to think hard about how much the bet, wine, or job will really promote your happiness, rather than simply relying on the numbers to do the comparison. Money doesn’t buy you happiness, and part of the reason for that might be that money itself distracts us from what we really enjoy.

 

When your actions contradict your beliefs

Last week’s BBC Future column. The original is here. Classic research, digested!

If at first you don’t succeed, lower your standards. And if you find yourself acting out of line with your beliefs, change them. This sounds like motivational advice from one of the more cynical self-help books, or perhaps a Groucho Marx line (“Those are my principles, and if you don’t like them… well, I have others…”), but in fact it is a caricature of one of the most famous theories in social psychology.

Leon Festinger’s Dissonance Theory is an account of how our beliefs rub up against each other, an attempt at a sort of ecology of mind. Dissonance Theory offers an explanation of topics as diverse as why oil company executives might not believe in climate change, why army units have brutal initiation ceremonies, and why famous books might actually be boring.

The classic study on dissonance theory was published by Festinger and James Carlsmith in 1959. You can find a copy thanks to the Classics in the History of Psychology archive. I really recommend reading the full thing. Not only is it short, but it is full of enjoyable asides. Back in the day psychology research was a lot more fun to write up.

Festinger and Carlsmith were interested in testing what happened when people acted out of line with their beliefs. To do this, they made their participants spend an hour doing two excruciatingly boring tasks. The first task was filling a tray with spools, emptying it, then filling it again (and so on). The second was turning 48 small pegs a quarter-turn clockwise, and then, once that was finished, going back to the beginning and doing another quarter-turn for each peg (and so on). Only after this tedium, and at the point at which the participants believed the experiment was over, did the real study get going. The experimenter said that they needed someone to fill in at the last minute and explain the tasks to the next subject. Would they mind? And also, could they make the points that “It was very enjoyable”, “I had a lot of fun”, “I enjoyed myself”, “It was very interesting”, “It was intriguing”, and “It was exciting”?

Of course the “experiment” was none of these things. But, being good people, with some pleading if necessary, they all agreed to explain the experiment to the next participant and make these points. The next participant was, of course, a confederate of the experimenter. We’re not told much about her, except that she was an undergraduate specifically hired for the role. The fact that all 71 participants in the experiment were male, and that one of the 71 had to be excluded from the final analysis because he demanded her phone number so he could explain things further, suggests that Festinger and Carlsmith weren’t above ensuring that there were some extra motivational factors in the mix.

Money talks

For their trouble, the participants were paid $1, $20, or nothing. After explaining the task the original participants answered some questions about how they really felt about the experiment. At the time, many psychologists would have predicted that the group paid the most would be affected the most – if our feelings are shaped by rewards, the people paid $20 should be the ones who said they enjoyed it the most.

In fact, people paid $20 tended to feel the same about the experiment as the people paid nothing. But something strange happened with the people paid $1. These participants were more likely to say they really did find the experiment enjoyable. They judged the experiment as more important scientifically, and had the highest desire to participate in future similar experiments. Which is weird, since nobody should really want to spend another hour doing mundane, repetitive tasks.

Festinger’s Dissonance theory explains the result. The “Dissonance” is between the actions of the participants and their beliefs about themselves. Here they are, nice guys, lying to an innocent woman. Admittedly there are lots of other social forces at work – obligation, authority, even attraction. Festinger’s interpretation is that these things may play a role in how the participants act, but they can’t be explicitly relied upon as reasons for acting. So there is a tension between their belief that they are a nice person and the knowledge of how they acted. This is where the cash payment comes in. People paid $20 have an easy rationalisation to hand. “Sure, I lied”, they can say to themselves, “but I did it for $20”. The men who got paid the smaller amount, $1, can’t do this. Giving the money as a reason would make them look cheap, as well as mean. Instead, the story goes, they adjust their beliefs to be in line with how they acted. “Sure, the experiment was kind of interesting, just like I told that girl”, “It was fun, I wouldn’t mind being in her position” and so on.

So this is cognitive dissonance at work. Normally it should be a totally healthy process – after all, who could object to people being motivated to reduce contradictions in their beliefs? Philosophers even make a profession out of it. But in circumstances where some of our actions or our beliefs exist for reasons which are too complex, too shameful, or too nebulous to articulate, it can lead us to change perfectly valid beliefs, such as how boring and pointless a task was.

Fans of cognitive dissonance will tell you that this is why people forced to defend a particular position – say, because it is their job – are likely to end up believing it. It can also suggest a reason why military services, high school sports teams and college societies have bizarre and punishing initiation rituals. If you’ve been through the ritual, dissonance theory predicts, you’re much more likely to believe the group is a valuable one to be a part of (the initiation hurt, and you’re not a fool, so it must have been worth it, right?).

For me, I think dissonance theory explains why some really long books have such good reputations, despite the fact that they may be as repetitive and pointless as Festinger’s peg task. Get to the end of a three-volume, several thousand page, conceptual novel and you’re faced with a choice: either you wasted your time and money, and you feel a bit of a fool; or the novel is brilliant and you are an insightful consumer of literature. Dissonance theory pushes you towards the latter interpretation, and so swells the crowd of people praising a novel that would be panned if it was 150 pages long.

Changing your beliefs to be in line with how you acted may not be the most principled approach. But it is certainly easier than changing how you acted.

BBC Column: Why cyclists enrage car drivers

Here is my latest BBC Future column. The original is here. This one proved to be more than usually controversial, not least because of some poorly chosen phrasing from yours truly. This is an updated version which makes what I’m trying to say clearer. If you think that I hate cyclists, that my argument relies on the facts of actual law breaking (by cyclists or drivers), or that I am making a claim about the way the world ought to be (rather than how people see it), then please check out this clarification I published on my personal blog after a few days of feedback from the column. One thing the experience has convinced me of is that cycling is a very emotional issue, and one people often interpret in very moral terms.

It’s not simply because they are annoying, argues Tom Stafford, it’s because they trigger a deep-seated rage within us by breaking the moral order of the road.

 

Something about cyclists seems to provoke fury in other road users. If you doubt this, try a search for the word “cyclist” on Twitter. As I write this one of the latest tweets is this: “Had enough of cyclists today! Just wanna ram them with my car.” This kind of sentiment would get people locked up if directed against an ethnic minority or religion, but it seems to be fair game, in many people’s minds, when directed against cyclists. Why all the rage?

I’ve got a theory, of course. It’s not because cyclists are annoying. It isn’t even because we have a selective memory for that one stand-out annoying cyclist over the hundreds of boring, non-annoying ones (although that probably is a factor). No, my theory is that motorists hate cyclists because they offend the moral order.

Driving is a very moral activity – there are rules of the road, both legal and informal, and there are good and bad drivers. The whole intricate dance of the rush-hour junction only works because everybody knows the rules and follows them: keeping in lane; indicating properly; first her turn, now mine, now yours. Then along come cyclists, innocently following what they see as the rules of the road, but doing things that drivers aren’t allowed to: overtaking queues of cars, moving at well below the speed limit or undertaking on the inside.

You could argue that driving is like so much of social life: it’s a game of coordination where we have to rely on each other to do the right thing. And like all games, there’s an incentive to cheat. If everyone else is taking their turn, you can jump the queue. If everyone else is paying their taxes you can dodge them, and you’ll still get all the benefits of roads and police.

In economics and evolution this is known as the “free rider problem”: if you create a common benefit – like orderly roads or tax-funded public services – what’s to stop some people reaping the benefit without paying their dues? The free rider problem creates a paradox for those who study evolution, because in a world of selfish genes it appears to make cooperation unlikely. Even if a bunch of selfish individuals (or genes) recognise the benefit of coming together to co-operate with each other, once the collective good has been created it is rational, in a sense, for everyone to start trying to freeload off the collective. This makes any cooperation prone to collapse. In small societies you can rely on cooperating with your friends, or kin, but as a society grows the problem of free-riding looms larger and larger.

Social collapse

Humans seem to have evolved one way of enforcing order onto potentially chaotic social arrangements. This is known as “altruistic punishment”, a term used by Ernst Fehr and Simon Gachter in a landmark paper published in 2002 [4]. An altruistic punishment is a punishment that costs you as an individual, but doesn’t bring any direct benefit. As an example, imagine I’m at a football match and I see someone climb in without buying a ticket. I could sit and enjoy the game (at no cost to myself), or I could try to find security to have the guy thrown out (at the cost of missing some of the game). That would be altruistic punishment.

Altruistic punishment, Fehr and Gachter reasoned, might just be the spark that makes groups of unrelated strangers co-operate. To test this they created a co-operation game played by constantly shifting groups of volunteers, who never met – each played the game from a computer in a private booth. The volunteers played for real money, which they knew they would take away at the end of the experiment. On each round of the game each player received 20 credits, and could choose to contribute up to this amount to a group project. After everyone had chipped in (or not), everybody (regardless of investment) got 40% of the collective pot.

Under the rules of the game, the best collective outcome would be if everyone put in all their credits, and then each player would get back more than they put in. But the best outcome for each individual was to free ride – to keep their original 20 credits, and also get the 40% of what everybody else put in. Of course, if everybody did this then that would be 40% of nothing.
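To make the incentives concrete, here is a minimal sketch of the payoff arithmetic as described above. The 20-credit endowment and the 40% share of the pot come from the column; the group size of four is an assumption added purely for illustration.

```python
# A toy sketch of the public goods game payoffs described above.
# Assumption for illustration: groups of four players (the column doesn't give a group size).

ENDOWMENT = 20    # credits each player receives at the start of a round
POT_SHARE = 0.4   # everyone gets 40% of the collective pot, contributor or not

def payoff(my_contribution, others_contributions):
    """Credits a player ends the round with: whatever they kept, plus their share of the pot."""
    pot = my_contribution + sum(others_contributions)
    return (ENDOWMENT - my_contribution) + POT_SHARE * pot

print(payoff(20, [20, 20, 20]))  # everyone cooperates: 0 kept + 0.4 * 80 = 32 credits each
print(payoff(0,  [20, 20, 20]))  # lone free rider:    20 kept + 0.4 * 60 = 44 credits
print(payoff(0,  [0, 0, 0]))     # everyone free rides: 20 kept + 0.4 * 0 = 20 credits
```

Full cooperation leaves everyone better off than they started, but whatever the others do, an individual always does better by keeping their credits – which is exactly the tension the free rider problem describes.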

In this scenario what happened looked like a textbook case of the kind of social collapse the free rider problem warns of. On each successive turn of the game, the average amount contributed by players went down and down. Everybody realised that they could get the benefit of the collective pot without the cost of contributing. Even those who started out contributing a large proportion of their credits soon found out that not everybody else was doing the same. And once you see this it’s easy to stop chipping in yourself – nobody wants to be the sucker.

Rage against the machine

A simple addition to the rules reversed this collapse of co-operation, and that was the introduction of altruistic punishment. Fehr and Gachter allowed players to fine other players credits, at a cost to themselves. This is true altruistic punishment because the groups change after each round, and the players are anonymous. There may have been no direct benefit to fining other players, but players fined often and they fined hard – and, as you’d expect, they chose to fine other players who hadn’t chipped in on that round. The effect on cooperation was electric. With altruistic punishment, the average amount each player contributed rose and rose, instead of declining. The fine system allowed cooperation between groups of strangers who wouldn’t meet again, overcoming the challenge of the free rider problem.
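Continuing the sketch above: the column doesn’t give the exact fine mechanics, so the one-to-three cost-to-impact ratio below is an assumption for illustration, but it shows why punishment changes the free rider’s sums even though it always costs the punisher.

```python
# Extending the toy sketch with altruistic punishment.
# Assumed for illustration: each punishment point costs the punisher 1 credit
# and removes 3 credits from its target (not figures given in the column).

FINE_COST = 1
FINE_IMPACT = 3

def punished_payoff(base_payoff, points_given, points_received):
    """Round payoff after handing out and receiving punishment points."""
    return base_payoff - FINE_COST * points_given - FINE_IMPACT * points_received

# The lone free rider from the previous sketch earned 44 credits; five punishment
# points from angry co-players pulls that below the 32 earned by full cooperation.
print(punished_payoff(44, 0, 5))   # 44 - 15 = 29

# A cooperator who paid for those five points takes a hit too - that's what makes it altruistic.
print(punished_payoff(24, 5, 0))   # 24 - 5 = 19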

How does this relate to why motorists hate cyclists? The key is in a detail from that classic 2002 paper. Did the players in this game sit there calmly calculating the odds, running game theory scenarios in their heads and reasoning about cost/benefit ratios? No, that wasn’t the immediate reason people fined players. They dished out fines because they were mad as hell. Fehr and Gachter, like the good behavioural experimenters they are, made sure to measure exactly how mad that was, by asking players to rate their anger on a scale of one to seven in reaction to various scenarios. When players were confronted with a free-rider, almost everyone put themselves at the upper end of the anger scale. Fehr and Gachter describe these emotions as a “proximate mechanism”. This means that evolution has built into the human mind a hatred of free-riders and cheaters, which activates anger when we confront people acting like this – and it is this anger which prompts altruistic punishment. In this way, the emotion is evolution’s way of getting us to overcome our short-term self-interest and encourage collective social life.

So now we can see why there is an evolutionary pressure pushing motorists towards hatred of cyclists. Deep within the human psyche, fostered there because it helps us co-ordinate with strangers and so build the global society that is a hallmark of our species, is an anger at people who break the rules, who take the benefits without contributing to the cost. And cyclists trigger this anger when they use the roads but don’t follow the same rules as cars.

Now cyclists reading this might think “but the rules aren’t made for us – we’re more vulnerable, discriminated against, we shouldn’t have to follow the rules.” Perhaps true, but irrelevant when other road-users see you breaking rules they have to keep. Maybe the solution is to educate drivers that cyclists are playing an important role in a wider game of reducing traffic and pollution. Or maybe we should just all take it out on a more important class of free-riders, the tax-dodgers.

BBC Column: The psychology of the to-do list

My latest column for BBC Future. The original is here.

Your mind loves it when a plan comes together – the mere act of planning how to do something frees us from the burden of unfinished tasks.

If your daily schedule and email inbox are anything like mine, you’re often left in a state of paralysis by the sheer bulk of outstanding tasks weighing on your mind. In this respect, David Allen’s book Getting Things Done is a phenomenon. An international best-seller describing a personal productivity system known simply as GTD, it’s been hailed as a “new cult for the info age”. The heart of the system is a way of organising the things you have to do, based on Allen’s experience of working with busy people and helping them to make time for the stuff they really want to do.

Ten years after the book was first published in 2001, scientific research caught up with the productivity guru, and it revealed exactly why his system is so popular – and so effective.

The key principle behind GTD is writing down everything that you need to remember, and filing it effectively. This seemingly simple point involves far more than a filing cabinet and a to-do list. Allen’s system is like a to-do list in the same way a kitten is like a Bengal tiger.

“Filing effectively”, in Allen’s sense, means a system with three parts: an archive, where you store stuff you might need one day (and can forget until then); a current task list, in which everything is stored as an action; and a “tickler file” of 43 folders in which you organise reminders of things to do (43 folders because that’s one for each of the next 31 days plus one for each of the next 12 months).

The current task list is a special kind of to-do list because all the tasks are defined by the next action you need to take to progress them. This simple idea is remarkably effective in overcoming the kind of inertia that stops us resolving items on our lists. As an example, try picking a stubborn item from your own to-do list and redefining it until it becomes something that actually involves moving one of your limbs. Something necessary but unexciting like “Organise a new fence for the garden” becomes “ring Marcus and ask who fixed his fence”. Or, even better, with further specifics on how to move your fingers: “dial 2 626 81 19 and ask Marcus who fixed his fence”.

Breaking each task down into its individual actions allows you to convert your work into things you can either physically do, or forget about, happy in the knowledge that it is in the system. Each day you pick up the folder for that day and either action the items in it, or defer them to another folder for a future day or month. Allen is fanatical about this – he wants people to make a complete system for self-management, something that will do the remembering and monitoring for you, so your mind is freed up.
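As a rough illustration of the mechanics – not Allen’s own specification – here is a toy version of the tickler file: 31 day folders plus 12 month folders, each item stored as a concrete next action, with today’s folder emptied so that everything in it must be actioned or re-filed. The class, method names and example tasks are invented for the sketch.

```python
from datetime import date, timedelta

class TicklerFile:
    """Toy sketch of a GTD-style tickler file: 31 day folders + 12 month folders = 43."""

    def __init__(self):
        self.day_folders = {day: [] for day in range(1, 32)}        # one per day of the month
        self.month_folders = {month: [] for month in range(1, 13)}  # one per month

    def file(self, next_action, when):
        """File a concrete next action in the folder for the date it should resurface."""
        if (when - date.today()).days < 31:
            self.day_folders[when.day].append(next_action)
        else:
            self.month_folders[when.month].append(next_action)

    def todays_items(self):
        """Empty today's folder: each item must now be actioned or re-filed for later."""
        today = date.today().day
        items, self.day_folders[today] = self.day_folders[today], []
        return items

tickler = TicklerFile()
tickler.file("Ring Marcus and ask who fixed his fence", date.today() + timedelta(days=1))
tickler.file("Review the garden fence quotes", date.today() + timedelta(days=60))
print(tickler.todays_items())  # nothing filed for today; tomorrow's folder holds the Marcus call
```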

So what’s the psychology that backs this up? Roy Baumeister and EJ Masicampo at Florida State University were interested in an old phenomenon called the Zeigarnik Effect, which is what psychologists call our mind’s tendency to get fixated on unfinished tasks and forget those we’ve completed. You can see the effect in action in a restaurant or bar – you can easily remember a drinks order, but then instantly forget it as soon as you’ve put the drinks down. I’ve mentioned this effect before when it comes to explaining the psychology behind Tetris.

A typical way to test for the Zeigarnik Effect is to measure if an unfulfilled goal interferes with the ability to carry out a subsequent task. Baumeister and Masicampo discovered that people did worse on a brainstorming task when they were prevented from finishing a simple warm-up task – because the warm-up task was stuck in their active memory. What Baumeister and Masicampo did next is the interesting thing; they allowed some people to make plans to finish the warm-up task. They weren’t allowed to finish it, just to make plans on how they’d finish it. Sure enough, those people allowed to make plans were freed from the distracting effect of leaving the warm-up task unfinished.

Back to the GTD system: its key insight is that your attention has a limited capacity – you can only fit so much in your mind at any one time. The GTD archive and reminder system acts as a plan for how you’ll do things, releasing the part of your attention that is struggling to hold each item on your to-do list in mind. Rather than remove things from our sight by doing them, Allen, and the research, suggest we merely need to have a good plan of when and how to do them. The mere act of planning how to finish something satisfies the itch that keeps uncompleted tasks in our memory.

Deeper into forensic bias

For the recent Observer article on forensic science and the psychological biases that affect it, I spoke to cognitive scientist Itiel Dror about his work.

I could only include some brief quotes from a more in-depth exchange, so for those wanting more on the psychology of forensic examining, here’s Dror on how evidence can be skewed and why these effects have been ignored for so long.

What do you think has been the turning point for the forensic science community in terms of beginning to accept the role of cognitive bias in interpretation of evidence?

I think it was the clear-cut scientific research with actual forensic examiners, using a within-subject experimental design, which showed that the *same* expert, examining the *same* evidence, can reach different conclusions when they are affected by bias. The problem was also demonstrated in fingerprinting and DNA, very robust forensic domains.

I think you are very right to say that they have ‘begun’. There has been a change – for example, the UK Forensic Regulator is now on board. But there is still a way to go.

Which area of forensic science do you think is currently most susceptible to cognitive bias?

It will be the forensic science areas in which, as I like to say, the human examiner is the main instrument of analysis. These are most of the forensic domains: fingerprinting, DNA, CCTV images, firearms, shoe and tire marks, document examination, and so on. When there is no instrument that says ‘match’ or ‘no-match’ and it is in the ‘eye of the beholder’ to make the judgement, then subjectivity comes in, and is open to cognitive bias.

Essentially, forensic areas in which there are no objective criteria: where it is the forensic expert who compares visual patterns and determines if they are ‘sufficiently similar’ or ‘sufficiently consistent’. For example, whether two fingerprints were made by the same finger, whether two bullets were fired from the same gun, whether two signatures were made by the same person. Such determinations are governed by a variety of cognitive processes.

The cognitive nature of subjectivity is that it can be influenced and biased by extraneous contextual information. Forensic scientists work within a variety of such influences: from knowing the nature and details of the crime to being indirectly pressurised by detectives; from seeing the ‘target’ to working within, and as part of, the police; from computer-generated meta-data to appearing in court within an adversarial criminal justice system; and so on. The contextual influences are many and they come in many forms, some of which are subtle. So, many – indeed most – of the forensic areas are vulnerable.

It seems there is a reluctance to change procedures to minimise cognitive bias. Where does the resistance come from?

There are still forensic examiners who think that they are immune to context and do not understand, let alone accept, the existence and danger of cognitive bias. They often confuse ‘bias’ (as in being racist, anti-Semitic, etc.) with cognitive bias, and this makes some of them think that it is an ethical issue. Forensic examiners rarely, if at all, receive training in this area, and on the rare occasions that they do, they get bad training from people who do not specialise in providing training about cognitive bias in forensics.

The forensic community – like the military, the police, and so on – is very hard to change; there is a strong culture within those organisations. It is especially hard to promote change when errors are not as apparent as in other domains. If the police shoot an innocent person, they very quickly know that they have made a mistake; if a surgeon amputates the wrong leg, they know very quickly that they have made a mistake. In contrast, in the forensic domain, in real criminal cases, we do not know the ground truth, and do not really know whether a mistake has happened or not. Only in very rare and special circumstances do errors surface (as in the Mayfield and McKie cases).

The courts have, for the most part, blindly accepted forensic evidence. So the examiners see no reason to change: if the courts accept their evidence, then that is that. This may be changing. The hope is that judges will become more aware of the danger of cognitive bias and not accept forensic conclusions that are tainted with bias.
 

Link to further reading from Itiel Dror.

A psychological bias in DNA testing

I’ve got a piece in today’s Observer about how psychological biases can affect DNA testing from crime scenes.

It seems counter-intuitive, but that’s largely because we’ve come to accept the idea that DNA is a sort of individual genetic ‘serial number’ that just needs to be ‘read off’ from a biological sample. The reality is far more complex.

Despite this, the psychological power of DNA evidence is huge and has misled several investigations that have privileged mistaken DNA results above everything else – including the case of a shadowy transsexual serial killer that led the German police astray.

The piece riffs on the work of psychologist Itiel Dror, who was the first to show that the identification of people by their fingerprints could be biased by extraneous information, and who has now found the same with certain types of DNA analysis.

More at the link below.
 

Link to Observer article on the psychology of forensic identification.