The two word games that trick almost everyone

Playing two classic schoolyard games can help us understand everything from sexism to the power of advertising.

There’s a word game we used to play at my school, a sort of trick, and it works like this. You tell someone they have to answer some questions as quickly as possible, and then you rush them with the following:

“What’s one plus four?!”
“What’s five plus two?!”
“What’s seven take away three?!”
“Name a vegetable?!”

Nine times out of 10 people answer the last question with “Carrot”.

Now I don’t think the magic is in the maths questions. Probably they just warm your respondent up to answering questions rapidly. What is happening is that, for most people, most of the time, in all sorts of circumstances, carrot is simply the first vegetable that comes to mind.

This seemingly banal fact reveals something about how our minds organise information. There are dozens of vegetables, and depending on your love of fresh food you might recognise a good proportion. If you had to list them you’d probably forget a few you know, easily reaching a dozen and then slowing down. And when you’re pressured to name just one as quickly as possible, you forget even more and just reach for the most obvious vegetable you can think of – and often that’s a carrot.

In cognitive science, we say the carrot is “prototypical” – for our idea of a vegetable, it occupies the centre of the web of associations which defines the concept. You can test prototypicality directly by timing how long it takes someone to answer whether the object in question belongs to a particular category. We take longer to answer “yes” if asked “is a penguin a bird?” than if asked “is a robin a bird?”, for instance. Even when we know penguins are birds, the idea of penguins takes longer to connect to the category “bird” than more typical species.

So, something about our experience of school dinners, being told they’ll help us see in the dark, the 37 million tons of carrots the world consumes each year, and cartoon characters from Bugs Bunny to Olaf the Snowman, has helped carrots work their way into our minds as the prime example of a vegetable.

The benefit to this system of mental organisation is that the ideas which are most likely to be associated are also the ones which spring to mind when you need them. If I ask you to imagine a costumed superhero, you know they have a cape, can probably fly and there’s definitely a star-shaped bubble when they punch someone. Prototypes organise our experience of the world, telling us what to expect, whether it is a superhero or a job interview. Life would be impossible without them.

The drawback is that the things which connect together because of familiarity aren’t always the ones which should connect together because of logic. Another game we used to play illustrates this point. You ask someone to play along again, and this time you ask them to say “Milk” 20 times as fast as they can. Then you challenge them to snap-respond to the question “What do cows drink?”. The fun is in seeing how many people answer “milk”. A surprising number do, allowing you to crow “Cows drink water, stupid!”. We drink milk, and the concept is closely connected to the idea of cows, so it is natural to accidentally pull out the answer “milk” when we’re fishing for the first thing that comes to mind in response to the ideas “drink” and “cow”.

Having a mind which supplies ready answers based on association is better than a mind which never supplies ready answers, but it can also produce blunders that are much more damaging than claiming cows drink milk. Every time we assume the doctor is a man and the nurse is a woman, we’re falling victim to the ready answers of our mental prototypes of those professions. Such prototypes, however mistaken, may also underlie our readiness to assume a man will be a better CEO, or that a philosophy professor won’t be a woman. If you let them guide your sense of how the world should be, rather than what it might be, you get into trouble pretty quickly.

Advertisers know the power of prototypes too, of course, which is why so much advertising appears to be style over substance. Their job isn’t to deliver a persuasive message, as such. They don’t want you to actively believe anything about their product being provably fun, tasty or healthy. Instead, they just want fun, taste or health to spring to mind when you think of their product (and the reverse). Worming their way into our mental associations is worth billions of dollars to the advertising industry, and it is based on a principle no more complicated than a childhood game which tries to trick you into saying “carrots”.

This is my BBC Future column from last week. The original is here. And, yes, I know that baby cows actually do drink milk.

The Devil’s Wager: when a wrong choice isn’t an error

The Devil looks you in the eyes and offers you a bet. Pick a number, and if you correctly guess the total he’ll roll on two dice, you get to keep your soul. If any other number comes up, you go to burn in eternal hellfire.

You call “7” and the Devil rolls the dice.

A two and a four, so the total is 6 — that’s bad news.

But let’s not dwell on the incandescent pain of your infinite and inescapable future, let’s think about your choice immediately before the dice were rolled.

Did you make a mistake? Was choosing “7” an error?

In one sense, obviously yes. You should have chosen 6.

But in another important sense you made the right choice. There are more combinations of dice outcomes that add to 7 than to any other number. The chances of winning if you bet 7 are higher than for any other single number.
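The combinatorics are easy to check for yourself. Here’s a minimal sketch that enumerates all 36 equally likely outcomes of two fair dice and counts how many produce each total:

```python
from collections import Counter

# Tally how many of the 36 equally likely (die1, die2) outcomes give each total
counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))

for total in range(2, 13):
    print(f"{total:2d}: {counts[total]}/36")

# Seven can be made six ways (1+6, 2+5, 3+4, 4+3, 5+2, 6+1),
# more than any other total, so it is the best single-number bet
best = max(counts, key=counts.get)
print("best bet:", best)
```

Betting 7 wins with probability 6/36, roughly 17%; the unlucky 6 the Devil rolled was only slightly less likely, at 5/36.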

The distinction is between a particular choice which happens to be wrong, and a choice strategy which is actually as good as you can do in the circumstances. If we replace the Devil’s Wager with the situations the world presents you, and your choice of number with your actions in response, then we have a handle on what psychologists mean when they talk about “cognitive error” or “bias”.

In psychology, the interesting errors are not decisions that just happen to turn out wrong. The interesting errors are decisions which people systematically get wrong, and get wrong in a particular way. As well as being predictable, these errors are interesting because they must be happening for a reason.

If you met a group of people who always bet “6” when gambling with the Devil, you’d be an incurious person if you assumed they were simply idiots. That judgement doesn’t lead anywhere. Instead, you’d want to find out what they believe that makes them think that’s the right choice strategy. Similarly, when psychologists find that people will pay more to keep something than they’d pay to obtain it, or are influenced by irrelevant information in their judgements of risk, there’s no profit in labelling this “irrationality” and leaving it at that. The interesting question is why these choices seem common to so many people. What is it about our minds that disposes us to make these same errors, to have in common the same choice strategies?

You can get traction on the shape of possible answers from the Devil’s Wager example. In this scenario, why would you bet “6” rather than “7”? Here are three possible general reasons, each explained in terms of the Devil’s Wager and illustrated with a real example.


1. Strategy is optimised for a different environment

If you expected the Devil to roll a single die loaded towards six, rather than a fair pair of dice, then calling “6” would be the best strategy, rather than a sub-optimal one.
Analogously, you can understand a psychological bias by understanding which environment it is intended to match. If I love sugary foods so much it makes me fat, part of the explanation may be that my sugar cravings evolved at a point in human history when starvation was a bigger risk than obesity.


2. Strategy is designed for a bundle of choices

If you know you’ll only get to pick one number to cover multiple bets, your best strategy is to pick a number which works best over all bets. So if the Devil is going to give you best of ten, and most of the time he’ll roll a single loaded die, and only some times roll two fair dice, then “6” will give you the best total score, even though it is less likely to win for the two-fair-dice wager.
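The best-of-ten scenario can be made concrete with an expected-wins calculation. The specific numbers below (how the die is loaded, and how often each wager type comes up) are illustrative assumptions of mine, not anything fixed by the story:

```python
from collections import Counter

# Probability of each total on two fair dice
pair_counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))

def win_prob(bet, wager):
    """Probability a single bet wins under a given wager type."""
    if wager == "loaded_die":
        # Assumed loading: the die shows 6 half the time, other faces equally
        if bet == 6:
            return 0.5
        return 0.1 if 1 <= bet <= 5 else 0.0  # a bet of 7 can never win here
    return pair_counts[bet] / 36  # two fair dice

def expected_wins(bet, n_bets=10, p_loaded=0.8):
    """Expected wins over n_bets when 80% of wagers use the loaded die."""
    p = (p_loaded * win_prob(bet, "loaded_die")
         + (1 - p_loaded) * win_prob(bet, "fair_pair"))
    return n_bets * p

print(expected_wins(6))  # about 4.3 wins: 6 pays off on both wager types
print(expected_wins(7))  # about 0.3 wins: 7 only ever wins on the fair pair
```

Under these assumptions, “6” dominates over the bundle even though it is the worse call on any individual two-fair-dice wager.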

In general, what looks like a poor choice may be the result of a strategy which treats a class of decisions as the same, and produces a good answer for that whole set. It is premature to call our decision making irrational if we look only at a single choice, which is the focus of the psychologist’s experiment, and not the related set of choices of which it is part.

An example from the literature may be the Mere Exposure Effect, where we favour something we’ve seen before merely because we’ve seen it before. In experiments, this preference looks truly arbitrary, because the experimenter decided which stimuli to expose us to and which to withhold, but in everyday life our familiarity with things tracks important variables such as how common, safe or sought-out things are. The Mere Exposure Effect may result from a feature of our minds that assumes, all other things being equal, that familiar things are preferable, and that’s probably a good general strategy.


3. Strategy uses a different cost/benefit analysis

Obviously, we’re assuming everyone wants to save their soul and avoid damnation. If you felt like you didn’t deserve heaven, harps and angel wings, or that hellfire sounds comfortably warm, then you might avoid making the bet-winning optimal choice.

By extension, we should only call a choice irrational or suboptimal if we know what people are trying to optimise. For example, it looks like people systematically under-explore new ways of doing things when learning skills. Is this reliance on habit, similar to confirmation bias when exploring competing hypotheses, irrational? Well, in the sense that it slows your learning down, it isn’t optimal. But if it exists because exploration carries a risk (you might get the action catastrophically wrong, you might hurt yourself), or because the important thing is to minimise the cost of acting (and habitual movements require less energy), then it may in fact be better than reckless exploration.


So if we see a perplexing behaviour, we might reach for one of these explanations: the behaviour is right for a different environment, for a wider set of choices, or under a different cost/benefit analysis. Only when we are confident that we understand the environment (whether evolutionary, or of training) which drives the behaviour, and the general class of choices of which it is part, and that we know which cost-benefit function the people making the choices are using, should we confidently say a choice is an error. Even then it is pretty unprofitable to call such behaviour irrational – we’d want to know why people make the error. Are they unable to calculate the right response? Are they mis-perceiving the situation?

A seemingly irrational behaviour is a good place to start investigating the psychology of decision making, but labelling behaviour irrational is a terrible place to stop. The topic really starts to get interesting when we start to ask why particular behaviours exist, and try to understand their rationality.


Previously/elsewhere:

Irrational? Decisions and decision making in context
My ebook: For argument’s sake: evidence that reason can change minds, which explores our over-enthusiasm for evidence that we’re irrational.

Irrational? Decisions and decision making in context

Nassim Nicholas Taleb, author of Fooled by Randomness:

Finally put my finger on what is wrong with the common belief in psychological findings that people “irrationally” overestimate tail probabilities, calling it a “bias”. Simply, these experimenters assume that people make a single decision in their lifetime! The entire field of psychology of decisions missed the point.

His argument seems to be that risks look different if you view them from a lifetime perspective, where you might make choices about the same risk again and again, rather than considering them as one-offs. What might be a mistake for a one-off risk could be a sensible strategy for the same risk repeated within a larger set.
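A toy calculation makes the lifetime point concrete. The numbers are illustrative assumptions: suppose a single exposure to some risk carries only a 1% chance of disaster, but a lifetime repeats that exposure a thousand times:

```python
# One-off view: a 1% chance of disaster looks negligible (illustrative numbers)
p_once = 0.01
exposures = 1000  # how many times a lifetime repeats the same choice

# Lifetime view: chance of at least one disaster across all exposures
p_lifetime = 1 - (1 - p_once) ** exposures
print(f"one-off risk: {p_once:.2%},  lifetime risk: {p_lifetime:.3%}")
```

Treating a 1% tail risk as if it were much bigger is not obviously a bias once you notice that repeated exposure drives the lifetime chance of disaster towards certainty.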

He goes on to take a swipe at ‘Nudges’, the idea that you can base policies around various phenomena from the psychology of decision making. “Clearly”, he adds, “psychologists do not know how to use ‘probability'”.

This is maddeningly ignorant, but does have a grain of truth to it. The major part of the psychology of decision making is understanding why things that look like bias or error exist. If a phenomenon, such as overestimating low probability events, is pervasive, it must be for a reason. A choice that looks irrational when considered on its own might be the result of a sensible strategy when considered over a lifetime, or even over evolutionary time.

Some great research in decision making tries to go beyond simple bias phenomena and ask what underlying choice is being optimised by our cognitive architecture. This approach gives us the Simple Heuristics Which Make Us Smart of Gerd Gigerenzer (which Taleb definitely knows about, since he was a visiting fellow in Gigerenzer’s lab), as well as work which shows that people estimate risks differently if they experience the outcomes rather than being told about them; work which shows that our perceptual-motor system (which is often characterised as an optimal decision maker) has the same amount of bias as our more cognitive decisions; and work which shows that other animals, with less cognitive/representational capacity, make analogues of many classic decision-making errors. This is where the interesting work in decision making is happening, and it very much takes account of the wider context of individual decisions. So saying that the entire field missed the point seems… odd.

But the grain of truth in the accusation is that the psychology of decision making has been popularised in a way that focusses on one-off decisions. The nudges of behavioural economics tend to be dramatic examples of small interventions which have large effects in one-off measures, such as how giving people smaller plates makes them eat less. The problem with these interventions is that even if they work in the lab, they tend not to work long-term outside the lab. People are often doing what they do for a reason – and if you don’t affect the reasons, the old behaviour reasserts itself as people simply adapt to any nudge you’ve introduced. Although the British government is noted for introducing a ‘Nudge Unit‘ to apply behavioural science in government policies, less well known is a House of Lords Science and Technology Committee report, ‘Behavioural Change’, which highlights the limitations of this approach (and is well worth reading to get an idea of the importance of ideas beyond ‘nudging’ in behavioural change).

Taleb is right that we need to drop the idea that biases in decision making automatically attest to our irrationality. As often as not they reflect a deeper rationality in how our minds deal with risk, choice and reward. What’s sad is that he doesn’t recognise how much work on how to better understand bias already exists.

How to formulate a good resolution

We could spend all year living healthier, more productive lives, so why do we only decide to make the change at the start of the year? BBC Future’s psychologist Tom Stafford explains.

Many of us will start 2016 with resolutions – to get fit, learn a new skill, eat differently. If we really want to do these things, why did we wait until an arbitrary date which marks nothing more important than a timekeeping convention? The answer tells us something important about the psychology of motivation, and about what popular theories of self-control miss out.

What we want isn’t straightforward. At bedtime you might want to get up early and go for a run, but when your alarm goes off you find you actually want a lie-in. When exam day comes around you might want to be the kind of person who spent the afternoons studying, but on each of those afternoons you instead wanted to hang out with your friends.

You could see these contradictions as failures of our self-control: impulses for temporary pleasures manage to somehow override our longer-term interests. One fashionable theory of self-control, proposed by Roy Baumeister at Florida State University, is the ‘ego-depletion’ account. This theory states that self-control is like a muscle: you can exhaust it in the short term, so that every temptation you resist makes it more likely that you’ll yield to the next temptation, even if it is a temptation to do something entirely different.

Some lab experiments appear to support this limited-resource model of willpower. For instance, people who had to resist the temptation to eat chocolates were subsequently less successful at persisting with difficult puzzles that demanded sustained concentration. Studies of court records, meanwhile, found that the more decisions a parole board judge makes without a meal break, the less lenient they become. Perhaps at the end of a long morning, the self-control necessary for a more deliberated judgement has sapped away, causing them to rely on a harsher “keep them locked up” policy.

A corollary of the ‘like a muscle’ theory is that in the long term, you can strengthen your willpower with practice. So, for example, Baumeister found that people who were assigned two weeks of trying to keep their back straight whenever possible showed improved willpower when asked back into the lab.

Yet the ‘ego-depletion’ theory has critics. My issue with it is that it reduces our willpower to something akin to oil in a tank. Not only does this seem too simplistic, but it sidesteps the core problem of self-control: who or what is controlling who or what? Why is it even the case that we can want both to yield to a temptation, and want to resist it at the same time?

More importantly, the theory also doesn’t explain why we wait for New Year’s Day to begin exerting our self-control. If your willpower is a muscle, you should start building it up as soon as possible, rather than waiting for an arbitrary date.

A battle of wills

Another explanation may answer these questions, although it isn’t as fashionable as ego-depletion. George Ainslie’s book ‘Breakdown of Will‘ puts forward a theory of the self and self-control which uses game theory to explain why we have trouble with our impulses, and why our attempts to control them take the form they do.

Ainslie’s account begins with the idea that we have, within us, a myriad of competing impulses, which exist on different time-scales: the you that wants to stay in bed five more minutes, the you that wants to start the day with a run, the you that wants to be fit for the half-marathon in April. Importantly, the relative power of these impulses changes as they get nearer in time: the early start wins against the lie-in the day before, but it is a different matter at 5am. Ainslie has a detailed account of why this is, and it has some important implications for our self-control.
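The mechanism behind this reversal is hyperbolic discounting: the value of a reward falls off with delay as 1/(1 + k·delay), which makes small-but-soon rewards overtake large-but-later ones as they draw near. Here is a minimal sketch; the reward sizes, delays and the discount rate k are all illustrative assumptions, not figures from Ainslie:

```python
def discounted_value(amount, delay, k=1.0):
    """Hyperbolic discounting: value falls off as 1 / (1 + k * delay)."""
    return amount / (1 + k * delay)

def preference(hours_until_choice):
    """Which impulse wins when the 5am choice is still some hours away?"""
    lie_in = discounted_value(5, hours_until_choice)    # small reward, available immediately
    run = discounted_value(10, hours_until_choice + 2)  # larger reward, two hours later
    return "run" if run > lie_in else "lie-in"

print(preference(12))  # the night before, the run looks better
print(preference(0))   # at 5am itself, the lie-in wins
```

The crossover, where the ordering of the two options flips as the moment of choice approaches, is exactly the instability of preference that Ainslie’s account is built on.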

According to this theory, our preferences are unstable and inconsistent, the product of a war between our competing impulses, good and bad, short and long-term. A New Year’s resolution could therefore be seen as an alliance between these competing motivations, and like any alliance, it can easily fall apart. Addictions are a good example, because the long-term goal (“not to be an alcoholic”) requires the coordination of many small goals (“not to have a drink at 4pm;” “not at 5pm;” “not at 6pm,” and so on), none of which is essential. You can have a drink at 4pm and still be a moderate drinker. You can even have another at 5pm, but somewhere along the line all these small choices add up to a failure to keep to the wider goal. Similarly, if you want to get fit in 2016, you don’t have to go for a jog on 1 January, or even on 2 January, but unless you start exercising on some particular day you will never meet your larger goal.

From Ainslie’s perspective willpower is a bargaining game played by the forces within ourselves, and like any conflict of interest, if the boundary between acceptable and unacceptable isn’t clearly defined then small infractions can quickly escalate. For this reason, Ainslie says, resolutions cluster around ‘clean lines’: sharp distinctions around which no quibble is brooked. The line between moderate and problem drinking isn’t clear (and is liable to be even less clear around your fourth glass), but the line between teetotal and drinker is crystal clear.

This is why advice on good habits is often of the form “Do X every day”, and why diets tend towards absolutes: “No gluten;” “No dessert;” “Fasting on Tuesdays and Thursdays”. We know that if we leave the interpretation open to doubt, then however good our intentions, we’ll undermine our resolutions when we’re under the influence of our more immediate impulses.

And, so, Ainslie gives us an answer to why our resolutions start on 1 January. The date is completely arbitrary, but it provides a clean line between our old and new selves.

The practical upshot of the theory is that if you make a resolution, you should formulate it so that at every point in time it is absolutely clear whether you are sticking to it or not. The clear lines are arbitrary, but they help the truce between our competing interests hold.

Good luck for your 2016 resolutions!

Web of illusion: how the internet affects our confidence in what we know

The internet can give us the illusion of knowledge, making us think we are smarter than we really are. Fortunately, there may be a cure for our arrogance, writes psychologist Tom Stafford.

The internet has a reputation for harbouring know-it-alls. Commenters on articles, bloggers, even your old school friends on Facebook all seem to swell with confidence in their understanding of exactly how the world works (and they are eager to share that understanding with everyone and anyone who will listen). Now, new research reveals that just having access to the world’s information can induce an illusion of overconfidence in our own wisdom. Fortunately the research also shares clues as to how that overconfidence can be corrected.

Specifically, we are looking at how the internet affects our thinking about what we know, a topic psychologists call metacognition. When you know you are boasting, you are being dishonest, but you haven’t made any actual error in estimating your ability. If you sincerely believe you know more than you do then you have made an error. The research suggests that an illusion of understanding may actually be incredibly common, and that this metacognitive error emerges in new ways in the age of the internet.

In a new paper, Matt Fisher of Yale University considers a particular type of thinking known as transactive memory: the idea that we rely on other people and other parts of the world – books, objects – to remember things for us. If you’ve ever left something you needed for work by the door the night before, then you’ve been using transactive memory.

Part of this phenomenon is the tendency to confuse what we really know in our personal memories with what we have easy access to: the knowledge that is readily available in the world, or with which we are merely familiar without actually understanding in depth. It can feel like we understand how a car works, the argument goes, when in fact we are merely familiar with making it work. I press the accelerator and the car goes forward, and I neglect to realise that I don’t really know how it goes forward.

Fisher and colleagues were interested in how this tendency interacts with the internet age. They asked people to provide answers to factual questions, such as “Why are there time zones?”. Half of the participants were instructed to look up the answers on the internet before answering; half were told not to look up the answers on the internet. Next, all participants were asked how confidently they could explain the answers to a second series of questions (separate, but also factual, questions such as “Why are cloudy nights warmer?” or “How is vinegar made?”).

Sure enough, people who had just been searching the internet for information were significantly more confident about their understanding of the second set of questions. Follow up studies confirmed that these people really did think the knowledge was theirs: they were still more confident if asked to indicate their response on a scale representing different levels of understanding with pictures of brain-scan activity (a ploy that was meant to emphasise that the information was there, in their heads). The confidence effect even persisted when the control group were provided answer material and the internet-search group were instructed to search for a site containing the exact same answer material. Something about actively searching for information on the internet specifically generated an illusion that the  knowledge was in the participants’ own heads.

If the feeling of controlling information generates overconfidence in our own wisdom, it might seem that the internet is an engine for turning us all into bores. Fortunately another study, also published this year, suggests a partial cure.

Amanda Ferguson of the University of Toronto and colleagues ran a similar study, except the set-up was in reverse: they asked participants to provide answers first and, if they didn’t know them, to search the internet afterwards for the correct information (in the control condition, participants who said “I don’t know” were let off the hook and just moved on to the next question). In this set-up, people with access to the internet were actually less willing to give answers in the first place than people in the no-internet condition. For these guys, access to the internet shut them up, rather than encouraging them to claim that they knew it all. Looking more closely at their judgements, it seems the effect wasn’t simply that the fact-checking had undermined their confidence. Those who knew they could fall back on the web to check the correct answer didn’t report feeling less confident within themselves, yet they were still less likely to share the information and show off their knowledge.

So, putting people in a position where they could be fact-checked made them more cautious in their initial claims. The implication I draw from this is that one way of fighting a know-it-all, if you have the energy, is to let them know that they are going to be thoroughly checked on whether they are right or wrong. It might not stop them researching a long answer with the internet, but it should slow them down, and diminish the feeling that just because the internet knows some information, they do too.

It is frequently asked whether the internet is changing how we think. The answer, this research shows, is that the internet is giving new fuel to the way we’ve always thought. It can be a cause of overconfidence, when we mistake the boundary between what we know and what is merely available to us over the web, and it can be a cause of uncertainty, when we anticipate that the claims we make will be fact-checked using the web. Our tendencies to overestimate what we know, to use information that is readily available as a substitute for our own knowledge, and to worry about being caught out are all constants in how we think. The internet slots into this tangled cognitive ecosystem, from which endless new forms evolve.

This is my BBC Future column from earlier this week. The original is here.

Conspiracy theory as character flaw

Philosophy professor Quassim Cassam has a piece in Aeon arguing that conspiracy theorists should be understood in terms of the intellectual vices. It is a dead-end, he says, to try to understand the reasons someone gives for believing a conspiracy theory. Consider someone called Oliver who believes that 9/11 was an inside job:

Usually, when philosophers try to explain why someone believes things (weird or otherwise), they focus on that person’s reasons rather than their character traits. On this view, the way to explain why Oliver believes that 9/11 was an inside job is to identify his reasons for believing this, and the person who is in the best position to tell you his reasons is Oliver. When you explain Oliver’s belief by giving his reasons, you are giving a ‘rationalising explanation’ of his belief.

The problem with this is that rationalising explanations take you only so far. If you ask Oliver why he believes 9/11 was an inside job he will, of course, be only too pleased to give you his reasons: it had to be an inside job, he insists, because aircraft impacts couldn’t have brought down the towers. He is wrong about that, but at any rate that’s his story and he is sticking to it. What he has done, in effect, is to explain one of his questionable beliefs by reference to another no less questionable belief.

So the problem is not their beliefs as such, but why the person came to have the whole set of (misguided) beliefs in the first place. The way to understand conspiracists is in terms of their intellectual character, Cassam argues: the vices and virtues that guide us as thinking beings.

A problem with this account is that – looking at the current evidence – character flaws don’t seem that strong a predictor of conspiracist beliefs. The contrast is with the factors that have demonstrable influence on people’s unusual beliefs. For example, we know that social influence and common cognitive biases have a large, and measurable, effect on what we believe. The evidence isn’t so good on how intellectual character traits such as closed/open-mindedness, skepticism/gullibility are constituted and might affect conspiracist beliefs. That could be because the personality/character trait approach is inherently limited, or just that there is more work to do. One thing is certain, whatever the intellectual vices are that lead to conspiracy theory beliefs, they are not uncommon. One study suggested that 50% of the public endorse at least one conspiracy theory.

Link : Bad Thinkers by Quassim Cassam

Paper on personality and conspiracy theories: Unanswered questions: A preliminary investigation of personality and individual difference predictors of 9/11 conspiracist beliefs

Paper on widespread endorsement of conspiracy theories: Conspiracy Theories and the Paranoid Style(s) of Mass Opinion

Previously on Mindhacks.com That’s what they want you to believe

And as a side note, this view that the problem with conspiracy theorists isn’t their individual beliefs helps explain why throwing facts at them doesn’t help; it is better to highlight the fallacies in how they are thinking.

Downsides of being a convincing liar

People who take shortcuts can trick themselves into believing they are smarter than they are, says Tom Stafford, and it comes back to bite them.

Honesty may be the best policy, but lying has its merits – even when we are deceiving ourselves. Numerous studies have shown that those who are practised in the art of self-deception might be more successful in the spheres of sport and business. They might even be happier than people who are always true to themselves. But is there ever a downside to believing our own lies?

An ingenious study by Zoe Chance of Yale University tested the idea, by watching what happens when people cheat on tests.

Chance and colleagues ran experiments which involved asking students to answer IQ and general knowledge questions. Half the participants were given a copy of the test paper which had – apparently in error – been printed with the answers listed at the bottom. This meant they had to resist the temptation to check or improve their answers against the real answers as they went along.

Irresistible shortcut

As you’d expect, some of these participants couldn’t help but cheat. Collectively, the group that had access to the answers performed better on the tests than participants who didn’t – even though both groups of participants were selected at random from students at the same university, so were, on average, of similar ability.  (We can’t know for sure who was cheating – probably some of the people who had answers would have got high scores even without the answers – but it means that the average performance in the group was partly down to individual smarts, and partly down to having the answers at hand.)

The crucial question for Chance’s research was this: did people in the “cheater” group know that they’d been relying on the answers? Or did they attribute their success in the tests solely to their own intelligence?

The way the researchers tested this was to ask the students to predict how well they’d do on a follow-up test. They were allowed to quickly glance over the second test sheet so that they could see that it involved the same kind of questions – and, importantly, that no answers had mistakenly been printed at the bottom this time. The researchers reasoned that if the students who had cheated realised that cheating wasn’t an option the second time around, they should predict they wouldn’t do as well on this second test.

Not so. Self-deception won the day. The people who’d had access to the answers predicted, on average, that they’d get higher scores on the follow-up – equivalent to giving them something like a 10-point IQ boost. When tested, of course, they scored far lower.

The researchers ran another experiment to check that the effect was really due to the cheaters’ inflated belief in their own abilities. In this experiment, students were offered a cash reward for accurately predicting their scores on the second test. Sure enough, those who had been given the opportunity to cheat overestimated their ability and lost out – earning 20% less than the other students.

The implication is that people in Chance’s experiment – people very much like you and me – had tricked themselves into believing they were smarter than they were. There may be benefits from doing this – confidence, satisfaction, or more easily gaining the trust of others – but there are also certainly disadvantages. Whenever circumstances change and you need to accurately predict how well you’ll do, it can cost to believe you’re better than you are.

That self-deception has its costs has some interesting implications. Morally, most of us would say that self-deception is wrong. But aside from whether self-deception is undesirable, we should expect it to be present in all of us to some degree (because of the benefits), but to be limited as well (because of the costs).

Self-deception isn’t something that is always better in larger doses – there must be an amount of it for which the benefits outweigh the costs, most of the time. We’re probably all self-deceiving to some degree. The irony being that, because it is self-deception, we can’t know how often.

This is my BBC Future article from last week. The original is here.