Downsides of being a convincing liar

People who take shortcuts can trick themselves into believing they are smarter than they are, says Tom Stafford, and it comes back to bite them.

Honesty may be the best policy, but lying has its merits – even when we are deceiving ourselves. Numerous studies have shown that those who are practised in the art of self-deception might be more successful in the spheres of sport and business. They might even be happier than people who are always true to themselves. But is there ever a downside to believing our own lies?

An ingenious study by Zoe Chance of Yale University tested the idea by watching what happens when people cheat on tests.

Chance and colleagues ran experiments which involved asking students to answer IQ and general knowledge questions. Half the participants were given a copy of the test paper which had – apparently in error – been printed with the answers listed at the bottom. This meant they had to resist the temptation to check or improve their answers against the real answers as they went along.

Irresistible shortcut

As you’d expect, some of these participants couldn’t help but cheat. Collectively, the group that had access to the answers performed better on the tests than participants who didn’t – even though both groups were selected at random from students at the same university, so were, on average, of similar ability. (We can’t know for sure who was cheating – probably some of the people who had the answers would have got high scores even without them – but it means the group’s better average performance was partly down to individual smarts and partly down to having the answers at hand.)

The crucial question for Chance’s research was this: did people in the “cheater” group know that they’d been relying on the answers? Or did they attribute their success in the tests solely to their own intelligence?

The way the researchers tested this was to ask the students to predict how well they’d do on a follow-up test. They were allowed to quickly glance over the second test sheet so that they could see that it involved the same kind of questions – and, importantly, that no answers had mistakenly been printed at the bottom this time. The researchers reasoned that if the students who had cheated realised that cheating wasn’t an option the second time around, they should predict they wouldn’t do as well on this second test.

Not so. Self-deception won the day. The people who’d had access to the answers predicted, on average, that they’d get higher scores on the follow-up – equivalent to giving them something like a 10-point IQ boost. When tested, of course, they scored far lower.

The researchers ran another experiment to check that the effect was really due to the cheaters’ inflated belief in their own abilities. In this experiment, students were offered a cash reward for accurately predicting their scores on the second test. Sure enough, those who had been given the opportunity to cheat overestimated their ability and lost out – earning 20% less than the other students.

The implication is that people in Chance’s experiment – people very much like you and me – had tricked themselves into believing they were smarter than they were. There may be benefits from doing this – confidence, satisfaction, or more easily gaining the trust of others – but there are also certainly disadvantages. Whenever circumstances change and you need to accurately predict how well you’ll do, it can cost to believe you’re better than you are.

That self-deception has its costs has some interesting implications. Morally, most of us would say that self-deception is wrong. But aside from whether self-deception is undesirable, we should expect it to be present in all of us to some degree (because of the benefits), but to be limited as well (because of the costs).

Self-deception isn’t something that is always better in larger doses – there must be an amount of it for which the benefits outweigh the costs, most of the time. We’re probably all self-deceiving to some degree. The irony is that, because it is self-deception, we can’t know how often.

This is my BBC Future article from last week. The original is here

The smart unconscious

We feel that we are in control when our brains figure out puzzles or read words, says Tom Stafford, but a new experiment shows just how much work is going on underneath the surface of our conscious minds.

It is a common misconception that we know our own minds. As I move around the world, walking and talking, I experience myself thinking thoughts. “What shall I have for lunch?”, I ask myself. Or I think, “I wonder why she did that?” and try and figure it out. It is natural to assume that this experience of myself is a complete report of my mind. It is natural, but wrong.

There’s an under-mind, all psychologists agree – an unconscious which does a lot of the heavy lifting in the process of thinking. If I ask myself what is the capital of France the answer just comes to mind – Paris! If I decide to wiggle my fingers, they move back and forth in a complex pattern that I didn’t consciously prepare, but which was delivered for my use by the unconscious.

The big debate in psychology is exactly what is done by the unconscious, and what requires conscious thought. Or to use the title of a notable paper on the topic, ‘Is the unconscious smart or dumb?’ One popular view is that the unconscious can prepare simple stimulus-response actions, deliver basic facts, recognise objects and carry out practised movements. Complex cognition involving planning, logical reasoning and combining ideas, on the other hand, requires conscious thought.

A recent experiment by a team from Israel scores points against this position. Ran Hassin and colleagues used a neat visual trick called Continuous Flash Suppression to put information into participants’ minds without them becoming consciously aware of it. It might sound painful, but in reality it’s actually quite simple. The technique takes advantage of the fact that we have two eyes and our brain usually attempts to fuse the two resulting images into a single coherent view of the world. Continuous Flash Suppression uses light-bending glasses to show people different images in each eye. One eye gets a rapid succession of brightly coloured squares which are so distracting that when genuine information is presented to the other eye, the person is not immediately consciously aware of it. In fact, it can take several seconds for something that is in theory perfectly visible to reach awareness (unless you close one eye to cut out the flashing squares, then you can see the ‘suppressed’ image immediately).

Hassin’s key experiment involved presenting arithmetic questions unconsciously. The questions would be things like “9 – 3 – 4 = ” and they would be followed by the presentation, fully visible, of a target number that the participants were asked to read aloud as quickly as possible. The target number could either be the right answer to the arithmetic question (so, in this case, “2”) or a wrong answer (for instance, “1”). The amazing result is that participants were significantly quicker to read the target number if it was the right answer rather than a wrong one. This shows that the equation had been processed and solved by their minds – even though they had no conscious awareness of it – meaning they were primed to read the right answer quicker than the wrong one.
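
To make the logic of that measurement concrete, here is a minimal sketch of how such a congruency effect could be quantified – this is not the authors’ analysis code, and the reaction times below are invented for illustration:

```python
import statistics

# Hypothetical reading times (ms) for the visible target numbers, split by
# whether the target matched the answer to the unconsciously presented
# equation (congruent) or not (incongruent). Invented values for illustration.
congruent_rts = [512, 498, 530, 505, 521, 489, 517]
incongruent_rts = [548, 561, 539, 570, 552, 544, 566]

mean_congruent = statistics.mean(congruent_rts)
mean_incongruent = statistics.mean(incongruent_rts)

# A positive difference means matching targets were read faster, which is the
# sign that the arithmetic was solved outside of awareness.
priming_effect_ms = mean_incongruent - mean_congruent
print(f"Congruent mean:   {mean_congruent:.1f} ms")
print(f"Incongruent mean: {mean_incongruent:.1f} ms")
print(f"Priming effect:   {priming_effect_ms:.1f} ms")
```

In the published study the comparison was of course made across many trials and participants, with the appropriate statistics; the sketch only shows the shape of the measure.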

The result suggests that the unconscious mind has more sophisticated capacities than many have thought. Unlike other tests of non-conscious processing, this wasn’t an automatic response to a stimulus – it required a precise answer following the rules of arithmetic, which you might have assumed would only come with deliberation. The report calls the technique used “a game changer in the study of the unconscious”, arguing that “unconscious processes can perform every fundamental, basic-level function that conscious processes can perform”.

These are strong claims, and the authors acknowledge that there is much work to do as we start to explore the power and reach of our unconscious minds. Like icebergs, most of the operation of our minds remains out of sight. Experiments like this give a glimpse below the surface.

This is my BBC Future column from last week. The original is here

Anti-vax: wrong but not irrational


Since the uptick in outbreaks of measles in the US, those arguing for the right not to vaccinate their children have come under increasing scrutiny. There is no journal of “anti-vax psychology” reporting research on those who advocate what seems like a controversial, “anti-science” and dangerous position, but if there were, we could take a good guess at what the research reported in it would say.

Look at other groups who hold beliefs at odds with conventional scientific thought – climate sceptics, for example. You might think that climate sceptics would be more ignorant of science than those who accept the consensus that humans are causing a global increase in temperatures. But you’d be wrong. The individuals with the highest degree of scientific literacy are not those most concerned about climate change; they are the group most divided over the issue. Some of the most scientifically literate are also among the strongest climate sceptics.

A driver of this is a process psychologists have called “biased assimilation” – we all regard new information in the light of what we already believe. In line with this, one study showed that climate sceptics rated newspaper editorials supporting the reality of climate change as less persuasive and less reliable than non-sceptics did. Some studies have even shown that people can react to information which is meant to persuade them out of their beliefs by becoming more hardline – the exact opposite of the persuasive intent.

For topics such as climate change or vaccine safety, this can mean that a little scientific education gives you more ways of disagreeing with new information that doesn’t fit your existing beliefs. So we shouldn’t expect anti-vaxxers to be easily converted by throwing scientific facts about vaccination at them. They are likely to have their own interpretation of the facts.

High trust, low expertise

Some of my own research has looked at who the public trusted to inform them about the risks from pollution. Our finding was that how expert a particular group of people was perceived to be – government, scientists or journalists, say – was a poor predictor of how much they were trusted on the issue. Instead, what was critical was how much they were perceived to have the public’s interests at heart. Groups of people who were perceived to want to act in line with our respondents’ best interests – such as friends and family – were highly trusted, even if their expertise on the issue of pollution was judged as poor.

By implication, we might expect anti-vaxxers to have friends who are also anti-vaxxers (and so reinforce their mistaken beliefs) and to correspondingly have a low belief that pro-vaccine messengers such as scientists, government agencies and journalists have their best interests at heart. The corollary is that no amount of information from these sources – and no matter how persuasive to you and me – will convert anti-vaxxers who have different beliefs about how trustworthy the medical establishment is.

Interestingly, research done by Brendan Nyhan has shown many anti-vaxxers are willing to drop mistaken beliefs about vaccines, but as they do so they also harden in their intentions not to get their kids vaccinated. This shows that the scientific beliefs of people who oppose vaccinations are only part of the issue – facts alone, even if believed, aren’t enough to change people’s views.

Reinforced memories

We know from research on persuasion that mistaken beliefs aren’t easily debunked. Not only is the biased assimilation effect at work here but also the fragility of memory – attempts at debunking myths can serve to reinforce the memory of the myth while the debunking gets forgotten.

The vaccination issue provides a sobering example of this. A single discredited study from 1998 claimed a link between autism and the MMR jab, fuelling the recent distrust of vaccines. No matter how many times we repeat that “the MMR vaccine doesn’t cause autism”, the link between the two is reinforced in people’s perceptions. To avoid reinforcing a myth, you need to provide a plausible alternative – the obvious one here is to replace the negative message “MMR vaccine doesn’t cause autism”, with a positive one. Perhaps “the MMR vaccine protects your child from dangerous diseases”.

Rational selfishness

There are other psychological factors at play in the decisions taken by individual parents not to vaccinate their children. One is the rational selfishness of avoiding risk, or even the discomfort of a momentary jab, by gambling that the herd immunity of everyone else will be enough to protect your child.

Another is our tendency to underplay rare events in our calculations about risk – ironically, the very success of vaccination programmes makes the diseases they protect us against rare, meaning that most of us don’t have direct experience of the negative consequences of not vaccinating. Finally, we know that people feel differently about errors of action compared to errors of inaction, even if the consequences are the same.

Many who seek to persuade anti-vaxxers view the issue as a simple one of scientific education. Anti-vaxxers have mistaken the basic facts, the argument goes, so they need to be corrected. This is likely to be ineffective. Anti-vaxxers may be wrong, but don’t call them irrational.

Rather than lacking scientific facts, they lack a trust in the establishments which produce and disseminate science. If you meet an anti-vaxxer, you might have more luck persuading them by trying to explain how you think science works and why you’ve put your trust in what you’ve been told, rather than dismissing their beliefs as irrational.

This article was originally published on The Conversation. Read the original article.

What gambling monkeys teach us about human rationality

We often make stupid choices when gambling, says Tom Stafford, but if you look at how monkeys act in the same situation, maybe there’s good reason.

When we gamble, something odd and seemingly irrational happens.

It’s called the ‘hot hand’ fallacy – a belief that your luck comes in streaks – and it can lose you a lot of money. Win on roulette and your chances of winning again aren’t any higher or lower – they stay exactly the same. But something in human psychology resists this fact, and people often place money on the premise that streaks of luck will continue – the so-called ‘hot hand’.

The opposite superstition is to bet that a streak has to end, in the false belief that independent events of chance must somehow even out. This is known as the gambler’s fallacy, and achieved notoriety at the Casino de Monte-Carlo on 18 August 1913. The ball fell on black 26 times in a row, and as the streak lengthened gamblers lost millions betting on red, believing that the chances changed with the length of the run of blacks.

Why do people act this way time and time again? We can discover intriguing insights, it seems, by recruiting monkeys and getting them to gamble too. If these animals make dumb choices like us, perhaps it could tell us more about ourselves.

First though, let’s look at what makes some games particularly likely to trigger these effects. Many games involve an element of skill, so it makes reasonable sense to bet, for instance, that a top striker like Lionel Messi is more likely to score a goal than a low-scoring defender.

Yet plenty of games contain randomness. For truly random events like roulette or the lottery, there is no force which makes clumps more or less likely to continue. Consider coin tosses: if you have tossed 10 heads in a row, your chance of throwing another head is still 50:50 (although, of course, before you had thrown any, the overall probability of throwing 10 in a row was minuscule – about 1 in 1,000).
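
Both halves of that sentence are easy to check directly. A minimal sketch (the streak length in the simulation is shortened to three heads purely to keep it quick; nothing in the logic depends on that choice):

```python
import random

# The prior probability of 10 heads in a row is (1/2)**10 – tiny.
p_ten_heads = 0.5 ** 10
print(f"P(10 heads in a row) = {p_ten_heads:.5f}  (about 1 in {round(1 / p_ten_heads)})")

# But once a streak has already happened, the next flip is still 50:50.
# Simulate many four-flip sequences, keep only those that start with three
# heads, and check how often the fourth flip is also heads.
random.seed(1)
fourth_flips = []
for _ in range(200_000):
    flips = [random.random() < 0.5 for _ in range(4)]
    if all(flips[:3]):
        fourth_flips.append(flips[3])

print(f"P(heads | three heads already) ≈ {sum(fourth_flips) / len(fourth_flips):.3f}")
```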

The hot hand and gambler’s fallacies both show that we tend to have an unreasonable faith in the non-randomness of the universe, as if we can’t quite believe that those coins (or roulette wheels, or playing cards) really do carry the same chances on each flip, spin or deal.

It’s a result that sometimes makes us sneer at the irrationality of human psychology. But that conclusion may need revising.

Cross-species gambling

An experiment reported by Tommy Blanchard of the University of Rochester in New York State, and colleagues, shows that monkeys playing a gambling game are swayed by the same hot hand bias as humans. Their experiments involved three monkeys controlling a computer display with their eye-movements – indicating their choices by shifting their gaze left or right. In the experiment they were given two options, only one of which delivered a reward. When the correct option was random – the same 50:50 chance as a coin flip – the monkeys still had a tendency to select the previously winning option, as if luck should continue, clumping together in streaks.

The reason the result is so interesting is that monkeys aren’t taught probability theory at school. They never learn theories of randomness, or pick up complex ideas about chance events. The monkeys’ choices must be based on some more primitive instincts about how the world works – they can’t be displaying irrational beliefs about probability, because they cannot have false beliefs, in the way humans can, about how luck works. Yet they show the same bias.

What’s going on, the researchers argue, is that it’s usually beneficial to behave in this manner. In most of life, chains of success or failure are linked for good reason – some days you really do have your eye in on your tennis serve, or everything goes wrong with your car on the same day because the mechanics of the parts are connected. In these cases, the events reflect an underlying reality, and one you can take advantage of to predict what happens next. An example that works well for the monkeys is food. Finding high-value morsels like ripe fruit is a chance event, but also one where each instance isn’t independent. If you find one fruit on a tree, the chances are that you’ll find more.
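
The argument can be made concrete with a small simulation sketch – the persistence probability and trial counts below are invented for illustration, not taken from the study. A ‘pick the previous winner’ strategy gains nothing when the rewarded option is drawn independently each time, but beats chance comfortably when winning options tend to persist, as ripe fruit on a tree does:

```python
import random

def winning_option(prev, clumpy, stay_prob=0.8):
    """Return which of two options (0 or 1) pays off on this trial."""
    if clumpy and prev is not None and random.random() < stay_prob:
        return prev               # streaky world: the winning option tends to persist
    return random.randint(0, 1)   # independent world: a fresh 50:50 draw

def win_stay_hit_rate(clumpy, trials=100_000):
    """Proportion of trials a 'choose the previous winner' strategy gets right."""
    prev_winner, hits = None, 0
    for _ in range(trials):
        winner = winning_option(prev_winner, clumpy)
        choice = prev_winner if prev_winner is not None else random.randint(0, 1)
        hits += (choice == winner)
        prev_winner = winner
    return hits / trials

random.seed(2)
print(f"Independent rewards: hit rate = {win_stay_hit_rate(clumpy=False):.3f}")  # ~0.50
print(f"Streaky rewards:     hit rate = {win_stay_hit_rate(clumpy=True):.3f}")   # well above 0.50
```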

The wider lesson for students of human nature is that we shouldn’t be quick to call behaviours irrational. Sure, belief in the hot hand might make you bet wrong on a series of coin flips, or worse, lose a pot of money. But it may be that over evolutionary time, thinking that luck comes in clumps turned out to be useful more often than it was harmful.

This is my BBC Future article from last week. The original is here

Is public opinion rational?

There is no shortage of misconceptions. The British public believes that for every £100 spent on benefits, £24 is claimed fraudulently (the actual figure is £0.70). We think that 31% of the population are immigrants (actually it’s 13%). One recent headline summed it up: “British public wrong about nearly everything” – and I’d bet good money that it isn’t just the British who are exceptionally misinformed.

This looks like a problem for democracy, which supposes a rational and informed public opinion. But perhaps it isn’t, at least according to a body of political science research neatly summarised by Will Jennings in his chapter of a new book, “Sex, lies & the ballot box: 50 things you need to know about British elections”. The book is a collection of accessible essays by British political scientists, and has a far wider scope than the subtitle implies: there are important morals here for anyone interested in collective human behaviour, not just those interested in elections.

Will’s chapter discusses the “public opinion as thermostat” theory. This, briefly, is that the public can be misinformed about absolute statistics, but we can still adjust our strength of feeling in an appropriate way. So, for example, we may be misled about the absolute unemployment rate, but can still discern whether unemployment is getting better or worse. There’s evidence to support this view, and the chapter includes this striking graph (reproduced with permission), showing the percentage of people saying “unemployment” is the most important issue facing the country against the actual unemployment rate. As you can see, public opinion tracks reality with remarkable accuracy:

Unemployment rate (source: ONS) and share of voters rating unemployment as the most important issue facing the country (source: Ipsos MORI), from Will Jennings’ chapter in “Sex, lies & the ballot box” (p.35)
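
The thermostat idea is easy to state as a calculation: what matters is not whether people know the true unemployment rate, but whether their concern rises and falls with it. A minimal sketch with made-up numbers rather than the ONS and Ipsos MORI series plotted above (statistics.correlation needs Python 3.10 or later):

```python
from statistics import correlation

# Hypothetical yearly series, invented for illustration – not the real data.
unemployment_rate = [5.2, 5.6, 7.8, 8.1, 7.9, 7.6, 6.4, 5.5]   # % of workforce
most_important_issue = [10, 14, 38, 44, 41, 36, 22, 15]        # % naming unemployment

# The thermostat claim is about tracking, not about knowing absolute levels,
# so the test is simply how strongly the two series move together (Pearson's r).
r = correlation(unemployment_rate, most_important_issue)
print(f"r = {r:.2f}")
```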

The topic of how a biased and misinformed public can make rational collective decisions is a fascinating one, which has received attention from disciplines ranging from psychology to political science. I’m looking forward to reading the rest of the book to get more evidence-based insights into how our psychological biases play out when decision making happens at the collective level of elections.

Full disclosure: Will is a friend of mine and sent me a free copy of the book.

Link: “Sex, lies & the ballot box” (edited by Philip Cowley & Robert Ford).

Link: Guardian data blog Five things we can learn from Sex, Lies and the Ballot Box

Implicit racism in academia

Subtle racism is prevalent in US and UK universities, according to a new paper commissioned by the Leadership Foundation for Higher Education and released last week, reports The Times Higher Education.

Black professors surveyed for the paper said they were treated differently from white colleagues – receiving less eye contact, for example, or fewer requests for their opinion – and that they felt excluded in meetings and experienced undermining of their work. “I have to downplay my achievements sometimes to be accepted,” said one academic, explaining that colleagues didn’t expect a black woman to be clever and articulate. The paper also found that senior managers often dismissed racist incidents as personality clashes or believed them to be exaggerated.

And all this in institutions where almost all staff would say they are not just “not racist”, and where many would say they are actively committed to fighting prejudice.

This seems like a clear case of the operation of implicit biases – where there is a contradiction between people’s egalitarian beliefs and their racist actions. Implicit biases are an industry in psychology, where tools such as the implicit association test (IAT) are used to measure them. The IAT is a fairly typical cognitive psychology-type study: individuals sit in front of a computer and the speed of their reactions to stimuli is measured (the stimuli are things like faces of people of different ethnicities, which is how we derive a measure of implicit prejudice).
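
To give a sense of how those reaction times become a bias score, here is a simplified sketch of the usual D-score logic; the full scoring algorithm also trims extreme latencies and penalises errors, which are left out here, and the reaction times are invented:

```python
import statistics

def iat_d_score(compatible_rts, incompatible_rts):
    """Simplified IAT D-score: the difference in mean reaction time between the
    incompatible and compatible pairing blocks, divided by the standard deviation
    of all trials pooled across both blocks. A positive score means slower
    responding when the pairing runs against the measured association."""
    mean_diff = statistics.mean(incompatible_rts) - statistics.mean(compatible_rts)
    pooled_sd = statistics.stdev(compatible_rts + incompatible_rts)
    return mean_diff / pooled_sd

# Invented reaction times (ms) for a single participant.
compatible = [620, 585, 640, 600, 615, 590, 630]
incompatible = [710, 742, 695, 760, 725, 705, 738]
print(f"D = {iat_d_score(compatible, incompatible):.2f}")
```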

The LFHE paper is a nice opportunity to connect this lab measure with the reality of implicit bias ‘in the wild’. In particular, along with some colleagues, I have been interested in exactly what an implicit bias is, psychologically.

Commonly, implicit biases are described as if they are unconscious or somehow outside of the awareness of those holding them. Unfortunately, this hasn’t been shown to be the case (in fact the opposite may be true – there’s some evidence that people can predict their IAT scores fairly accurately). Worse, the very idea of being unaware of a bias is badly specified. Does ‘unaware’ mean you aren’t aware of your racist feelings? Of your racist behaviour? Or that you aren’t aware that the feelings, in this case, have produced the behaviour?

The racist behaviours reported in the paper – avoiding eye-contact, assuming that discrimination is due to personalities and not race, etc – could all work at any or all of these levels of awareness. Although the behaviours are subtle, and contradict people’s expressed, anti-racist, opinions, the white academics could still be completely aware. They could know that black academics make them feel awkward or argumentative, and know that this is due to their race. Or they could be completely unaware. They could know that they don’t trust the opinions of certain academics, for example, but not realise that race is a factor in why they feel this way.

Just because the behaviour is subtle, or the psychological phenomenon is called ‘implicit’, doesn’t mean we can be certain about what people really know about it. The real value in the notion of implicit bias is that it reminds us that prejudice can exist in how we behave, not just in what we say and believe.

Full disclosure: I am funded by the Leverhulme Trust to work on a project looking at the philosophy and psychology of implicit bias. This post is cross-posted on the project blog. Run your own IAT with our open-source code: Open-IAT!

Why bad news dominates the headlines

Why are newspapers and TV broadcasts filled with disaster, corruption and incompetence? It may be because we’re drawn to depressing stories without realising, says psychologist Tom Stafford.

When you read the news, sometimes it can feel like the only things reported are terrible, depressing events. Why does the media concentrate on the bad things in life, rather than the good? And what might this depressing slant say about us, the audience?

It isn’t that these are the only things that happen. Perhaps journalists are drawn to reporting bad news because sudden disaster is more compelling than slow improvements. Or it could be that newsgatherers believe that cynical reports of corrupt politicians or unfortunate events make for simpler stories. But another strong possibility is that we, the readers or viewers, have trained journalists to focus on these things. Many people say that they would prefer good news: but is that actually true?

To explore this possibility, researchers Marc Trussler and Stuart Soroka set up an experiment, run at McGill University in Canada. They were dissatisfied with previous research on how people relate to the news – either the studies were uncontrolled (letting people browse news at home, for example, where you can’t even tell who is using the computer), or they were unrealistic (inviting them to select stories in the lab, where every participant knew their choices would be closely watched by the experimenter). So the team decided to try a new strategy: deception.


Trick question

Trussler and Soroka invited participants from their university to come to the lab for “a study of eye tracking”. The volunteers were first asked to select some stories about politics to read from a news website so that a camera could make some baseline eye-tracking measures. It was important, they were told, that they actually read the articles, so the right measurements could be prepared, but it didn’t matter what they read.

After this ‘preparation’ phase, they watched a short video (the main purpose of the experiment as far as the subjects were concerned, but it was in fact just a filler task), and then they answered questions on the kind of political news they would like to read.

The results of the experiment, as well as the stories that were read most, were somewhat depressing. Participants often chose stories with a negative tone – corruption, set-backs, hypocrisy and so on – rather than neutral or positive stories. People who were more interested in current affairs and politics were particularly likely to choose the bad news.

And yet when asked, these people said they preferred good news. On average, they said that the media was too focussed on negative stories.


Danger reaction

The researchers present their experiment as solid evidence of a so-called “negativity bias”, psychologists’ term for our collective hunger to hear and remember bad news.

It isn’t just schadenfreude, the theory goes, but that we’ve evolved to react quickly to potential threats. Bad news could be a signal that we need to change what we’re doing to avoid danger.

As you’d expect from this theory, there’s some evidence that people respond quicker to negative words. In lab experiments, flash the word “cancer”, “bomb” or “war” up at someone and they can hit a button in response quicker than if that word is “baby”, “smile” or “fun” (despite these pleasant words being slightly more common). We are also able to recognise negative words faster than positive words, and even tell that a word is going to be unpleasant before we can tell exactly what the word is going to be.

So is our vigilance for threats the only way to explain our predilection for bad news? Perhaps not.

There’s another interpretation that Trussler and Soroka put on their evidence: we pay attention to bad news because, on the whole, we think the world is rosier than it actually is. When it comes to our own lives, most of us believe we’re better than average, and, like the cliché says, we expect things to be all right in the end. This pleasant view of the world makes bad news all the more surprising and salient. It is only against a light background that the dark spots are highlighted.

So our attraction to bad news may be more complex than just journalistic cynicism or a hunger springing from the darkness within.

And that, on another bad news day, gives me a little bit of hope for humanity.