Irrational? Decisions and decision making in context

Nassim Nicholas Taleb, author of Fooled by Randomness:

Finally put my finger on what is wrong with the common belief in psychological findings that people “irrationally” overestimate tail probabilities, calling it a “bias”. Simply, these experimenters assume that people make a single decision in their lifetime! The entire field of psychology of decisions missed the point.

His argument seems to be that risks look different when viewed from a lifetime perspective, where you might make choices about the same risk again and again, rather than when they are considered as one-offs. What might be a mistake for a one-off risk could be a sensible strategy for the same risk repeated as part of a larger set.
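To get a feel for why the repeated-risk framing matters, here is a minimal worked sketch (my illustration, not Taleb's own calculation): a risk with a 1% chance of going wrong looks negligible as a one-off, but faced a hundred times over a lifetime the chance of being caught out at least once is closer to two in three.

```python
# Minimal sketch (illustrative numbers, not Taleb's analysis): how a small
# one-off risk compounds when the same gamble is faced repeatedly.
p_single = 0.01     # chance of the bad outcome on any single exposure
n_repeats = 100     # times the same risk is faced over a lifetime

# Chance of escaping every single time, assuming independent exposures
p_never = (1 - p_single) ** n_repeats

print(f"Chance of at least one bad outcome: {1 - p_never:.0%}")
# -> ~63%, so weighting a '1% risk' well above its face value can be a
#    sensible lifetime strategy rather than an irrational bias.
```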

He goes on to take a swipe at ‘Nudges’, the idea that you can base policies around various phenomena from the psychology of decision making. “Clearly”, he adds, “psychologists do not know how to use ‘probability'”.

This is maddeningly ignorant, but does have a grain of truth to it. The major part of the psychology of decision making is understanding why things that look like bias or error exist. If a phenomenon, such as overestimating low probability events, is pervasive, it must be for a reason. A choice that looks irrational when considered on its own might be the result of a sensible strategy when considered over a lifetime, or even over evolutionary time.

Some great research in decision making tries to go beyond simple bias phenomena and ask what underlying choice is being optimised by our cognitive architecture. This approach gives us the Simple Heuristics Which Make Us Smart of Gerd Gigerenzer (which Taleb definitely knows about since he was a visiting fellow in Gigerenzer’s lab), as well as work which shows that people estimate risks differently if they experience the outcomes rather than being told about them; work which shows that our perceptual-motor system (which is often characterised as an optimal decision maker) has the same amount of bias as our more cognitive decisions; and work which shows that other animals, with less cognitive/representational capacity, make analogues of many classic decision-making errors. This is where the interesting work in decision making is happening, and it all very much takes account of the wider context of individual decisions. So saying that the entire field missed the point seems… odd.

But the grain of truth in the accusation is that the psychology of decision making has been popularised in a way that focusses on one-off decisions. The nudges of behavioural economics tend to be dramatic examples of small interventions which have large effects in one-off measures, such as giving people smaller plates to make them eat less. The problem with these interventions is that even if they work in the lab, they tend not to work long-term outside the lab. People are often doing what they do for a reason – and if you don’t affect the reasons, the old behaviour reasserts itself as people simply adapt to any nudge you’ve introduced. Although the British government is noted for introducing a ‘Nudge Unit‘ to apply behavioural science in government policies, less well known is a House of Lords Science and Technology Committee report, ‘Behavioural Change’, which highlights the limitations of this approach (and is well worth reading to get an idea of the importance of ideas beyond ‘nudging’ in behavioural change).

Taleb is right that we need to drop the idea that biases in decision making automatically attest to our irrationality. As often as not they reflect a deeper rationality in how our minds deal with risk, choice and reward. What’s sad is that he doesn’t recognise how much work on how to better understand bias already exists.

How to formulate a good resolution

We could spend all year living healthier, more productive lives, so why do we only decide to make the change at the start of the year? BBC Future’s psychologist Tom Stafford explains.

Many of us will start 2016 with resolutions – to get fit, learn a new skill, eat differently. If we really want to do these things, why did we wait until an arbitrary date which marks nothing more important than a timekeeping convention? The answer tells us something important about the psychology of motivation, and about what popular theories of self-control miss out.

What we want isn’t straightforward. At bedtime you might want to get up early and go for a run, but when your alarm goes off you find you actually want a lie-in. When exam day comes around you might want to be the kind of person who spent the afternoons studying, but on each of those afternoons you instead wanted to hang out with your friends.

You could see these contradictions as failures of our self-control: impulses for temporary pleasures manage to somehow override our longer-term interests. One fashionable theory of self-control, proposed by Roy Baumeister at Florida State University, is the ‘ego-depletion’ account. This theory states that self-control is like a muscle: it can be exhausted in the short term, so every temptation you resist makes it more likely that you’ll yield to the next temptation, even if it is a temptation to do something entirely different.

Some lab experiments appear to support this limited-resource model of willpower. In one, for instance, people who had to resist the temptation to eat chocolates were subsequently less successful at solving difficult puzzles that required the willpower to muster enough concentration to complete them. Studies of court records, meanwhile, found that the more decisions a parole board judge makes without a meal break, the less lenient they become. Perhaps at the end of a long morning, the self-control necessary for a more deliberated judgement has sapped away, causing them to rely on a harsher “keep them locked up” policy.

A corollary of the ‘like a muscle’ theory is that in the long term, you can strengthen your willpower with practice. So, for example, Baumeister found that people who were assigned two weeks of trying to keep their back straight whenever possible showed improved willpower when asked back into the lab.

Yet the ‘ego-depletion’ theory has critics. My issue with it is that it reduces our willpower to something akin to oil in a tank. Not only does this seem too simplistic, but it sidesteps the core problem of self-control: who or what is controlling who or what? Why is it even the case that we can want both to yield to a temptation, and want to resist it at the same time?

Also, and more importantly, the theory doesn’t explain why we wait for New Year’s Day to begin exerting our self-control. If your willpower is a muscle, you should start building it up as soon as possible, rather than wait for an arbitrary date.

A battle of wills

Another explanation may answer these questions, although it isn’t as fashionable as ego-depletion. George Ainslie’s book ‘Breakdown of Will‘ puts forward a theory of the self and self-control which uses game theory to explain why we have trouble with our impulses, and why our attempts to control them take the form they do.

Ainslie’s account begins with the idea that we have, within us, a myriad of competing impulses, which exist on different time-scales: the you that wants to stay in bed five more minutes, the you that wants to start the day with a run, the you that wants to be fit for the half-marathon in April. Importantly, the relative power of these impulses changes as they get nearer in time: the early start wins against the lie-in the day before, but it is a different matter at 5am. Ainslie has a detailed account of why this is, and it has some important implications for our self-control.
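Ainslie’s detailed account rests on hyperbolic discounting: the value we put on a reward falls away steeply with its delay, so a smaller-sooner reward can overtake a larger-later one as the moment of temptation approaches. A minimal sketch, with made-up reward sizes and discount rate, shows the reversal:

```python
# Minimal sketch of the hyperbolic discounting behind Ainslie's account.
# The reward sizes and the discount rate k are invented for illustration;
# the point is the preference reversal, not the particular numbers.
def value(reward, delay_hours, k=1.0):
    """Hyperbolically discounted value: reward / (1 + k * delay)."""
    return reward / (1 + k * delay_hours)

LIE_IN = 10   # smaller pleasure, available at 5am
RUN = 15      # larger payoff (a good run, progress towards the race), ~8am

# Decided at bedtime (10pm): the lie-in is 7 hours away, the run 10 hours away
print(value(LIE_IN, 7), value(RUN, 10))   # ~1.25 vs ~1.36 -> the run wins

# The same choice revisited at 5am: the lie-in is immediate, the run 3 hours off
print(value(LIE_IN, 0), value(RUN, 3))    # 10.0 vs 3.75 -> the lie-in wins
```

On this picture, the resolutions and ‘clean lines’ discussed below are devices for keeping the longer-range valuation in charge at the moment the curves cross.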

According to this theory, our preferences are unstable and inconsistent, the product of a war between our competing impulses, good and bad, short and long-term. A New Year’s resolution could therefore be seen as an alliance between these competing motivations, and like any alliance, it can easily fall apart. Addictions are a good example, because the long-term goal (“not to be an alcoholic”) requires the coordination of many small goals (“not to have a drink at 4pm;” “not at 5pm;” “not at 6pm,” and so on), none of which is essential. You can have a drink at 4pm and still be a moderate drinker. You can even have another at 5pm, but somewhere along the line all these small choices add up to a failure to keep to the wider goal. Similarly, if you want to get fit in 2016, you don’t have to go for a jog on 1 January, or even on 2 January, but unless you start exercising on some particular day you will never meet your larger goal.

From Ainslie’s perspective willpower is a bargaining game played by the forces within ourselves, and like any conflict of interest, if the boundary between acceptable and unacceptable isn’t clearly defined then small infractions can quickly escalate. For this reason, Ainslie says, resolutions cluster around ‘clean lines’, sharp distinctions around which no quibble is brooked. The line between moderate and problem drinking isn’t clear (and is liable to be even less clear around your fourth glass), but the line between teetotal and drinker is crystal.

This is why advice on good habits is often of the form “Do X every day”, and why diets tend towards absolutes: “No gluten;” “No dessert;” “Fasting on Tuesdays and Thursdays”. We know that if we leave the interpretation open to doubt, although our intentions are good, we’ll undermine our resolutions when we’re under the influence of our more immediate impulses.

And, so, Ainslie gives us an answer to why our resolutions start on 1 January. The date is completely arbitrary, but it provides a clean line between our old and new selves.

The practical upshot of the theory is that if you make a resolution, you should formulate it so that at every point in time it is absolutely clear whether you are sticking to it or not. The clear lines are arbitrary, but they help the truce between our competing interests hold.

Good luck for your 2016 resolutions!

Web of illusion: how the internet affects our confidence in what we know

The internet can give us the illusion of knowledge, making us think we are smarter than we really are. Fortunately, there may be a cure for our arrogance, writes psychologist Tom Stafford.

The internet has a reputation for harbouring know-it-alls. Commenters on articles, bloggers, even your old school friends on Facebook all seem to swell with confidence in their understanding of exactly how the world works (and they are eager to share that understanding with everyone and anyone who will listen). Now, new research reveals that just having access to the world’s information can induce an illusion of overconfidence in our own wisdom. Fortunately the research also shares clues as to how that overconfidence can be corrected.

Specifically, we are looking at how the internet affects our thinking about what we know, a topic psychologists call metacognition. When you know you are boasting, you are being dishonest, but you haven’t made any actual error in estimating your ability. If you sincerely believe you know more than you do then you have made an error. The research suggests that an illusion of understanding may actually be incredibly common, and that this metacognitive error emerges in new ways in the age of the internet.

In a new paper, Matt Fisher of Yale University considers a particular type of thinking known as transactive memory, which is the idea that we rely on other people and other parts of the world – books, objects – to remember things for us. If you’ve ever left something you needed for work by the door the night before, then you’ve been using transactive memory.

Part of this phenomenon is the tendency to confuse what we really know in our personal memories with what we have easy access to: the knowledge that is readily available in the world, or with which we are merely familiar without actually understanding in depth. It can feel like we understand how a car works, the argument goes, when in fact we are merely familiar with making it work. I press the accelerator and the car goes forward, and I neglect to realise that I don’t really know how it goes forward.

Fisher and colleagues were interested in how this tendency interacts with the internet age. They asked people to provide answers to factual questions, such as “Why are there time zones?”. Half of the participants were instructed to look up the answers on the internet before answering; half were told not to look up the answers on the internet. Next, all participants were asked how confidently they could explain the answers to a second series of questions (separate, but also factual, questions such as “Why are cloudy nights warmer?” or “How is vinegar made?”).

Sure enough, people who had just been searching the internet for information were significantly more confident about their understanding of the second set of questions. Follow-up studies confirmed that these people really did think the knowledge was theirs: they were still more confident if asked to indicate their response on a scale representing different levels of understanding with pictures of brain-scan activity (a ploy that was meant to emphasise that the information was there, in their heads). The confidence effect even persisted when the control group were provided answer material and the internet-search group were instructed to search for a site containing the exact same answer material. Something about actively searching for information on the internet specifically generated an illusion that the knowledge was in the participants’ own heads.

If the feeling of controlling information generates overconfidence in our own wisdom, it might seem that the internet is an engine for turning us all into bores. Fortunately another study, also published this year, suggests a partial cure.

Amanda Ferguson of the University of Toronto and colleagues ran a similar study, except the set-up was in reverse: they asked participants to provide answers first and, if they didn’t know them, to search the internet afterwards for the correct information (in the control condition participants who said “I don’t know” were let off the hook and just moved on to the next question). In this set-up, people with access to the internet were actually less willing to give answers in the first place than people in the no-internet condition. For these guys, access to the internet shut them up, rather than encouraging them to claim that they knew it all. Looking more closely at their judgements, it seems the effect wasn’t simply that the fact-checking had undermined their confidence. Those who knew they could fall back on the web to check the correct answer didn’t report feeling less confident within themselves, yet they were still less likely to share the information and show off their knowledge.

So, putting people in a position where they could be fact-checked made them more cautious in their initial claims. The implication I draw from this is that one way of fighting a know-it-all, if you have the energy, is to let them know that they are going to be thoroughly checked on whether they are right or wrong. It might not stop them researching a long answer with the internet, but it should slow them down, and diminish the feeling that just because the internet knows some information, they do too.

It is frequently asked if the internet is changing how we think. The answer, this research shows, is that the internet is giving new fuel to the way we’ve always thought. It can be a cause of overconfidence, when we mistake the boundary between what we know and what is available to us over the web, and a cause of uncertainty, when we anticipate that we’ll be fact-checked using the web on the claims we make. Our tendencies to overestimate what we know, to use information that is readily available as a substitute for our own knowledge, and to worry about being caught out are all constants in how we think. The internet slots into this tangled cognitive ecosystem, from which endless new forms evolve.

This is my BBC Future column from earlier this week. The original is here

Conspiracy theory as character flaw

Philosophy professor Quassim Cassam has a piece in Aeon arguing that conspiracy theorists should be understood in terms of the intellectual vices. It is a dead-end, he says, to try to understand the reasons someone gives for believing a conspiracy theory. Consider someone called Oliver who believes that 9/11 was an inside job:

Usually, when philosophers try to explain why someone believes things (weird or otherwise), they focus on that person’s reasons rather than their character traits. On this view, the way to explain why Oliver believes that 9/11 was an inside job is to identify his reasons for believing this, and the person who is in the best position to tell you his reasons is Oliver. When you explain Oliver’s belief by giving his reasons, you are giving a ‘rationalising explanation’ of his belief.

The problem with this is that rationalising explanations take you only so far. If you ask Oliver why he believes 9/11 was an inside job he will, of course, be only too pleased to give you his reasons: it had to be an inside job, he insists, because aircraft impacts couldn’t have brought down the towers. He is wrong about that, but at any rate that’s his story and he is sticking to it. What he has done, in effect, is to explain one of his questionable beliefs by reference to another no less questionable belief.

So the problem is not their beliefs as such, but why the person came to have the whole set of (misguided) beliefs in the first place. The way to understand conspiracists is in terms of their intellectual character, Cassam argues, the vices and virtues that guide us as thinking beings.

A problem with this account is that – looking at the current evidence – character flaws don’t seem that strong a predictor of conspiracist beliefs. The contrast is with the factors that have demonstrable influence on people’s unusual beliefs. For example, we know that social influence and common cognitive biases have a large, and measurable, effect on what we believe. The evidence isn’t so good on how intellectual character traits such as closed/open-mindedness, skepticism/gullibility are constituted and might affect conspiracist beliefs. That could be because the personality/character trait approach is inherently limited, or just that there is more work to do. One thing is certain, whatever the intellectual vices are that lead to conspiracy theory beliefs, they are not uncommon. One study suggested that 50% of the public endorse at least one conspiracy theory.

Link : Bad Thinkers by Quassim Cassam

Paper on personality and conspiracy theories: Unanswered questions: A preliminary investigation of personality and individual difference predictors of 9/11 conspiracist beliefs

Paper on widespread endorsement of conspiracy theories: Conspiracy Theories and the Paranoid Style(s) of Mass Opinion

Previously on Mindhacks.com That’s what they want you to believe

And a side note: this view, that the problem with conspiracy theorists isn’t the beliefs as such, helps explain why throwing facts at them doesn’t help; it is better to highlight the fallacies in how they are thinking.

Downsides of being a convincing liar

People who take shortcuts can trick themselves into believing they are smarter than they are, says Tom Stafford, and it comes back to bite them.

Honesty may be the best policy, but lying has its merits – even when we are deceiving ourselves. Numerous studies have shown that those who are practised in the art of self-deception might be more successful in the spheres of sport and business. They might even be happier than people who are always true to themselves. But is there ever a downside to believing our own lies?

An ingenious study by Zoe Chance of Yale University tested the idea, by watching what happens when people cheat on tests.

Chance and colleagues ran experiments which involved asking students to answer IQ and general knowledge questions. Half the participants were given a copy of the test paper which had – apparently in error – been printed with the answers listed at the bottom. This meant they had to resist the temptation to check or improve their answers against the real answers as they went along.

Irresistible shortcut

As you’d expect, some of these participants couldn’t help but cheat. Collectively, the group that had access to the answers performed better on the tests than participants who didn’t – even though both groups of participants were selected at random from students at the same university, so were, on average, of similar ability.  (We can’t know for sure who was cheating – probably some of the people who had answers would have got high scores even without the answers – but it means that the average performance in the group was partly down to individual smarts, and partly down to having the answers at hand.)

The crucial question for Chance’s research was this: did people in the “cheater” group know that they’d been relying on the answers? Or did they attribute their success in the tests solely to their own intelligence?

The way the researchers tested this was to ask the students to predict how well they’d do on a follow-up test. They were allowed to quickly glance over the second test sheet so that they could see that it involved the same kind of questions – and, importantly, that no answers had mistakenly been printed at the bottom this time. The researchers reasoned that if the students who had cheated realised that cheating wasn’t an option the second time around, they should predict they wouldn’t do as well on this second test.

Not so. Self-deception won the day. The people who’d had access to the answers predicted, on average, that they’d get higher scores on the follow-up – equivalent to giving them something like a 10-point IQ boost. When tested, of course, they scored far lower.

The researchers ran another experiment to check that the effect was really due to the cheaters’ inflated belief in their own abilities. In this experiment, students were offered a cash reward for accurately predicting their scores on the second test. Sure enough, those who had been given the opportunity to cheat overestimated their ability and lost out – earning 20% less than the other students.

The implication is that people in Chance’s experiment – people very much like you and me – had tricked themselves into believing they were smarter than they were. There may be benefits from doing this – confidence, satisfaction, or more easily gaining the trust of others – but there are also certainly disadvantages. Whenever circumstances change and you need to accurately predict how well you’ll do, it can cost to believe you’re better than you are.

That self-deception has its costs has some interesting implications. Morally, most of us would say that self-deception is wrong. But aside from whether self-deception is undesirable, we should expect it to be present in all of us to some degree (because of the benefits), but to be limited as well (because of the costs).

Self-deception isn’t something that is always better in larger doses – there must be an amount of it for which the benefits outweigh the costs, most of the time. We’re probably all self-deceiving to some degree. The irony being, because it is self-deception, we can’t know how often.

This is my BBC Future article from last week. The original is here

The smart unconscious

We feel that we are in control when our brains figure out puzzles or read words, says Tom Stafford, but a new experiment shows just how much work is going on underneath the surface of our conscious minds.

It is a common misconception that we know our own minds. As I move around the world, walking and talking, I experience myself thinking thoughts. “What shall I have for lunch?”, I ask myself. Or I think, “I wonder why she did that?” and try and figure it out. It is natural to assume that this experience of myself is a complete report of my mind. It is natural, but wrong.

There’s an under-mind, all psychologists agree – an unconscious which does a lot of the heavy lifting in the process of thinking. If I ask myself what is the capital of France the answer just comes to mind – Paris! If I decide to wiggle my fingers, they move back and forth in a complex pattern that I didn’t consciously prepare, but which was delivered for my use by the unconscious.

The big debate in psychology is exactly what is done by the unconscious, and what requires conscious thought. Or to use the title of a notable paper on the topic, ‘Is the unconscious smart or dumb?‘ One popular view is that the unconscious can prepare simple stimulus-response actions, deliver basic facts, recognise objects and carry out practised movements. Complex cognition involving planning, logical reasoning and combining ideas, on the other hand, requires conscious thought.

A recent experiment by a team from Israel scores points against this position. Ran Hassin and colleagues used a neat visual trick called Continuous Flash Suppression to put information into participants’ minds without them becoming consciously aware of it. It might sound painful, but in reality it’s actually quite simple. The technique takes advantage of the fact that we have two eyes and our brain usually attempts to fuse the two resulting images into a single coherent view of the world. Continuous Flash Suppression uses light-bending glasses to show people different images in each eye. One eye gets a rapid succession of brightly coloured squares which are so distracting that when genuine information is presented to the other eye, the person is not immediately consciously aware of it. In fact, it can take several seconds for something that is in theory perfectly visible to reach awareness (unless you close one eye to cut out the flashing squares, then you can see the ‘suppressed’ image immediately).

Hassin’s key experiment involved presenting arithmetic questions unconsciously. The questions would be things like “9 – 3 – 4 = ” and they would be followed by the presentation, fully visible, of a target number that the participants were asked to read aloud as quickly as possible. The target number could either be the right answer to the arithmetic question (so, in this case, “2”) or a wrong answer (for instance, “1”). The amazing result is that participants were significantly quicker to read the target number if it was the right answer rather than a wrong one. This shows that the equation had been processed and solved by their minds – even though they had no conscious awareness of it – meaning they were primed to read the right answer quicker than the wrong one.

The result suggests that the unconscious mind has more sophisticated capacities than many have thought. Unlike other tests of non-conscious processing, this wasn’t an automatic response to a stimulus – it required a precise answer following the rules of arithmetic, which you might have assumed would only come with deliberation. The report calls the technique used “a game changer in the study of the unconscious”, arguing that “unconscious processes can perform every fundamental, basic-level function that conscious processes can perform”.

These are strong claims, and the authors acknowledge that there is much work to do as we start to explore the power and reach of our unconscious minds. Like icebergs, most of the operation of our minds remains out of sight. Experiments like this give a glimpse below the surface.

This is my BBC Future column from last week. The original is here

Anti-vax: wrong but not irrational


Since the uptick in outbreaks of measles in the US, those arguing for the right not to vaccinate their children have come under increasing scrutiny. There is no journal of “anti-vax psychology” reporting research on those who advocate what seems like a controversial, “anti-science” and dangerous position, but if there were, we could take a good guess at what the research reported therein would say.

Look at other groups who hold beliefs at odds with conventional scientific thought. Climate sceptics, for example. You might think that climate sceptics would be more ignorant of science than those who accept the consensus that humans are causing a global increase in temperatures. But you’d be wrong. The individuals with the highest degree of scientific literacy are not those most concerned about climate change; they are the group most divided over the issue. The most scientifically literate are also some of the strongest climate sceptics.

A driver of this is a process psychologists have called “biased assimilation” – we all regard new information in the light of what we already believe. In line with this, one study showed that climate sceptics rated newspaper editorials supporting the reality of climate change as less persuasive and less reliable than non-sceptics did. Some studies have even shown that people can react to information which is meant to persuade them out of their beliefs by becoming more hardline – the exact opposite of the persuasive intent.

For topics such as climate change or vaccine safety, this can mean that a little scientific education gives you more ways of disagreeing with new information that doesn’t fit your existing beliefs. So we shouldn’t expect anti-vaxxers to be easily converted by throwing scientific facts about vaccination at them. They are likely to have their own interpretation of the facts.

High trust, low expertise

Some of my own research has looked at who the public trusted to inform them about the risks from pollution. Our finding was that how expert a particular group of people was perceived to be – government, scientists or journalists, say – was a poor predictor of how much they were trusted on the issue. Instead, what was critical was how much they were perceived to have the public’s interests at heart. Groups of people who were perceived to want to act in line with our respondents’ best interests – such as friends and family – were highly trusted, even if their expertise on the issue of pollution was judged as poor.

By implication, we might expect anti-vaxxers to have friends who are also anti-vaxxers (and so reinforce their mistaken beliefs) and to correspondingly have a low belief that pro-vaccine messengers such as scientists, government agencies and journalists have their best interests at heart. The corollary is that no amount of information from these sources – and no matter how persuasive to you and me – will convert anti-vaxxers who have different beliefs about how trustworthy the medical establishment is.

Interestingly, research done by Brendan Nyhan has shown many anti-vaxxers are willing to drop mistaken beliefs about vaccines, but as they do so they also harden in their intentions not to get their kids vaccinated. This shows that the scientific beliefs of people who oppose vaccinations are only part of the issue – facts alone, even if believed, aren’t enough to change people’s views.

Reinforced memories

We know from research on persuasion that mistaken beliefs aren’t easily debunked. Not only is the biased assimilation effect at work here but also the fragility of memory – attempts at debunking myths can serve to reinforce the memory of the myth while the debunking gets forgotten.

The vaccination issue provides a sobering example of this. A single discredited study from 1998 claimed a link between autism and the MMR jab, fuelling the recent distrust of vaccines. No matter how many times we repeat that “the MMR vaccine doesn’t cause autism”, the link between the two is reinforced in people’s perceptions. To avoid reinforcing a myth, you need to provide a plausible alternative – the obvious one here is to replace the negative message “MMR vaccine doesn’t cause autism”, with a positive one. Perhaps “the MMR vaccine protects your child from dangerous diseases”.

Rational selfishness

There are other psychological factors at play in the decisions taken by individual parents not to vaccinate their children. One is the rational selfishness of avoiding risk, or even the discomfort of a momentary jab, by gambling that the herd immunity of everyone else will be enough to protect your child.

Another is our tendency to underplay rare events in our calculation about risks – ironically the very success of vaccination programmes makes the diseases they protect us against rare, meaning that most of us don’t have direct experience of the negative consequences of not vaccinating. Finally, we know that people feel differently about errors of action compared to errors of inaction, even if the consequences are the same.

Many who seek to persuade anti-vaxxers view the issue as a simple one of scientific education. Anti-vaxxers have mistaken the basic facts, the argument goes, so they need to be corrected. This is likely to be ineffective. Anti-vaxxers may be wrong, but don’t call them irrational.

Rather than lacking scientific facts, they lack a trust in the establishments which produce and disseminate science. If you meet an anti-vaxxer, you might have more luck persuading them by trying to explain how you think science works and why you’ve put your trust in what you’ve been told, rather than dismissing their beliefs as irrational.


This article was originally published on The Conversation.
Read the original article.