I only read it for the articles

The Economist has a delightful article on how we self-justify our dubious behaviour after the event using spurious reasons. It turns out we often deceive ourselves into believing that our hastily constructed justifications are genuinely what motivated us.

The article riffs on a recent study by marketing researchers Zoë Chance and Michael Norton, who asked male students to choose between two specially created sports magazines.

One had more articles, while the other covered more sports. One of the two magazines was always a special swimsuit issue, featuring beautiful women in bikinis.

When the swimsuit issue was the magazine with more articles, the guys said they valued having more articles to read and chose that one. When the bikini babes appeared in the publication with more sports, they said wider coverage was more important and chose that issue.

This, as it turns out, is a common pattern in studies of this kind, and crucially, participants are usually completely unaware that they are post-justifying their choices.

This may not seem surprising: the joke about reading Playboy for the articles is so old Ms Chance and Mr Norton borrowed it for the title of their working paper. But it is the latest in a series of experiments exploring how people behave in ways they think might be frowned upon, and then explain how their motives are actually squeaky clean. Managers, for example, have been found to favour male applicants at hypothetical job interviews by claiming that they were searching for a candidate with either greater education or greater experience, depending on the attribute with which the man could trump the woman. In another experiment, people chose to watch a movie in a room already occupied by a person in a wheelchair when an adjoining room was showing the same film, but decamped when the movie in the next room was different (thus being able to claim that they were not avoiding the disabled person but just choosing a different film to watch). As Ms Chance puts it: “People will do what they want to do, and then find reasons to support it.”

Further compounding the problem, Ms Chance and Mr Norton’s subjects, like the subjects of similar experiments, showed little sign of being aware that they were merely using a socially acceptable justification to look at women in swimsuits. Mr Norton reports that when he informs participants that they were acting for different reasons than they claimed, they often react with disbelief.

I recommend reading the original study. It’s very accessibly written, and if you read nothing else, skip to page 9 (page 10 of the pdf file) and read the section entitled ‘Are People Aware That They are Justifying?’.

One of the key insights from psychology and one of the most practically applicable findings (particularly in clinical work) is that people’s explanations for why they do something are not necessarily a reliable guide to what influences their behaviour.

This also goes for ourselves: there are probably many areas of our lives where we justify our actions, good or bad, with comfortable, plausible fantasies.

 
Link to Economist piece ‘The conceit of deceit’.
Link to study text.

Empty glass, empty promise

There’s a neat study in the August edition of the Journal of Abnormal Psychology on how alcohol can make us feel fully committed to goals we know we have no chance of achieving.

Alcohol breeds empty goal commitments

J Abnorm Psychol. 2009 Aug;118(3):623-33.

Sevincer AT, Oettingen G.

According to alcohol-myopia theory (C. M. Steele & R. A. Josephs, 1990), alcohol leads individuals to disproportionally focus on the most salient aspects of a situation and to ignore peripheral information. The authors hypothesized that alcohol leads individuals to strongly commit to their goals without considering information about the probability of goal attainment. In Study 1, participants named their most important interpersonal goal, indicated their expectations of successfully attaining it, and then consumed either alcohol or a placebo. In contrast to participants who consumed a placebo, intoxicated participants felt strongly committed to their goals despite low expectations of attaining them. In Study 2, goal-directed actions were measured over time. Once sober again, intoxicated participants with low expectations did not follow up on their strong commitments. Apparently, when prospects are bleak, alcohol produces empty goal commitments, as commitments are not based on individuals’ expectations of attaining their goals and do not foster goal striving over time.

Link to PubMed entry for study.

Through gritted teeth

There’s an excellent article in the Boston Globe about ‘grit’ – the ability to stick with a task and persevere over a long period even when the going gets tough.

The article riffs on the work of psychologist Angela Duckworth who became interested in what attributes outside of intelligence contribute to success.

“I’d bet that there isn’t a single highly successful person who hasn’t depended on grit,” says Angela Duckworth, a psychologist at the University of Pennsylvania who helped pioneer the study of grit. “Nobody is talented enough to not have to work hard, and that’s what grit allows you to do.”…

After developing a survey to measure this narrowly defined trait – you can take the survey at www.gritstudy.com – Duckworth set out to test the relevance of grit. The initial evidence suggests that measurements of grit can often be just as predictive of success, if not more, than measurements of intelligence. For instance, in a 2007 study of 175 finalists in the Scripps National Spelling Bee, Duckworth found that her simple grit survey was better at predicting whether or not a child would make the final round than an IQ score.

As the article notes, this concept of grit is not just perseverance, it’s also about keeping relevant long-term goals in mind.

When psychologists have researched ‘goal-directed action’ in the past, they’ve almost always been thinking about the here and now. Reaching, immediate problem solving and short-term achievement.

This is slowly starting to change and some cognitive scientists are now attempting to understand the psychology and neuroscience of what we might call ‘life goals’.

There’s an interesting neuroimaging study in the latest issue of the Journal of Cognitive Neuroscience that looked at which brain areas are active when we’re thinking about future events that are not personally relevant, compared to those the individual holds as a personal goal.

The study extends previous work that indicates that our ability to imagine the future uses similar brain networks as our ability to remember the past, to the point where patients with dense amnesia have drastic impairments in picturing future events.

In the case of personal goals, it seems a similar network is involved, with the addition of the ventromedial prefrontal and posterior cingulate areas, regions previously linked to coding the emotional weight or value of an experience.

I’ve long suspected that 90% of real-world intelligence is motivation and a similar message seems to be emerging from the research.

Link to Boston Globe article ‘The truth about grit’.

Unique like everyone else

You’ve probably heard of the many cognitive bias studies where the vast majority of people rate themselves as among the best. Like the fact that 88% of college students rate themselves in the top 50% of drivers, 95% of college professors think they do above average work, and so on.

In light of this, I’ve just found a wonderfully ironic study that found that the majority of people rate themselves as less susceptible to cognitive biases than the average person.

It’s work from psychologist Emily Pronin who studies insight into our own judgements and how it affects our social understanding and perception of others.

In this study, the participants (psychology students, no less) were given a booklet explaining how cognitive biases work and describing eight of the most common ones. They were then asked to rate how susceptible they were to each of the biases and then how susceptible the ‘average American’ was.

Each rated themselves as less affected by biases than other people, instantly causing an irony loop in the fabric of space and time.

The study also had a fantastic follow-up that demonstrated just how strongly these cognitive biases affect our thinking. Even when they’re pointed out, we can’t escape them:

Participants in one follow-up study who showed the better-than-average bias insisted that their self-assessments were accurate and objective even after reading a description of how they could have been affected by the relevant bias.

Participants in a final study reported their peers’ self-serving attributions regarding test performance to be biased but their own similarly self-serving attributions to be free of bias.

Pronin calls this the ‘bias blind spot’ and you can read the full study online as a pdf file. Pronin also wrote an excellent 2008 review, also available as a pdf, on how these biases mean we see ourselves differently from how we see others, because we have direct access to our own minds but only observations of other people.

pdf of ‘bias blind spot’ study.
Link to DOI entry for same.

Out of control decision-making

I’ve just noticed that TED has recently put another talk online by the entertaining and thought-provoking behavioural economist Dan Ariely where he discusses why our feeling of being in total control of our decision-making may be false.

We previously mentioned a similarly interesting TED talk of his on the psychology of cheating, but this one is more concerned with what we might call decision-making inertia, where ‘default’ options or red herrings hold huge sway over our reasoning.

This is despite the fact that most people are completely unaware of how irrelevant information has such a profound impact on our choices.

Link to Dan Ariely TED talk on whether we’re in control of our choices.

The psychology of being scammed

I’m just reading a fascinating report on the psychology of why people fall for scams, commissioned by the UK government’s Office of Fair Trading and created by Exeter University’s psychology department.

It’s a 260-page monster, so it’s not exactly bedtime reading, but it draws on in-depth interviews with scam victims, examination of scam material, two questionnaire studies and a behavioural experiment.

Here are some of the punchlines, grabbed from the executive summary. The report concluded that the most successful scams involve:

Appeals to trust and authority: people tend to obey authorities so scammers use, and victims fall for, cues that make the offer look like a legitimate one being made by a reliable official institution or established reputable business.

Visceral triggers: scams exploit basic human desires and needs – such as greed, fear, avoidance of physical pain, or the desire to be liked – in order to provoke intuitive reactions and reduce the motivation of people to process the content of the scam message deeply.

Scarcity cues: scams are often personalised to create the impression that the offer is unique to the recipient.

Induction of behavioural commitment: scammers ask their potential victims to make small steps of compliance to draw them in, and thereby cause victims to feel committed to continue sending money.

The disproportionate relation between the size of the alleged reward and the cost of trying to obtain it: scam victims are led to focus on the alleged big prize or reward in comparison to the relatively small amount of money they have to send in order to obtain their windfall.

Lack of emotional control: compared to non-victims, scam victims report being less able to regulate and resist emotions associated with scam offers. They seem to be unduly open to persuasion, or perhaps unduly undiscriminating about who they allow to persuade them.

And here are a couple of counter-intuitive kickers:

Scam victims often have better than average background knowledge in the area of the scam content. For example, it seems that people with experience of playing legitimate prize draws and lotteries are more likely to fall for a scam in this area than people with less knowledge and experience in this field. This also applies to those with some knowledge of investments. Such knowledge can increase rather than decrease the risk of becoming a victim.

Scam victims report that they put more cognitive effort into analysing scam content than non-victims. This contradicts the intuitive suggestion that people fall victim to scams because they invest too little cognitive energy in investigating their content, and thus overlook potential information that might betray the scam.

Interestingly, people who fall for scams often have a feeling that the offer is dodgy. The report suggests we should trust our gut instincts: if it seems too good to be true, it probably is.

We like to think that only other people fall for scams, but as I’m working my way through the report it’s becoming clear that those things that we think make us resistant to scams (a keen analytical mind) are not what help us avoid being a victim.

A really fascinating read and a great example of applied psychology.

Link to Office of Fair Trading report page and download.

Delayed gratification and the science of self-control

The New Yorker has a fantastic article on the psychology of delayed gratification and how tempting kids with marshmallows allowed us to understand the life-time impact of self-control.

The piece focuses on the work of psychologist Walter Mischel, who devised a test in which children were presented with a marshmallow but told they could have two, later on, if they just waited.

It was an early demonstration of the power of temporal discounting – some kids ate the marshmallow, about a third waited and cashed in their patience for bigger rewards – but this wasn’t, in itself, particularly earth-shattering news.

What was most surprising was that years later, when Mischel followed up the kids in his experiment, the ones who waited, who could delay their gratification, turned out to be more successful in life – better jobs, better exam results, less drug addiction and so on.

This and subsequent research has led us to believe that the ability to delay gratification for better rewards in the future is a fundamental skill for success, probably because it reflects how emotions and motivations interact with a more rational approach to reasoning. We know what’s best, but can we keep temptation at bay to reach it?

The article is a compelling exploration of this key ability and the subsequent research that has sprung up around it to help explain how we manage to keep those cheap instant hits at bay.

There’s also a great observation in the piece where the author, science writer Jonah Lehrer, describes Mischel as someone who “talks with a Brooklyn bluster and he tends to act out his sentences”.

Link to New Yorker article ‘Don’t! The secret of self-control’.

Choice blindness

New Scientist has a fascinating article on some ‘I wish I’d thought of that’ research that looks at how we justify our choices, even when the thing we’ve chosen has been unknowingly swapped. It turns out that, most of the time, we don’t notice the change and proceed to give reasons for why the thing we didn’t choose was the best choice.

It’s a fantastic use of a stage magician’s sleight of hand to make a change outside conscious awareness.

We have been trying to answer this question using techniques from magic performances. Rather than playing tricks with alternatives presented to participants, we surreptitiously altered the outcomes of their choices, and recorded how they react. For example, in an early study we showed our volunteers pairs of pictures of faces and asked them to choose the most attractive. In some trials, immediately after they made their choice, we asked people to explain the reasons behind their choices.

Unknown to them, we sometimes used a double-card magic trick to covertly exchange one face for the other so they ended up with the face they did not choose. Common sense dictates that all of us would notice such a big change in the outcome of a choice. But the result showed that in 75 per cent of the trials our participants were blind to the mismatch, even offering “reasons” for their “choice”.

The idea riffs on the well-known psychological phenomenon of change blindness, but this is also a lovely example of what Daniel Dennett called “narratization”, the ability of the mind to make a coherent story out of what’s happening, with you as the main character, even when it’s clear that the outcome was determined externally. In a well-known article, Dennett cites this process as the key to our understanding of the ‘self’.

This was vividly demonstrated in split-brain patients who can be shown images to each independent hemisphere.

Each hand picks out a different picture, because each image is only accessible to the hemisphere that controls action for that side of the body. But when asked why they chose the two, patients give a story of why the two pictures are related, even though they were never conscious of seeing both pictures.

There’s a great summary in this New York Times piece from 2005, that comes highly recommended.

The New Scientist article covers this new technique for investigating this process, with a nifty video of the sleight of hand in action.

Link to NewSci on ‘Choice blindness: You don’t know what you want’.

Bias we can believe in

Time magazine has a recent article on how the Obama team are making behavioural economics the centre of their financial policies in the hope of altering the behaviour of US citizens. But where are the sceptical voices?

Behavioural economics is primarily an academic discipline where researchers investigate how our cognitive biases divert us from strictly rational reasoning and affect our financial decision-making.

More recently, however, researchers have started touting these findings as a basis for making financial policy. This was most conspicuously done in Richard Thaler and Cass Sunstein’s book Nudge, which the article notes was an inspiration for the Obama campaign.

In fact, the article reveals that several well-known behavioural economists were advisors for the Obama campaign team:

The existence of this behavioral dream team – which also included best-selling authors Dan Ariely of MIT (Predictably Irrational) and Richard Thaler and Cass Sunstein of the University of Chicago (Nudge) as well as Nobel laureate Daniel Kahneman of Princeton – has never been publicly disclosed, even though its members gave Obama white papers on messaging, fundraising and rumor control as well as voter mobilization. All their proposals – among them the famous online fundraising lotteries that gave small donors a chance to win face time with Obama – came with footnotes to peer-reviewed academic research. “It was amazing to have these bullet points telling us what to do and the science behind it,” Moffo tells TIME. “These guys really know what makes people tick.”

President Obama is still relying on behavioral science. But now his Administration is using it to try to transform the country. Because when you know what makes people tick, it’s a lot easier to help them change.

And the fact that Obama has picked behavioural economist and Nudge co-author Cass Sunstein to head up the policy-tweaking Office of Information and Regulatory Affairs is evidence that behavioural science is being taken seriously in the new administration.

What’s remarkable, however, is how few of the high-profile stories on this new influence in the Obama team have asked any difficult questions – they’re all almost relentlessly enthusiastic.

For example, one major problem is knowing how well largely lab-based studies will scale up to whole-population economic systems.

It’s perhaps no accident that almost all the articles cite a 2001 study which found that making the US’s 401(k) retirement savings scheme opt-out instead of opt-in vastly increased participation, simply because it’s a hassle to change and employees perceive the ‘default’ as investment advice.

But it’s probably true to say that this example has been so widely repeated because it’s one of the minority of behavioural economics studies that have looked at the relationship between a cognitive bias and real-world economic data from the population.

And it’s notable that behavioural economists who specialise in making this link, a field they call behavioural macroeconomics, seem absent from the Obama inner circle.

In fact, the two most prominent, George Akerlof and Bob Shiller, are certainly guys worth listening to.

Akerlof won the Nobel prize in economics and helped pioneer behavioural macroeconomics, while Shiller predicted the tech crash in his 2000 book Irrational Exuberance, and then the housing crash in the book’s second edition.

It is essential to check lab findings against real-world economic data because the responses of small groups of undergraduates should not be the basis of economic policy.

Link to Time article ‘How Obama is Using the Science of Change’.
Link to Economist on behavioural macroeconomics book by Akerlof and Shiller.
Link to Atlantic interview with Akerlof on behaviour and economic policy.
Link to Atlantic interview with Shiller on the same.

Dan Ariely on the psychology of cheating

Behavioural economist Dan Ariely gives a fantastic 15 minute TED lecture on the psychology of cheating that explores numerous fascinating and counter-intuitive influences on how we bend the truth for personal benefit.

Ariely discusses some curious social influences, including the fact that seeing someone else cheat may actually decrease cheating across the group, but only if the cheater is perceived as belonging to a different or rival group. Seeing someone from your ‘in-group’ cheat seems to reliably increase dishonesty.

He also notes various effects of changing the form of the benefit. Simply paying the reward in tokens that can be exchanged for money, rather than paying cash directly, greatly increases cheating, even though the value is identical in both cases.

Ariely does some fascinating research and is the author of Predictably Irrational, an excellent book which I thoroughly recommend.

The talk is similarly enjoyable and Ariely makes links between his own studies on cheating and the current financial meltdown.

Link to Dan Ariely’s TED talk.

Where is my mind?

Fora.TV has a great video discussion with science writer Jonah Lehrer where he gives a wonderfully engaging talk on decision-making, metacognition and the paradox of choice.

The discussion is an hour long and well worth the time, although for those with pathological impatience or only five minutes to spare, the section on metacognition is a particular highlight.

I also notice from his blog that he’s just reviewed a recent book on consciousness and embodied cognition called ‘Out of Our Heads’ by philosopher Alva Noë for the San Francisco Chronicle, which is also worth checking out.

Link to Fora.TV interview with Lehrer (thanks Rich!)

Experimental philosophy of others’ intentions

Today’s ABC Radio National All in the Mind has a fascinating discussion on how we attribute intentions to other people, which covers some surprising and counter-intuitive examples of how our understanding of other people’s desires is biased by the situation.

There’s a great example depicted in this YouTube video which I highly recommend, but essentially the example is this:

A vice president of a large company goes to the CEO and says “We have a new business plan. It will make huge amounts of money for the company, but it will also harm the environment”.

The CEO says “I know the plan will harm the environment, but I don’t care about that, I’m just interested in making as much money as we possibly can. So let’s put the plan into action”.

The company starts the plan, and the environment is harmed.

The question is, did the CEO harm the environment intentionally? As it turns out, most people say yes to this question.

Now have a think about this similar scenario.

A vice president of a large company goes to the CEO and says “We have a new business plan. It will make huge amounts of money for the company, but it will also help the environment”.

The CEO says “I know the plan will help the environment, but I don’t care about that, I’m just interested in making as much money as we possibly can. So let’s put the plan into action”.

The company starts the plan, and the environment is helped.

The question is the same – did the CEO intentionally help the environment in this case?

Curiously, most people say no. Despite the CEO making the same decision in both cases.

The programme is full of many more fascinating examples of how our judgement of intention is affected by the outcome rather than the decision the person makes.

However, I wonder whether our judgements are clouded by the notion of responsibility rather than purely intention, where we place much greater social weight on responsibility for damaging actions, than beneficial ones.

These questions are largely being explored by the new field of ‘experimental philosophy’, which aims to empirically test our assumptions about traditionally philosophical issues.

Link to AITM on ‘The philosophy of good intentions’.

If It’s Difficult to Pronounce, It Must Be Risky

I’ve just found a short-but-sweet study recently published in Psychological Science showing that we tend to rate things with difficult-to-pronounce names as more risky than those with names we can say more fluently.

Psychologists Hyunjin Song and Norbert Schwarz created names of notional food additives and asked the participants to rate how hazardous they seemed.

Easy-to-pronounce ‘additives’ with names like Magnalroxate were consistently rated as less risky than those with names such as Hnegripitrom.

Wanting to see whether the same effect held for risks that could be seen as exciting, they ran a similar experiment but where participants were asked to rate amusement park rides.

Rides with names like Ohanzee were rated as less likely to make you sick than difficult-to-pronounce rides with names like Tsiischili, but were also rated as less adventurous.

The researchers note that their study is in line with previous research on cognitive biases, which has found that we tend to underestimate the risk of familiar things and overestimate the risk of things we don’t know so well.

Link to PubMed entry for study.

Never mind the quality, look at the width

The New York Times has a fascinating snippet on how cooperation with others to get a monetary reward is not influenced by the value of the reward, but by the numbers that describe it.

In the study, when the reward was described as rising from 3 cents to 300 cents, cooperation increased – but when the same increase was described as rising from 3 cents to 3 dollars, cooperation didn’t change.

The experiment was carried out by psychologists Ellen Furlong and John Opfer, who were interested in how our reasoning is affected by the representation of value.

The researchers asked volunteers to take part in a behavioral test known as the prisoner’s dilemma, in which two partners are offered various rewards to either work together or defect.

The idea is that in the long term, the participants earn the most money by cooperating. But in any given round of play, they make the most if they decide to turn against their partner while he stays loyal. (The reward is lowest when both partners defect.)

When the reward for cooperation was increased to 300 cents from 3 cents, the researchers found, the level of cooperation went up. But when the reward went from 3 cents to $3, it did not.
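If you want to see the structure of the game the participants were playing, here’s a minimal sketch of a prisoner’s dilemma payoff matrix in Python. The payoff values are illustrative assumptions for this sketch, not the amounts Furlong and Opfer actually used, but they show why defecting pays best in a single round while cooperation wins over repeated play.

```python
# A minimal prisoner's dilemma sketch. The payoff values below are
# illustrative assumptions, not the amounts used in the study.
# They follow the usual ordering: temptation > mutual cooperation
# > mutual defection > sucker's payoff.

PAYOFFS = {
    # (my_move, partner_move): (my_reward, partner_reward), in cents
    ("cooperate", "cooperate"): (300, 300),  # both do well
    ("cooperate", "defect"):    (0,   400),  # I stay loyal, partner defects
    ("defect",    "cooperate"): (400, 0),    # I defect while partner stays loyal
    ("defect",    "defect"):    (100, 100),  # lowest joint outcome
}

def payoff(my_move: str, partner_move: str) -> int:
    """Return my payoff (in cents) for a single round."""
    return PAYOFFS[(my_move, partner_move)][0]

if __name__ == "__main__":
    # In a single round, defecting against a cooperator pays best...
    print(payoff("defect", "cooperate"))          # 400
    # ...but over many rounds, mutual cooperation beats mutual defection.
    print(10 * payoff("cooperate", "cooperate"))  # 3000
    print(10 * payoff("defect", "defect"))        # 1000
```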

We covered a study late last year that also found a similar effect: people were swayed more by higher numbers in adverts even when the alternative described exactly the same thing but using smaller units.

Link to short NYT piece ‘$1? No Thanks. 100 Cents? You Bet’.
Link to academic article on study.
Link to DOI entry for same.

Irrational reading

Science writer Jonah Lehrer has a short but useful piece in the Wall Street Journal where he recommends five must-read books on irrational decision-making.

Lehrer is well placed to be making recommendations as he’s recently been completely immersed in the science of decision-making to write his newly released book How We Decide.

The five books he recommends are:

Extraordinary Popular Delusions and the Madness of Crowds by Charles Mackay.

Judgment Under Uncertainty by Daniel Kahneman, Paul Slovic and Amos Tversky

How We Know What Isn’t So by Thomas Gilovich

The Winner’s Curse by Richard H. Thaler

Predictably Irrational by Dan Ariely

All of which I can heartily recommend, except The Winner’s Curse, though only because I’m not familiar with it.

By the way, the first book that Lehrer recommends was published in 1841 and is freely available online.

Link to ‘Books on Irrational Decision-Making’ from the WSJ (via FC).

How psychiatrists think

An article just published in Advances in Psychiatric Treatment called ‘How Psychiatrists Think’ discusses how mental health physicians are susceptible to cognitive biases and how it’s possible to reduce the chance of error.

The article was inspired by a Jerome Groopman book we discussed in 2007 called How Doctors Think, in which he tackled cognitive errors in medicine but omitted psychiatrists because he felt their thinking processes were too complex.

Two psychiatrists, Niall Crumlish and Brendan D. Kelly, decided to take this as a challenge and wrote an article that applied the cognitive science of ‘heuristics‘ to psychiatric reasoning.

Heuristics are the mental shortcuts we use in everyday reasoning, and work made famous by Nobel prize-winning psychologist Daniel Kahneman has shown that these shortcuts often lead us astray.

For example, the availability heuristic is where we judge likelihood by how easily something comes to mind – perhaps nudging psychiatrists towards incorrectly diagnosing a rare disorder if they’ve just been to a recent discussion of it.

The authors make the point that although they discuss how general reasoning biases apply equally to psychiatric decision-making, almost no experimental work has been done specifically on psychiatrists, meaning we’re still not exactly sure whether there are any speciality-specific mental errors that might regularly crop up.

However, they do note that there’s good evidence that being aware of these biases helps people overcome them.

Their article is a brief guide to some of the most common cognitive biases in us all, with an interesting insight into psychiatric thinking.

Link to ‘How psychiatrists think’.
Link to DOI entry for same.