Race perception isn’t automatic

Last week’s column for BBC Future describes a neat social psychology experiment from an unlikely source. Three evolutionary psychologists reasoned that claims that we automatically categorise people by ethnicity must be wrong. Here’s how they set out to prove it. The original column is here.

For years, psychologists thought we instantly label each other by ethnicity. But one intriguing study proposes this is far from inevitable, with obvious implications for tackling racism.

When we meet someone we tend to label them in certain ways. “Tall guy” you might think, or “Ugly kid”. Lots of work in social psychology suggests that some categorisations spring faster to mind than others. So fast, in fact, that they can be automatic. Sex is an example: we tend to notice if someone is a man or a woman, and remember that fact, without any deliberate effort. Age is another. You can see this in the way people talk about others. If you said you went to a party and met someone, most people wouldn’t let you continue with your story until you said whether it was a man or a woman, and there’s a good chance they’d also want to know how old they were.

Unfortunately, a swathe of evidence from the 1980s and 1990s also seemed to suggest that race is an automatic categorisation, in that people effortlessly and rapidly identified and remembered which ethnic group an individual appeared to belong to. “Unfortunate”, because if perceiving race is automatic then it lays a foundation for racism, and appears to put a limit on efforts to educate people to be “colourblind”, or put aside prejudices in other ways.

Over a decade of research failed to uncover experimental conditions that could prevent people instinctively categorising by race, until a trio of evolutionary psychologists came along with a very different take on the subject. Now, it seems only fair to say that evolutionary psychologists have a mixed reputation among psychologists. As a flavour of psychology it has been associated with political opinions that tend towards the conservative. Often, scientific racists claim to base their views on some jumbled version of evolutionary psychology (scientific racism is racism dressed up as science, not racism based on science, in case you wondered). So it was a delightful surprise when researchers from one of the world centres for evolutionary psychology intervened in the debate on social categorisation, by conducting an experiment they claimed showed that labelling people by race was far less automatic and inevitable than all previous research seemed to show.

Powerful force

The research used something called a “memory confusion protocol”. This works by asking experiment participants to remember a series of pictures of individuals, who vary along several dimensions – for example, some have black hair and some blond, some are men, some women, etc. When participants’ memories are tested, the errors they make reveal something about how they judged the pictures of individuals – what sticks in their mind most and least. If a participant more often confuses a black-haired man with a blond-haired man, it suggests that the category of hair colour is less important than the category of gender (and similarly, if people rarely confuse a man for a woman, that also shows that gender is the stronger category).
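To make the logic of that inference concrete, here is a toy sketch in Python of how such confusion errors might be tallied. The data and the scoring function are invented for illustration – this is not the researchers’ actual analysis, which compares within- and between-category error rates across many participants.

```python
# Toy sketch of scoring a memory confusion protocol.
# Each entry records the categories of the person shown and the person
# the participant mistook them for. All data here are hypothetical.
errors = [
    ({"gender": "M", "hair": "black"}, {"gender": "M", "hair": "blond"}),
    ({"gender": "F", "hair": "blond"}, {"gender": "F", "hair": "black"}),
    ({"gender": "M", "hair": "blond"}, {"gender": "M", "hair": "blond"}),
    ({"gender": "M", "hair": "black"}, {"gender": "F", "hair": "black"}),
]

def category_strength(errors, dimension):
    """Fraction of confusions that stay within the same category.

    A dominant category keeps errors inside it (men confused with other
    men, say), so values near 1.0 suggest strong categorisation along
    that dimension."""
    within = sum(1 for shown, recalled in errors
                 if shown[dimension] == recalled[dimension])
    return within / len(errors)

print(category_strength(errors, "gender"))  # 0.75 - gender dominates
print(category_strength(errors, "hair"))    # 0.5  - hair colour matters less
```

In this made-up sample, three of the four confusions stay within gender but only two stay within hair colour, so gender looks like the stronger category – exactly the direction of inference the protocol relies on.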

Using this protocol, the researchers tested the strength of categorisation by race, something all previous efforts had shown was automatic. The twist they added was to throw in another powerful psychological force – group membership. People had to remember individuals who wore either yellow or grey basketball shirts, and whose pictures were presented alongside statements indicating which team they were in. Without the shirts, the pattern of errors was clear: participants automatically categorised the individuals by their race (in this case: African American or Euro American). But with the coloured shirts, this automatic categorisation didn’t happen: people’s errors revealed that team membership had become the dominant category, not the race of the players.

It’s important to understand that the memory test was both a surprise – participants didn’t know it was coming up – and an unobtrusive measure of racial categorising. Participants couldn’t guess that the researchers were going to make inferences about how they categorised people in the pictures – so if they didn’t want to appear to perceive people on the basis of race, it wouldn’t be clear how they should change their behaviour to do this. Because of this we can assume we have a fairly direct measure of their real categorisation, unbiased by any desire to monitor how they appear.

So despite what dozens of experiments had appeared to show, this experiment created a situation where categorisation by race faded into the background. The explanation, according to the researchers, is that race is only important when it might indicate coalitional information – that is, whose team you are on. In situations where race isn’t correlated with coalition, it ceases to be important. This, they claim, makes sense from an evolutionary perspective. For most of our ancestors, age and gender would be important predictors of another person’s behaviour, but race wouldn’t – since most people lived in areas with no differences as large as the ones we associate with “race” today (a concept, incidentally, which has little currency among human biologists).

Since the experiment was published, the response from social psychologists has been muted. But supporting evidence is beginning to be reported, suggesting that the finding will hold. It’s an unfortunate fact of human psychology that we are quick to lump people into groups, even on the slimmest evidence. And once we’ve identified a group, it also seems automatic to jump to conclusions about what they are like. But this experiment suggests that although perceiving groups on the basis of race might be easy, it is far from inevitable.

Prescribe it again, Sam

We tend to think of Prozac as the first ‘fashionable’ psychiatric drug but it turns out popular memory is short because a tranquilizer called Miltown hit the big time thirty years before.

This is from a wonderful book called The Age of Anxiety: A History of America’s Turbulent Affair with Tranquilizers by Andrea Tone and it describes how the drug became a Hollywood favourite and even inspired its own cocktails.

Miltown was frequently handed out at parties and premieres, a kind of pharmaceutical appetizer for jittery celebrities. Frances Kaye, a publicity agent, described a movie party she attended at a Palm Springs resort. A live orchestra entertained a thousand-odd guests while a fountain spouted champagne against the backdrop of a desert sky. As partiers circulated, a doctor made rounds like a waiter, dispensing drugs to guests from a bulging sack. On offer were amphetamines and barbiturates, standard Hollywood party fare, but guests wanted Miltown. The little white pills “were passed around like peanuts,” Kaye remembered. What she observed about party pill popping was not unique. “They all used to go for ‘up pills’ or ‘down pills,’” one Hollywood regular noted. “But now it’s the ‘don’t-give-a-darn-pills.’”

The Hollywood entertainment culture transformed a pharmaceutical concoction into a celebrity fetish, a coveted commodity of the fad-prone glamour set. Female entertainers toted theirs in chic pill boxes designed especially for tranquilizers, which became, according to one celebrity, as ubiquitous at Hollywood parties as the climatically unnecessary mink coat…

Miltown even inspired a barrage of new alcoholic temptations, in which the pill was the new defining ingredient. The Miltown Cocktail was a Bloody Mary (vodka and tomato juice) spiked with a single pill, and a Guided Missile, popular among the late night crowd on the Sunset Strip, consisted of a double shot of vodka and two Miltowns. More popular still was the Miltini, a dry martini in which Miltown replaced the customary olive.

Andrea Tone’s book is full of surprising snippets about how tranquilisers and anti-anxiety drugs have affected our understanding of ourselves and our culture.

It’s very well researched and manages to hit that niche of being gripping for the non-specialist while being extensive enough that professionals will learn a lot.

Link to details for The Age of Anxiety book.

2013-04-27 Spike activity

Quick links from the past week in mind and brain news:

Psychiatry needs its Higgs boson moment says an article in New Scientist which describes some interesting but disconnected findings suggesting it ain’t going to get it soon.

Wall Street Journal has an overenthusiastic article on how advances in genetics and neuroscience are ‘revolutionizing’ our understanding of violent behavior. Not quite but not a bad read in parts.

The new series of BBC Radio 4’s wonderful programme on key studies in psychology, Mind Changers, has just started. Streamed only because the BBC think radio simulations are cute.

Reuters reports that fire kills dozens in Russian psychiatric hospital tragedy.

Author and psychologist Charles Fernyhough discusses how neuroscience is dealt with in literary fiction in a piece for The Guardian.

Nature profiles one of the few people doing gun violence research in the US – the wonderfully named emergency room doctor Garen Wintemute.

The Man With Uncrossed Eyes. Fascinating case study covered by Neuroskeptic.

Wired reports that scientists have built a baseball-playing robot with 100,000-neuron fake brain. To the bunkers!

“Let’s study Tamerlan Tsarnaev’s brain” – The now seemingly compulsory article that argues for some sort of pointless scientific investigation after some horrible tragedy appears in the Boston Globe. See also: Let’s study the Newtown shooter’s DNA.

Wired report from a recent conference on the medical potential of psychedelic drugs.

Adam Phillips, one of the most thoughtful and interesting of the new psychoanalyst writers, is profiled by Newsweek.

Deeper into genetic challenges to psychiatric diagnosis

For my recent Observer article I discussed how genetic findings are providing some of the best evidence that psychiatric diagnoses do not represent discrete disorders.

As part of that I spoke to Michael Owen, a psychiatrist and researcher based at Cardiff University, who has been leading much of the rethinking on the nature of psychiatric disorders.

As a young PhD student I sat in on lots of Prof Owen’s hospital ward rounds and learnt a great deal about how science bumps up against the real world of individuals’ lives.

One of the things that most interested me about Owen’s work is that, back in the day, he was working towards finding ‘the genetics of’ schizophrenia, bipolar and so on.

But since then he and his colleagues have gathered a great deal of evidence that certain genetic differences raise the chances of developing a whole range of difficulties – from epilepsy to schizophrenia to ADHD – rather than these differences being associated with any one disorder.

As many of these genetic changes can affect brain development in subtle ways, it is looking increasingly likely that genetics determines how sensitive we are to life events as the brain grows and develops – suggesting a neurodevelopmental theory of these disorders that considers both neurobiology and life experience as equally important.

I asked Owen several questions for the Observer article but I couldn’t include the answers in full, so I’ve reproduced them below as they’re a fascinating insight into how genetics is challenging psychiatry.

I remember you looking for the ‘genes for schizophrenia’ – what changed your mind?

For most of our genetic studies we used conventional diagnostic criteria such as schizophrenia, bipolar disorder and ADHD. However, what we then did was look for overlap between the genetic signals across diagnostic categories and found that these were striking. This occurred not just for schizophrenia and bipolar disorder, which to me as an adult psychiatrist who treats these conditions was not surprising, but also between adult disorders like schizophrenia and childhood disorders like autism and ADHD.

What do the current categories of psychiatric diagnosis represent?

The current categories were based on the categories in general use by psychiatrists. They were formalized to make them more reliable and have been developed over the years to take into account developments in thinking and practice. They are broad groupings of patients based upon the clinical presentation especially the most prominent symptoms and other factors such as age at onset, and course of illness. In other words they describe syndromes (clinically recognizable features that tend to occur together) rather than distinct diseases. They are clinically useful in so far as they group patients in regard to potential treatments and likely outcome. The problem is that many doctors and scientists have come to assume that they do in fact represent distinct diseases with separate causes and distinct mechanisms. In fact the evidence, not just from molecular genetics, suggests that there is no clear demarcation between diagnostic categories in symptoms or causes (genetic or environmental).

There is an emerging belief which has been stimulated by recent genetic findings that it is perhaps best to view psychiatric disorders more in terms of constellations of symptoms and syndromes, which cross current diagnostic categories and view these in dimensional terms. This is reflected by the inclusion of dimensional measures in DSM-5, which, it is hoped, will allow these new views to stimulate research and to be developed based on evidence.

In the meantime the current categories, slightly modified, remain the focus of DSM-5. But I think that there is a much greater awareness now that these are provisional and will be replaced when the weight of scientific evidence is sufficiently strong.

The implications of recent findings are probably more pressing for research where there is a need to be less constrained by current diagnostic categories and to refocus onto the mechanisms underlying symptom domains rather than diagnostic categories. This in turn might lead to new diagnostic systems and markers. The discovery of specific risk genes that cut across diagnostic groupings offers one approach to investigating this that we will take forward in Cardiff.

There is a lot of talk of endophenotypes and intermediate phenotypes that attempt to break down symptoms into simpler forms of difference and dysfunction in the mind and brain. How will we know when we have found a valid one?

Research into potential endophenotypes has clear intuitive appeal but I think interpretation of the findings is hampered by a couple of important conceptual issues. First, as you would expect from what I have already said, I don’t think we can expect to find endophenotypes for a diagnostic group as such. Rather we might expect them to relate to specific subcomponents of the syndrome (symptoms, groups of symptoms etc).

Second, the assumption that a putative endophenotype lies on the disease pathway (ie is intermediate between say gene and clinical phenotype) has to be proved and cannot just be assumed. For example there has been a lot of work on cognitive dysfunction and brain imaging in psychiatry and widespread abnormalities have been reported. But it cannot be assumed that an individual cognitive or imaging phenotype lies on the pathway to a particular clinical disorder or component of the disorder. This has to be proven either through an intervention study in humans or model systems (both currently challenging), or statistically, which requires much larger studies than are usually undertaken. I think that many of the findings from imaging and cognition studies will turn out to be part of the broad phenotype resulting from whatever brain dysfunction is present and not on the causal pathway to psychiatric disorder.

Using the tools of biological psychiatry you have come to a conclusion often associated with psychiatry’s critics (that the diagnostic categories do not represent specific disorders). What reactions have you encountered from mainstream psychiatry?

I have found that most psychiatrists working at the front line are sympathetic. In fact psychiatrists already treat symptoms rather than diagnoses. For example they will consider prescribing an antipsychotic if someone is psychotic regardless of whether the diagnosis is schizophrenia or bipolar disorder. They also recognize that many patients don’t fall neatly into current categories. For example many patients have symptoms of both schizophrenia and bipolar disorder sometimes at the same time and sometimes at different time points. Also patients who fulfill diagnostic criteria for schizophrenia in adulthood often have histories of childhood diagnoses such as ADHD or autistic spectrum.

The inertia comes in part from the way in which services are structured. In particular the distinction between child and adult services has many justifications but it leads to patients with long term problems being transferred to a new team at a vulnerable age, receiving different care and sometimes a change in diagnosis. Many of us now feel that we should develop services that span late childhood and early adulthood to ensure continuity over this important period. There are also international differences. So in the US mood disorders (including bipolar) are often treated by different doctors in different clinics to schizophrenia.

There is also a justifiable unwillingness to discard the current system until there is strong evidence for a better approach. The inclusion of dimensional measures in DSM-5 reflects the acceptance of the psychiatric establishment that change is needed and acknowledges the likely direction of travel. I think that psychiatry’s acknowledgment of its diagnostic shortcomings is a sign of its maturity. Psychiatric disorders are the most complex in medicine and some of the most disabling. We have treatments that help some of the people some of the time and we need to target these to the right people at the right time. By acknowledging the shortcomings of our current diagnostic categories we are recognizing the need to treat patients as individuals and the fact that the outcome of psychiatric disorders is highly variable.

Like a part of me is missing

Matter magazine has an amazing article about the world of underground surgery for healthy people who feel that their limb is not part of their body and needs to be removed.

The condition is diagnosed as body integrity identity disorder, or BIID, but it has a whole range of interests and behaviours associated with it, and people with the desire often do not feel it is a disorder in itself.

Needless to say, surgeons have not been lining up to amputate completely healthy limbs but there are clinics around the world that do the operations illegally.

The Matter article follows someone as they obtain one of these procedures and discusses the science of why someone might feel so uncomfortable about having a working limb they were born with.

But there is a particularly eye-opening bit where it mentions something fascinating about the first scientific article that discussed the condition, published in 1977.

One of the co-authors of the 1977 paper was Gregg Furth, who eventually became a practising psychologist in New York. Furth himself suffered from the condition and, over time, became a major figure in the BIID underground. He wanted to help people deal with their problem, but medical treatment was always controversial — often for good reason. In 1998, Furth introduced a friend to an unlicensed surgeon who agreed to amputate the friend’s leg in a Tijuana clinic. The patient died of gangrene and the surgeon was sent to prison. A Scottish surgeon named Robert Smith, who practised at the Falkirk and District Royal Infirmary, briefly held out legal hope for BIID sufferers by openly performing voluntary amputations, but a media frenzy in 2000 led British authorities to forbid such procedures. The Smith affair fuelled a series of articles about the condition — some suggesting that merely identifying and defining such a condition could cause it to spread, like a virus.

Undeterred, Furth found a surgeon in Asia who was willing to perform amputations for about $6,000. But instead of getting the surgery himself, he began acting as a go-between, putting sufferers in touch with the surgeon.


Link to Matter article on the desire to be an amputee.

What does it take to spark prejudice?

Short answer: surprisingly little. Continuing the theme of revisiting classic experiments in psychology, last week’s BBC Future column was on Tajfel’s Minimal Group Paradigm. The original is here. Next week we’re going to take this foundation and look at some evolutionary psychology of racism (hint: it won’t be what you’d expect).

How easy is it for the average fair-minded person to form biased, preconceived views within groups? Surprisingly easy, according to psychology studies.

One of the least charming but most persistent aspects of human nature is our capacity to hate people who are different. Racism, sexism, ageism, it seems like all the major social categories come with their own “-ism”, each fuelled by regrettable prejudice and bigotry.

Our tendency for groupness appears to be so strong there seems little more for psychology to teach us. It’s not as if we need it proven that favouring our group over others is a common part of how people think – history provides all the examples we need. But one psychologist, Henri Tajfel, taught us something important. He showed exactly how little encouragement we need to treat people in a biased way because of the group they are in.

Any phenomenon like this in the real world comes entangled with a bunch of other, complicating phenomena. When we see prejudice in the everyday world it is hard to separate out psychological biases from the effects of history, culture and even pragmatism (sometimes people from other groups really are out to get you).

As a social psychologist, Tajfel was interested in the essential conditions of group prejudice. He wanted to know what it took to turn the average fair-minded human into their prejudiced cousin.

He wanted to create a microscope for looking at how we think when we’re part of a group, even when that group has none of the history, culture or practical importance that groups normally do. To look at this, he devised what has become known as the “minimal group paradigm”.

The minimal group paradigm works like this: participants in the experiment are divided into groups on some arbitrary basis. Maybe eye-colour, maybe what kind of paintings they like, or even by tossing a coin. It doesn’t matter what the basis for group membership is, as long as everyone gets a group and knows what it is. After being told they are in a group, participants are divided up so that they are alone when they make a series of choices about how rewards will be shared among other people in the groups. From this point on, group membership is entirely abstract. Nobody else can be seen, and other group members are referred to by an anonymous number. Participants make choices such as “Member Number 74 (group A) to get 10 points and Member 44 (group B) to get 8 points”, versus “Member Number 74 (group A) to get 2 points and Member 44 (group B) to get 6 points”, where the numbers are points which translate into real money.

You won’t be surprised to learn that participants show favouritism towards their own group when dividing the money. People in group A were more likely to choose the first option I gave above, rather than the second. What is more surprising is that people show some of this group favouritism even when it ends up costing them points – so people in group B sometimes choose the second option, or options like it, even though it provides fewer points than the first option. People tend to opt for the maximum total reward (as you’d expect from the fair-minded citizen), but they also show a tendency to maximise the difference between the groups (what you’d expect from the prejudiced cousin).
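The two strategies pulling on participants can be sketched in a few lines of Python. The option values here are hypothetical (the first two echo the examples above; the third is invented to expose the conflict between the strategies):

```python
def max_joint_profit(options):
    """Fair-minded strategy: pick the option with the biggest total payout."""
    return max(options, key=lambda o: o[0] + o[1])

def max_difference(options):
    """In-group-favouring strategy: maximise the in-group's lead over the
    out-group, even at a cost to the absolute reward."""
    return max(options, key=lambda o: o[0] - o[1])

# Each option is (points to an in-group member, points to an out-group member).
options = [(10, 8), (2, 6), (7, 1)]

print(max_joint_profit(options))  # (10, 8) - biggest total, 18 points
print(max_difference(options))    # (7, 1)  - biggest in-group lead, 6 points
```

Tajfel’s participants behaved like a mixture of the two: mostly maximising total reward, but measurably nudged towards the difference-maximising choice.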

The effect may be small, but this is a situation where the groups have been plucked out of the air by the experimenters. Every participant knows which group he or she is in, but they also know that they weren’t in this group before they started the experiment, that their assignment was arbitrary or completely random, and that the groups aren’t going to exist in any meaningful way after the experiment. They also know that their choices won’t directly affect them (they are explicitly told that they won’t be given any choices to make about themselves). Even so, this situation is enough to evoke favouritism.

So, it seems we’ll take the most minimal of signs as a cue to treat people differently according to which group they are in. Tajfel’s work suggests that in-group bias is as fundamental to thinking as the act of categorisation itself. If we want to contribute to a fairer world we need to be perpetually on guard to avoid letting this instinct run away with itself.

A stiff moment in scientific history

In 1983 psychiatrist Giles Brindley demonstrated the first drug treatment for erectile dysfunction in a rather unusual way. He took the drug and demonstrated his stiff wicket to the audience mid-way through his talk.

Scientific journal BJU International has a pant-wettingly hilarious account of the events of that day which made both scientific and presentation history.

Professor Brindley, still in his blue track suit, was introduced as a psychiatrist with broad research interests. He began his lecture without aplomb. He had, he indicated, hypothesized that injection with vasoactive agents into the corporal bodies of the penis might induce an erection. Lacking ready access to an appropriate animal model, and cognisant of the long medical tradition of using oneself as a research subject, he began a series of experiments on self-injection of his penis with various vasoactive agents, including papaverine, phentolamine, and several others. (While this is now commonplace, at the time it was unheard of). His slide-based talk consisted of a large series of photographs of his penis in various states of tumescence after injection with a variety of doses of phentolamine and papaverine. After viewing about 30 of these slides, there was no doubt in my mind that, at least in Professor Brindley’s case, the therapy was effective. Of course, one could not exclude the possibility that erotic stimulation had played a role in acquiring these erections, and Professor Brindley acknowledged this.

The Professor wanted to make his case in the most convincing style possible. He indicated that, in his view, no normal person would find the experience of giving a lecture to a large audience to be erotically stimulating or erection-inducing. He had, he said, therefore injected himself with papaverine in his hotel room before coming to give the lecture, and deliberately wore loose clothes (hence the track-suit) to make it possible to exhibit the results. He stepped around the podium, and pulled his loose pants tight up around his genitalia in an attempt to demonstrate his erection.

At this point, I, and I believe everyone else in the room, was agog. I could scarcely believe what was occurring on stage. But Prof. Brindley was not satisfied. He looked down sceptically at his pants and shook his head with dismay. ‘Unfortunately, this doesn’t display the results clearly enough’. He then summarily dropped his trousers and shorts, revealing a long, thin, clearly erect penis. There was not a sound in the room. Everyone had stopped breathing.

But the mere public showing of his erection from the podium was not sufficient. He paused, and seemed to ponder his next move. The sense of drama in the room was palpable. He then said, with gravity, ‘I’d like to give some of the audience the opportunity to confirm the degree of tumescence’. With his pants at his knees, he waddled down the stairs, approaching (to their horror) the urologists and their partners in the front row. As he approached them, erection waggling before him, four or five of the women in the front rows threw their arms up in the air, seemingly in unison, and screamed loudly. The scientific merits of the presentation had been overwhelmed, for them, by the novel and unusual mode of demonstrating the results.

The screams seemed to shock Professor Brindley, who rapidly pulled up his trousers, returned to the podium, and terminated the lecture. The crowd dispersed in a state of flabbergasted disarray. I imagine that the urologists who attended with their partners had a lot of explaining to do. The rest is history. Prof Brindley’s single-author paper reporting these results was published about 6 months later.


Link to full account of that fateful day (via @DrPetra)

Amid the borderlands

I’ve got an article in The Observer on how some of the best evidence against the idea that psychiatric diagnoses like ‘schizophrenia’ describe discrete ‘diseases’ comes not from the critics of psychiatry, but from medical genetics.

I found this a fascinating outcome because it puts both sides of the polarised ‘psychiatry divide’ in quite an uncomfortable position.

The “mental illness is a genetic brain disease” folks find that their evidence of choice – molecular genetics – has undermined the validity of individual diagnoses, while the “mental illness is socially constructed” folks find that the best evidence for their claims comes from neurobiology studies.

The evidence that underlies this uncomfortable position comes from recent findings that genetic risks originally thought to be specific to individual diagnoses turn out to be risks for a whole load of later difficulties – from epilepsy, to schizophrenia, to learning disability.

In other words, the genetic risk seems to be for neurodevelopmental difficulties but if and how they appear depends on lots of other factors that occur during your life.

The neurobiological evidence has not ‘reduced’ human experience to chemicals, but shown that individual life stories are just as important.

Link to Observer article.
Link to brief scientific review article on the topic.

Why money won’t buy you happiness

Here’s my column for BBC Future from last week. It was originally titled ‘Why money can’t buy you happiness’, but I’ve just realised that it would be more appropriately titled if I used a “won’t” rather than a “can’t”. There’s a saying that people who think money can’t buy happiness don’t know where to shop. This column says, more or less, that knowing where to shop isn’t the problem, it’s shopping itself.

Hope a lottery win will make you happy forever? Think again: evidence suggests a big payout won’t make that much of a difference. Tom Stafford explains why.


Think a lottery win would make you happy forever? Many of us do, including a US shopkeeper who just scooped $338 million in the Powerball lottery – the fourth largest prize in the game’s history. Before the last Powerball jackpot in the United States, tickets were being snapped up at a rate of around 130,000 a minute. But before you place all your hopes and dreams on another ticket, here’s something you should know. All the evidence suggests a big payout won’t make that much of a difference in the end.

Winning the lottery isn’t a ticket to true happiness, however enticing it might be to imagine never working again and being able to afford anything you want. One study famously found that people who had big wins on the lottery ended up no happier than those who had bought tickets but didn’t win. It seems that as long as you can afford to avoid the basic miseries of life, having loads of spare cash doesn’t make you very much happier than having very little.

One way of accounting for this is to assume that lottery winners get used to their new level of wealth, and simply adjust back to a baseline level of happiness – something called the “hedonic treadmill”. Another explanation is that our happiness depends on how we feel relative to our peers. If you win the lottery you may feel richer than your neighbours, and think that moving to a mansion in a new neighbourhood would make you happy, but then you look out of the window and realise that all your new friends live in bigger mansions.

Both of these phenomena undoubtedly play a role, but the deeper mystery is why we’re so bad at knowing what will give us true satisfaction in the first place. You might think we should be able to predict this, even if it isn’t straightforward. Lottery winners could take account of hedonic treadmill and social comparison effects when they spend their money. So why, in short, don’t they spend their winnings in ways that buy happiness?

Picking up points

Part of the problem is that happiness isn’t a quality like height, weight or income that can be easily measured and given a number (whatever psychologists try and pretend). Happiness is a complex, nebulous state that is fed by transient simple pleasures, as well as the more sustained rewards of activities that only make sense from a perspective of years or decades. So, perhaps it isn’t surprising that we sometimes have trouble acting in a way that will bring us the most happiness. Imperfect memories and imaginations mean that our moment-to-moment choices don’t always reflect our long-term interests.

It even seems that the very act of trying to measure happiness can distract us from what might make us most happy. An important study by Christopher Hsee of the University of Chicago’s business school and colleagues showed how this could happen.

Hsee’s study was based around a simple choice: participants were offered the option of working at a 6-minute task for a gallon of vanilla ice cream as a reward, or a 7-minute task for a gallon of pistachio ice cream. Under normal conditions, less than 30% of people chose the 7-minute task, mainly those who liked pistachio ice cream more than vanilla. For happiness scholars, this isn’t hard to interpret – those who preferred pistachio ice cream had enough motivation to choose the longer task. But the experiment had a vital extra comparison. Another group of participants were offered the same choice, but with an intervening points system: the choice was between working for 6 minutes to earn 60 points, or 7 minutes to earn 100 points. With 50–99 points, participants were told they could receive a gallon of vanilla ice cream. For 100 points they could receive a gallon of pistachio ice cream. Although the actions and the outcomes were the same, introducing the points system dramatically affected the choices people made. Now, the majority chose the longer task and earned the 100 points, which they could spend on the pistachio reward – even though the same proportion (about 70%) still said they preferred vanilla.

Based on this, and other experiments [5], Hsee concluded that participants are maximising their points at the expense of maximising their happiness. The points are just a medium – something that allows us to get the thing that will create enjoyment. But because the points are so easy to measure and compare – 100 is obviously much more than 60 – this overshadows our knowledge of what kind of ice cream we enjoy most.

So next time you are buying a lottery ticket because of the amount it is paying out, or choosing wine by looking at the price, or comparing jobs by looking at the salaries, you would do well to think hard about how much the bet, wine, or job will really promote your happiness, rather than simply relying on the numbers to do the comparison. Money doesn’t buy you happiness, and part of the reason might be that money itself distracts us from what we really enjoy.


A cuckoo’s nest museum

The New York Times reports that the psychiatric hospital used as the backdrop for the 1975 film One Flew Over the Cuckoo’s Nest has been turned into a museum of mental health.

In real life the institution was Oregon State Hospital and the article is accompanied by a slide show of images from the hospital and museum.

The piece also mentions some fascinating facts about the film – not least that some of the actors were actually genuine employees and patients in the hospital.

But the melding of real life and art went far beyond the film set. Take the character of John Spivey, a doctor who ministers to Jack Nicholson’s doomed insurrectionist character, Randle McMurphy. Dr. Spivey was played by Dr. Dean Brooks, the real hospital’s superintendent at the time.

Dr. Brooks read for the role, he said, and threw the script to the floor, calling it unrealistic — a tirade that apparently impressed the director, Milos Forman. Mr. Forman ultimately offered him the part, Dr. Brooks said, and told the doctor-turned-actor to rewrite his lines to make them medically correct. Other hospital staff members and patients had walk-on roles.


Link to NYT article ‘Once a ‘Cuckoo’s Nest,’ Now a Museum’.