Implicit attitudes are one of the hottest topics in social psychology. Now a massive new study directly compares methods for changing them. The results are both good and bad for those who believe that some part of prejudice lies in our automatic, uncontrollable reactions to different social groups.
Chronicler of the wild side, Lou Reed, has died. Reed was particularly notable for students of human nature for his descriptions of drugs, madness and his own experience of psychiatry.
We’ve touched on his outrageous performance to the New York Society for Clinical Psychiatry before and his songs about or featuring drug use are legendary.
But there was one song that was particularly notable – not least for describing, from his own experience, being ‘treated’ for homosexuality with electroshock therapy when he was a teenager.
Kill Your Sons, released in 1974 (audio), is just a straight-out attack on the psychiatrists that treated him:
All your two-bit psychiatrists
are giving you electroshock
They said, they’d let you live at home with mom and dad
instead of mental hospitals
But every time you tried to read a book
you couldn’t get to page 17
‘Cause you forgot where you were
so you couldn’t even read
Here Reed describes the effects on memory that are common just after electroconvulsive therapy. In this case, forgetting what you’ve just read.
The last verse also describes some of his other contacts with psychiatry, mentioning specific psychiatric clinics and medications:
Creedmore treated me very good
but Paine Whitney was even better
And when I flipped out on PHC
I was so sad, I didn’t even get a letter
All of the drugs, that we took
it really was lots of fun
But when they shoot you up with Thorazine on crystal smoke
you choke like a son of a gun
The last line seems to refer to the effect of being given a dopamine-inhibiting antipsychotic when you’re on a dopamine-boosting amphetamine – presumably after being taken to a psychiatric clinic while still high. Not a pleasant comedown, I would imagine.
I have no idea what ‘PHC’ refers to, though. I’m guessing it’s a psychiatric treatment from the 60s.
It’s interesting that the song was released the year after homosexuality was removed from the DSM in 1973, although it’s never been clear whether this was intentional on Reed’s part or not.
Link to YouTube audio of Kill Your Sons.
Last week’s column for BBC Future describes a neat social psychology experiment from an unlikely source. Three evolutionary psychologists reasoned that claims that we automatically categorise people by ethnicity must be wrong. Here’s how they set out to prove it. The original column is here.
For years, psychologists thought we instantly label each other by ethnicity. But one intriguing study proposes this is far from inevitable, with obvious implications for tackling racism.
When we meet someone we tend to label them in certain ways. “Tall guy” you might think, or “Ugly kid”. Lots of work in social psychology suggests that there are some categorisations that spring faster to mind. So fast, in fact, that they can be automatic. Sex is an example: we tend to notice if someone is a man or a woman, and remember that fact, without any deliberate effort. Age is another example. You can see this in the way people talk about others. If you said you went to a party and met someone, most people wouldn’t let you continue with your story until you said whether it was a man or a woman, and there’s a good chance they’d want to know how old they were too.
Unfortunately, a swathe of evidence from the 1980s and 1990s also seemed to suggest that race is an automatic categorisation, in that people effortlessly and rapidly identified and remembered which ethnic group an individual appeared to belong to. “Unfortunate”, because if perceiving race is automatic then it lays a foundation for racism, and appears to put a limit on efforts to educate people to be “colourblind”, or put aside prejudices in other ways.
Over a decade of research failed to uncover experimental conditions that could prevent people instinctively categorising by race, until a trio of evolutionary psychologists came along with a very different take on the subject. Now, it seems only fair to say that evolutionary psychologists have a mixed reputation among psychologists. As a flavour of psychology it has been associated with political opinions that tend towards the conservative. Often, scientific racists claim to base their views on some jumbled version of evolutionary psychology (scientific racism is racism dressed up as science, not racism based on science, in case you wondered). So it was a delightful surprise when researchers from one of the world centres for evolutionary psychology intervened in the debate on social categorisation, by conducting an experiment they claimed showed that labelling people by race was far less automatic and inevitable than all previous research seemed to show.
The research used something called a “memory confusion protocol”. This works by asking experiment participants to remember a series of pictures of individuals, who vary along various dimensions – for example, some have black hair and some blond, some are men, some women, etc. When participants’ memories are tested, the errors they make reveal something about how they judged the pictures of individuals – what sticks in their mind most and least. If a participant more often confuses a black-haired man with a blond-haired man, it suggests that the category of hair colour is less important than the category of gender (and similarly, if people rarely confuse a man for a woman, that also shows that gender is the stronger category).
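The logic of this error analysis can be sketched in a few lines of code. This is only an illustration of the idea, not the researchers’ actual analysis; the category labels and error data below are hypothetical.

```python
# Sketch of the memory confusion protocol's error analysis (hypothetical data).
# Each misattribution error is classified as within-category or between-category:
# if a category is psychologically strong, confusions should stay within it.

from collections import Counter

# Hypothetical errors: (category of the true person, category of the person
# they were confused with). Here the category of interest is gender.
errors = [
    ("man", "man"), ("man", "man"), ("woman", "woman"),
    ("man", "woman"),  # a rare cross-gender confusion
]

def category_strength(errors):
    """Count within- vs between-category confusions.
    More within-category errors means the category dominated
    how participants encoded the individuals."""
    counts = Counter("within" if a == b else "between" for a, b in errors)
    return counts["within"], counts["between"]

within, between = category_strength(errors)
print(f"within-category errors: {within}, between-category errors: {between}")
```

With these toy numbers, errors within a gender outnumber errors across genders three to one, which in the protocol’s logic would mark gender as a strong category.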
Using this protocol, the researchers tested the strength of categorisation by race, something all previous efforts had shown was automatic. The twist they added was to throw in another powerful psychological force – group membership. People had to remember individuals who wore either yellow or grey basketball shirts, and whose pictures were presented alongside statements indicating which team they were in. Without the shirts, the pattern of errors was clear: participants automatically categorised the individuals by their race (in this case: African American or Euro American). But with the coloured shirts, this automatic categorisation didn’t happen: people’s errors revealed that team membership had become the dominant category, not the race of the players.
It’s important to understand that the memory test was both a surprise – participants didn’t know it was coming up – and an unobtrusive measure of racial categorising. Participants couldn’t guess that the researchers were going to make inferences about how they categorised people in the pictures – so if they didn’t want to appear to perceive people on the basis of race, it wouldn’t be clear how they should change their behaviour to do this. Because of this we can assume we have a fairly direct measure of their real categorisation, unbiased by any desire to monitor how they appear.
So despite what dozens of experiments had appeared to show, this experiment created a situation where categorisation by race faded into the background. The explanation, according to the researchers, is that race is only important when it might indicate coalitional information – that is, whose team you are on. In situations where race isn’t correlated with coalition, it ceases to be important. This, they claim, makes sense from an evolutionary perspective. For most of our ancestors, age and gender would be important predictors of another person’s behaviour, but race wouldn’t – since most people lived in areas with no differences as large as the ones we associate with “race” today (a concept, incidentally, which has little currency among human biologists).
Since the experiment was published, the response from social psychologists has been muted. But supporting evidence is beginning to be reported, suggesting that the finding will hold. It’s an unfortunate fact of human psychology that we are quick to lump people into groups, even on the slimmest evidence. And once we’ve identified a group, it also seems automatic to jump to conclusions about what they are like. But this experiment suggests that although perceiving groups on the basis of race might be easy, it is far from inevitable.
Short answer: surprisingly little. Continuing the theme of revisiting classic experiments in psychology, last week’s BBC Future column was on Tajfel’s Minimal Group Paradigm. The original is here. Next week we’re going to take this foundation and look at some evolutionary psychology of racism (hint: it won’t be what you’d expect).
How easy is it for the average fair-minded person to form biased, preconceived views within groups? Surprisingly easy, according to psychology studies.
One of the least charming but most persistent aspects of human nature is our capacity to hate people who are different. Racism, sexism, ageism, it seems like all the major social categories come with their own “-ism”, each fuelled by regrettable prejudice and bigotry.
Our tendency for groupness appears to be so strong there seems little more for psychology to teach us. It’s not as if we need it proven that favouring our group over others is a common part of how people think – history provides all the examples we need. But one psychologist, Henri Tajfel, taught us something important. He showed exactly how little encouragement we need to treat people in a biased way because of the group they are in.
Any phenomenon like this in the real world comes entangled with a bunch of other, complicating phenomena. When we see prejudice in the everyday world it is hard to separate out psychological biases from the effects of history, culture and even pragmatism (sometimes people from other groups really are out to get you).
As a social psychologist, Tajfel was interested in the essential conditions of group prejudice. He wanted to know what it took to turn the average fair-minded human into their prejudiced cousin.
He wanted to create a microscope for looking at how we think when we’re part of a group, even when that group has none of the history, culture or practical importance that groups normally do. To look at this, he devised what has become known as the “minimal group paradigm”.
The minimal group paradigm works like this: participants in the experiment are divided into groups on some arbitrary basis. Maybe eye-colour, maybe what kind of paintings they like, or even by tossing a coin. It doesn’t matter what the basis for group membership is, as long as everyone gets a group and knows what it is. After being told they are in a group, participants are divided up so that they are alone when they make a series of choices about how rewards will be shared among other people in the groups. From this point on, group membership is entirely abstract. Nobody else can be seen, and other group members are referred to by an anonymous number. Participants make choices such as “Member Number 74 (group A) to get 10 points and Member 44 (group B) to get 8 points”, versus “Member Number 74 (group A) to get 2 points and Member 44 (group B) to get 6 points”, where the numbers are points which translate into real money.
You won’t be surprised to learn that participants show favouritism towards their own group when dividing the money. People in group A were more likely to choose the first option I gave above, rather than the second. What is more surprising is that people show some of this group favouritism even when it ends up costing them points – so people in group B sometimes choose the second option, or options like it, even though it provides fewer points than the first option. People tend to opt for the maximum total reward (as you’d expect from the fair-minded citizen), but they also show a tendency to maximise the difference between the groups (what you’d expect from the prejudiced cousin).
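The tension between those two motives can be made concrete with a toy calculation. A minimal sketch, with illustrative point values rather than Tajfel’s actual allocation matrices:

```python
# Toy version of a Tajfel-style allocation choice (illustrative numbers).
# Each option gives points to an anonymous in-group member and an
# anonymous out-group member; the chooser gets nothing either way.
options = [
    {"in_group": 10, "out_group": 8},  # biggest total payout
    {"in_group": 7,  "out_group": 1},  # biggest gap between the groups
]

def joint_profit(option):
    return option["in_group"] + option["out_group"]

def relative_advantage(option):
    return option["in_group"] - option["out_group"]

# A purely fair-minded chooser maximises the total reward...
fair_choice = max(options, key=joint_profit)

# ...while the "prejudiced cousin" maximises the difference between the
# groups, even though the in-group member ends up with fewer points (7 < 10).
biased_choice = max(options, key=relative_advantage)
```

The surprising finding is that real participants’ choices sit somewhere between these two strategies: pulled towards the joint-profit option, but measurably biased towards widening the gap.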
The effect may be small, but this is a situation where the groups have been plucked out of the air by the experimenters. Every participant knows which group he or she is in, but they also know that they weren’t in this group before they started the experiment, that their assignment was arbitrary or completely random, and that the groups aren’t going to exist in any meaningful way after the experiment. They also know that their choices won’t directly affect them (they are explicitly told that they won’t be given any choices to make about themselves). Even so, this situation is enough to evoke favouritism.
So, it seems we’ll take the most minimal of signs as a cue to treat people differently according to which group they are in. Tajfel’s work suggests that in-group bias is as fundamental to thinking as the act of categorisation itself. If we want to contribute to a fairer world we need to be perpetually on guard to avoid letting this instinct run away with itself.
Here is my latest BBC Future column. The original is here. This one proved to be more than usually controversial, not least because of some poorly chosen phrasing from yours truly. This is an updated version which makes what I’m trying to say clearer. If you think that I hate cyclists, or my argument relies on the facts of actual law breaking (by cyclists or drivers), or that I am making a claim about the way the world ought to be (rather than how people see it), then please check out this clarification I published on my personal blog after a few days of feedback from the column. One thing the experience has convinced me of is that cycling is a very emotional issue, and one people often interpret in very moral terms.
It’s not simply because they are annoying, argues Tom Stafford, it’s because they trigger a deep-seated rage within us by breaking the moral order of the road.
Something about cyclists seems to provoke fury in other road users. If you doubt this, try a search for the word “cyclist” on Twitter. As I write this one of the latest tweets is this: “Had enough of cyclists today! Just wanna ram them with my car.” This kind of sentiment would get people locked up if directed against an ethnic minority or religion, but it seems to be fair game, in many people’s minds, when directed against cyclists. Why all the rage?
I’ve got a theory, of course. It’s not because cyclists are annoying. It isn’t even because we have a selective memory for that one stand-out annoying cyclist over the hundreds of boring, non-annoying ones (although that probably is a factor). No, my theory is that motorists hate cyclists because they offend the moral order.
Driving is a very moral activity – there are rules of the road, both legal and informal, and there are good and bad drivers. The whole intricate dance of the rush-hour junction only works because everybody knows the rules and follows them: keeping in lane; indicating properly; first her turn, now mine, now yours. Then along come cyclists, innocently following what they see as the rules of the road, but doing things that drivers aren’t allowed to: overtaking queues of cars, moving at well below the speed limit or undertaking on the inside.
You could argue that driving is like so much of social life, it’s a game of coordination where we have to rely on each other to do the right thing. And like all games, there’s an incentive to cheat. If everyone else is taking their turn, you can jump the queue. If everyone else is paying their taxes you can dodge them, and you’ll still get all the benefits of roads and police.
In economics and evolution this is known as the “free rider problem”; if you create a common benefit – like taxes or orderly roads – what’s to stop some people reaping the benefit without paying their dues? The free rider problem creates a paradox for those who study evolution, because in a world of selfish genes it appears to make cooperation unlikely. Even if a bunch of selfish individuals (or genes) recognise the benefit of coming together to co-operate with each other, once the collective good has been created it is rational, in a sense, for everyone to start trying to freeload off the collective. This makes any cooperation prone to collapse. In small societies you can rely on cooperating with your friends, or kin, but as a society grows the problem of free-riding looms larger and larger.
Humans seem to have evolved one way of enforcing order onto potentially chaotic social arrangements. This is known as “altruistic punishment”, a term used by Ernst Fehr and Simon Gachter in a landmark paper published in 2002. An altruistic punishment is a punishment that costs you as an individual, but doesn’t bring any direct benefit. As an example, imagine I’m at a football match and I see someone climb in without buying a ticket. I could sit and enjoy the game (at no cost to myself), or I could try to find security to have the guy thrown out (at the cost of missing some of the game). That would be altruistic punishment.
Altruistic punishment, Fehr and Gachter reasoned, might just be the spark that makes groups of unrelated strangers co-operate. To test this they created a co-operation game played by constantly shifting groups of volunteers, who never meet – they played the game from a computer in a private booth. The volunteers played for real money, which they knew they would take away at the end of the experiment. On each round of the game each player received 20 credits, and could choose to contribute up to this amount to a group project. After everyone had chipped in (or not), everybody (regardless of investment) got 40% of the collective pot.
Under the rules of the game, the best collective outcome would be if everyone put in all their credits, and then each player would get back more than they put in. But the best outcome for each individual was to free ride – to keep their original 20 credits, and also get the 40% of what everybody else put in. Of course, if everybody did this then that would be 40% of nothing.
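The incentive structure can be written out directly. A minimal sketch using the numbers above (a 20-credit endowment and a 40% return on the collective pot); the four-player group size is an assumption for illustration:

```python
def payoff(my_contribution, all_contributions):
    """Credits a player ends a round with: whatever they kept back,
    plus 40% of the collective pot (paid regardless of contribution)."""
    kept = 20 - my_contribution
    pot = sum(all_contributions)
    return kept + 0.4 * pot

# Four players all contribute everything: each beats their 20-credit endowment.
full = [20, 20, 20, 20]
print(payoff(20, full))   # 0 kept + 40% of 80

# A lone free rider among full contributors does better still...
mixed = [0, 20, 20, 20]
print(payoff(0, mixed))   # 20 kept + 40% of 60

# ...but if everybody free rides, the pot is empty and nobody gains anything.
none = [0, 0, 0, 0]
print(payoff(0, none))    # 20 kept + 40% of 0
```

The free rider’s edge over a full contributor in the same group is exactly what makes contributions unravel round after round.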
In this scenario what happened looked like a textbook case of the kind of social collapse the free rider problem warns of. On each successive turn of the game, the average amount contributed by players went down and down. Everybody realised that they could get the benefit of the collective pot without the cost of contributing. Even those who started out contributing a large proportion of their credits soon found out that not everybody else was doing the same. And once you see this it’s easy to stop chipping in yourself – nobody wants to be the sucker.
Rage against the machine
A simple addition to the rules reversed this collapse of co-operation, and that was the introduction of altruistic punishment. Fehr and Gachter allowed players to fine other players credits, at a cost to themselves. This is true altruistic punishment because the groups change after each round, and the players are anonymous. There may have been no direct benefit to fining other players, but players fined often and they fined hard – and, as you’d expect, they chose to fine other players who hadn’t chipped in on that round. The effect on cooperation was electric. With altruistic punishment, the average amount each player contributed rose and rose, instead of declining. The fine system allowed cooperation between groups of strangers who wouldn’t meet again, overcoming the challenge of the free rider problem.
How does this relate to why motorists hate cyclists? The key is in a detail from that classic 2002 paper. Did the players in this game sit there calmly calculating the odds, running game theory scenarios in their heads and reasoning about cost/benefit ratios? No, that wasn’t the immediate reason people fined players. They dished out fines because they were mad as hell. Fehr and Gachter, like the good behavioural experimenters they are, made sure to measure exactly how mad that was, by asking players to rate their anger on a scale of one to seven in reaction to various scenarios. When players were confronted with a free-rider, almost everyone put themselves at the upper end of the anger scale. Fehr and Gachter describe these emotions as a “proximate mechanism”. This means that evolution has built into the human mind a hatred of free-riders and cheaters, which activates anger when we confront people acting like this – and it is this anger which prompts altruistic punishment. In this way, the emotion is evolution’s way of getting us to overcome our short-term self-interest and encourage collective social life.
So now we can see why there is an evolutionary pressure pushing motorists towards hatred of cyclists. Deep within the human psyche, fostered there because it helps us co-ordinate with strangers and so build the global society that is a hallmark of our species, is an anger at people who break the rules, who take the benefits without contributing to the cost. And cyclists trigger this anger when they use the roads but don’t follow the same rules as cars.
Now cyclists reading this might think “but the rules aren’t made for us – we’re more vulnerable, discriminated against, we shouldn’t have to follow the rules.” Perhaps true, but irrelevant when other road-users see you breaking rules they have to keep. Maybe the solution is to educate drivers that cyclists are playing an important role in a wider game of reducing traffic and pollution. Or maybe we should just all take it out on a more important class of free-riders, the tax-dodgers.
My BBC Future column from last week. The original is here. I started out trying to write about research using economic games with apes and monkeys but I got so bogged down in the literature I switched to this neat experiment instead. Ed Yong is a better man than me and wrote a brilliant piece about that research, which you can find here.
It’s a question humanity has repeatedly asked itself, and one way to find out is to take a closer look at the behaviour of babies… and use puppets.
Fundamentally speaking, are humans good or bad? It’s a question that has been asked repeatedly throughout human history. For thousands of years, philosophers have debated whether we have a basically good nature that is corrupted by society, or a basically bad nature that is kept in check by society. Psychology has uncovered some evidence which might give the old debate a twist.
One way of asking about our most fundamental characteristics is to look at babies. Babies’ minds are a wonderful showcase for human nature. Babies are humans with the absolute minimum of cultural influence – they don’t have many friends, have never been to school and haven’t read any books. They can’t even control their own bowels, let alone speak the language, so their minds are as close to innocent as a human mind can get.
The only problem is that the lack of language makes it tricky to gauge their opinions. Normally we ask people to take part in experiments, giving them instructions or asking them to answer questions, both of which require language. Babies may be cuter to work with, but they are not known for their obedience. What’s a curious psychologist to do?
Fortunately, you don’t necessarily have to speak to reveal your opinions. Babies will reach for things they want or like, and they will tend to look longer at things that surprise them. Ingenious experiments carried out at Yale University in the US used these measures to look at babies’ minds. Their results suggest that even the youngest humans have a sense of right and wrong, and, furthermore, an instinct to prefer good over evil.
How could the experiments tell this? Imagine you are a baby. Since you have a short attention span, the experiment will be shorter and loads more fun than most psychology experiments. It was basically a kind of puppet show; the stage was a scene featuring a bright green hill, and the puppets were cut-out shapes with stick-on wobbly eyes; a triangle, a square and a circle, each in their own bright colours. What happened next was a short play, as one of the shapes tried to climb the hill, struggling up and falling back down again. Next, the other two shapes got involved, with either one helping the climber up the hill, by pushing up from behind, or the other hindering the climber, by pushing back from above.
Already something amazing, psychologically, is going on here. All humans are able to interpret the events in the play in terms of the story I’ve described. The puppets are just shapes. They don’t make human sounds or display human emotions. They just move about, and yet everyone reads these movements as purposeful, and revealing of their characters. You can argue that this “mind reading”, even in infants, shows that it is part of our human nature to believe in other minds.
What happened next tells us even more about human nature. After the show, infants were given the choice of reaching for either the helping or the hindering shape, and it turned out they were much more likely to reach for the helper. This can be explained if they are reading the events of the show in terms of motivations – the shapes aren’t just moving at random, but their movements showed the infant that the shape pushing uphill “wants” to help out (and so is nice) and the shape pushing downhill “wants” to cause problems (and so is nasty).
The researchers used an encore to confirm these results. Infants saw a second scene in which the climber shape made a choice to move towards either the helper shape or the hinderer shape. The time infants spent looking in each of the two cases revealed what they thought of the outcome. If the climber moved towards the hinderer the infants looked significantly longer than if the climber moved towards the helper. This makes sense if the infants were surprised when the climber approached the hinderer. Moving towards the helper shape would be the happy ending, and obviously it was what the infant expected. If the climber moved towards the hinderer it was a surprise, as much as you or I would be surprised if we saw someone give a hug to a man who had just knocked him over.
The way to make sense of this result is that infants, with their pre-cultural brains, had expectations about how people should act. Not only do they interpret the movement of the shapes as resulting from motivations, but they prefer helping motivations over hindering ones.
This doesn’t settle the debate over human nature. A cynic would say that it just shows that infants are self-interested and expect others to be the same way. At a minimum though, it shows that tightly bound into the nature of our developing minds is the ability to make sense of the world in terms of motivations, and a basic instinct to prefer friendly intentions over malicious ones. It is on this foundation that adult morality is built.
Psychologically speaking, it is a tricky task, because our minds find it difficult to appreciate how the world looks to someone who doesn’t know it yet.
We’ve all been there – the directions sounded so clear when we were told them. Every step of the journey seemed obvious, we thought we had understood the directions perfectly. And yet here we are miles from anywhere, after dark, in a field arguing about whether we should have gone left or right at the last turn, whether we’re going to have to sleep here now, and exactly whose fault it is.
The truth is we shouldn’t be too hard on ourselves. Psychologically speaking, giving good directions is a particularly difficult task.
The reason we find it hard to give good directions is because of the “curse of knowledge”, a psychological quirk whereby, once we have learnt something, we find it hard to appreciate how the world looks to someone who doesn’t know it yet. We don’t just want people to walk a mile in our shoes, we assume they already know the route. Once we know the way to a place we don’t need directions, and descriptions like “it’s the left about halfway along” or “the one with the little red door” seem to make full and complete sense.
But if you’ve never been to a place before, you need more than a description of a place; you need an exact definition, or a precise formula for finding it. The curse of knowledge is the reason why, when I had to search for a friend’s tent in a field, their advice of “it’s the blue one” seemed perfectly sensible to them and was completely useless for me, as I stood there staring blankly at hundreds of blue tents.
This same quirk is why teaching is so difficult to do well. Once you are familiar with a topic it is very hard to understand what someone who isn’t familiar with it needs to know. The curse of knowledge isn’t a surprising flaw in our mental machinery – really it is just a side effect of our basic alienation from each other. We all have different thoughts and beliefs, and we have no special access to each other’s minds. A lot of the time we can fake understanding by mentally simulating what we’d want in someone else’s position. We have thoughts along the lines of “I’d like it if there was one bagel left in the morning” and therefore conclude “so I won’t eat all the bagels before my wife gets up in the morning”. This shortcut allows us to appear considerate, without doing any deep thought about what other people really know and want.
“OK, now what?”
This will only get you so far. Some occasions call for a proper understanding of other people’s feelings and beliefs. Giving directions is one, but so is understanding myriad aspects of everyday conversation which involve feelings, jokes or suggestions. For illustration, consider the joke that some research has suggested may be the world’s funniest (although what exactly that means is another story):
Two hunters are out in the woods when one of them collapses. He doesn’t seem to be breathing and his eyes are glazed. The other guy whips out his phone and calls the emergency services. He gasps, “My friend is dead! What can I do?” The operator says “Calm down. I can help. First, let’s make sure he’s dead.” There is a silence, then a shot is heard. Back on the phone, the guy says “OK, now what?”
The joke is funny because you can appreciate that the hunter had two possible interpretations of the operator’s instructions, and chose the wrong one. To appreciate the interpretations you need to have a feel for what the operator and the hunter know and desire (and to be surprised when the hunter’s desire to do anything to help isn’t over-ruled by a desire to keep his friend alive).
To do this mental simulation you recruit what psychologists call your “Theory of Mind”, the ability to think about others’ beliefs and desires. Our skill at Theory of Mind is one of the things that distinguishes humans from all other species – only chimpanzees seem to have anything approaching a true understanding that others might believe different things from themselves. We humans, on the other hand, seem primed from early infancy to practice thinking about how other humans view the world.
The fact that the curse of knowledge exists tells us how hard a problem it is to think about other people’s minds. Like many hard cognitive problems – such as seeing, for example – the human brain has evolved specialist mechanisms which are dedicated to solving it for us, so that we don’t normally have to expend conscious effort. Most of the time we get the joke, just as most of the time we simply open our eyes and see the world.
The good news is that your Theory of Mind isn’t completely automatic – you can use deliberate strategies to help you think about what other people know. A good one when writing is simply to force yourself to check every term to see if it is jargon – something you’ve learnt the meaning of but not all your readers will know. Another strategy is to tell people what they can ignore, as well as what they need to know. This works well with directions (and results in instructions like “keep going until you see the red door. There’s a pink door, but that’s not it”).
With a few tricks like this, and perhaps some general practice, we can turn the habit of reading other people’s minds – what some psychologists call “mind-mindedness” – into a routine, and so improve our Theory of Mind abilities. (Something that most of us remember struggling hard to do in adolescence.) Which is a good thing, since good theory of mind is what makes a considerate partner, friend or co-worker – and a good giver of directions.