Radical embodied cognition: an interview with Andrew Wilson

The computational approach is the orthodoxy in psychological science. We try to understand the mind using the metaphors of information processing and the storage and retrieval of representations. These ideas are so common that it is easy to forget that there is any alternative. Andrew Wilson is on a mission to remind us that there is an alternative – a radical, non-representational, non-information processing take on what cognition is.

I sent him a few questions by email. After he answered these, and some follow-up questions, we both edited and agreed on the result, which you can read below.

 

Q1. Is it fair to say you are at odds with lots of psychology, theoretically? Can you outline why?

Psychology wants to understand what causes our behaviour. Cognitive psychology explanations are that behaviour is caused by internal states of the mind (or brain, if you like). These states are called mental representations, and they are models/simulations of the world that we use to figure out what to do and when to do it.

Cognitive psychology thinks we have representations because it assumes we have very poor sensory access to the world, e.g. vision supposedly begins with a patchy 2D image projected onto the back of the eye. We need these models to literally fill in the gaps by making an educated guess (‘inference’) about what caused those visual sensations.

My approach is called radical embodied cognitive psychology; ‘radical’ just means ‘no representations’. It is based on the work of James J Gibson. He was a perceptual psychologist who demonstrated that there is actually rich perceptual information about the world, and that we use this information. This is why perception and action are so amazingly successful most of the time, which is important because failures of perception have serious consequences for your health and wellbeing (e.g. falling on ice).

The most important consequence of this discovery is that when we have access to this information, we don’t need those internal models anymore. This then means that whatever the brain is doing, it’s not building models of the world in order to cause our behaviour. We are embedded in our environments and our behaviour is caused by the nature of that embedding (specifically, which information variables we are using for any given task).

So I ask very different questions than the typical psychologist: instead of ‘what mental model lets me solve this task?’ I ask ‘what information is there to support the observed behaviour and can I find evidence that we use it?’. When we get the right answer to the information question, we have great success in explaining and then predicting behaviour, which is actually the goal of psychology.

 

Q2. The idea that there are no mental representations is hard to get your head around. What about situations where behaviour seems to be based on things which aren’t there, like imagination, illusions or predictions?

First, saying that there are no mental representations is not saying that the brain is not up to something. This is a surprisingly common mistake, but I think it’s due to the fact that cognitive psychologists have come to equate ‘brain activity’ with ‘representing’, and denying the latter means denying the former (see Is Embodied Cognition a No-Brainer?).

Illusions simply reveal how important it is to perception that we can move and explore. They are all based on a trick and they almost always require an Evil Psychologist™ lurking in the background. Specifically, illusions artificially restrict access to information so that the world looks like it’s doing one thing when it is really doing another. They only work if you don’t let people do anything to reveal the trick. Most visual illusions are revealed as such by exploring them, e.g by looking at them from a different perspective (e.g. the Ames Room).

Imagination and prediction are harder to talk about in this framework, but only because no one’s really tried. For what it’s worth, people are terrible at actively predicting things, and whatever imagination is, it will be a side-effect of our ability to engage with the real world, not part of how we engage with the real world.

 

Q3. Is this radical approach really denying the reality of cognitive representations, or just using a different descriptive language in which they don’t figure? In other words, can you and the cognitivists both be right?

If the radical hypothesis is right, then a lot of cognitive theories will be wrong. Those theories all assume that information comes into the brain, is processed by representations and then output as behaviour. If we successfully replace representations with information, all those theories will be telling the wrong story. ‘Interacting with information’ is a completely different job description for the brain than ‘building models of the world’. This is another reason why it’s ‘radical’.

 

Q4. Even if I concede that you can think of the mind like this, can you convince me that I should? Why is it useful? What does this approach do for cognitive science that the conventional approach isn’t, or can’t?

There are two reasons, I think. The first is empirical; this approach works very, very well. Whenever a researcher works through a problem using this approach, they find robust answers that stand up to extended scrutiny in the lab. These solutions then make novel predictions that also perform well – examples are topics like the outfielder problem and the A-not-B error [see below for references]. Cognitive psychology is filled with small, difficult-to-replicate effects; this is actually a hint that we aren’t asking the right questions. Radical embodied cognitive science tends to produce large, robust and interpretable effects, which I take as a hint that our questions are closer to the mark.

The second is theoretical. The major problem with representations is that it’s not clear where they get their content from. Representations supposedly encode knowledge about the world that we use to make inferences to support perception, etc. But if we have such poor perceptual contact with the world that we need representations, how did we ever get access to the knowledge we needed to encode? This grounding problem is a disaster. Radical embodiment solves it by never creating it in the first place – we are in excellent perceptual contact with our environments, so there are no gaps for representations to fill, therefore no representations that need content.

 

Q5. Who should we be reading to get an idea of this approach?

‘Beyond the Brain’ by Louise Barrett. It’s accessible and full of great stuff.

‘Radical Embodied Cognitive Science’ by Tony Chemero. It’s clear and well written but it’s pitched at trained scientists more than the generally interested lay person.

‘Embodied Cognition’ by Lawrence Shapiro, which clearly lays out all the various flavours of ‘embodied cognition’. My work is the ‘replacement’ hypothesis.

‘The Ecological Approach to Visual Perception’ by James J Gibson is an absolute masterpiece and the culmination of all his empirical and theoretical work.

I run a blog at http://psychsciencenotes.blogspot.co.uk/ with Sabrina Golonka where we discuss all this a lot, and we tweet @PsychScientists. We’ve also published a few papers on this, the most relevant of which is ‘Embodied Cognition is Not What You Think It Is’.

 

Q6. And finally, can you point us to a few blog posts you’re proudest of which illustrate this way of looking at the world?

What Else Could It Be? (where Sabrina looks at the question, what if the brain is not a computer?)

Mirror neurons, or, What’s the matter with neuroscience? (how the traditional model can get you into trouble)

Prospective Control – The Outfielder problem (an example of the kind of research questions we ask)

The celebrity analysis that killed celebrity analysis

Most ‘psy’ professionals are banned by their codes of conduct from conducting ‘celebrity analysis’ and commenting on the mental state of specific individuals in the media. This is a sensible guideline but I didn’t realise it was triggered by a specific event.

Publicly commenting on a celebrity’s psychological state is bad form. If you’ve worked with them professionally, you’re likely bound by confidentiality; if you’ve not, you probably don’t know what you’re talking about, and doing so in the media is likely to do them harm.

Despite this, it happens surprisingly often, usually by ‘celebrity psychologists’ in gossip columns and third-rate TV. Sadly, I don’t know of a single case where a professional organisation has tried to discipline the professional for doing so – although it must be said that mostly it’s done by self-appointed ‘experts’ rather than actual psychologists.

A new article in Journal of the American Academy of Psychiatry and the Law traced the history of how this form of ‘celebrity analysis’ first got banned in the US under the ‘Goldwater Rule’.

The Goldwater Rule stemmed from a scandal surrounding a 1964 publication in Fact magazine that included anonymous psychiatric opinions commenting on Senator Barry Goldwater‘s psychological fitness to be President of the United States. Fact, a short-lived magazine published in the 1960s, carried opinionated articles that covered a broad range of controversial topics. In the 1964 September/October issue entitled, “The Unconscious of a Conservative: A Special Issue on the Mind of Barry Goldwater,” the opinions of over 1,800 psychiatrists commenting on Goldwater’s psychological fitness were published…

Of the 2,417 respondents, 571 deferred from providing comments, 657 responded that Goldwater was fit to be president, and 1,189 responded that he was not fit. None of the psychiatrists whose comments were published had examined Goldwater, however, and none had permission from him to issue their comments publicly. In the article, Goldwater was described with comments including “lack of maturity”, “impulsive”, “unstable”, “megalomaniac”, “very dangerous man”, “obsessive-compulsive neurosis”, and “suffering a chronic psychosis”… Much was made of two nervous breakdowns allegedly suffered by Goldwater, and there was commentary warning that he might launch a nuclear attack if placed under a critical amount of stress as president.

Goldwater responded by bringing libel action against Ralph Ginzburg, Warren Boroson, and Fact… The United States District Court for the Southern District of New York returned a verdict in favor of the senator… The AMA and APA immediately condemned the remarks made in the Fact article after its publication. Individual psychiatrists also spoke out against the ethics of the published comments.

Most people who are subject to ‘celebrity analysis’ don’t have the luxury of bringing libel suits to defend themselves, but it’s probably worth remembering that if someone seems to be giving a professional opinion on the psychological state of a person they’ve never met, they’re probably talking rubbish.
 

Link to article on ‘Psychiatrists Who Interact With the Media’

Evidence based debunking

Fed up with futile internet arguments, a bunch of psychologists investigated how best to correct false ideas. Tom Stafford discovers how to debunk properly.

We all resist changing our beliefs about the world, but what happens when some of those beliefs are based on misinformation? Is there a right way to correct someone when they believe something that’s wrong?

Stephen Lewandowsky and John Cook set out to review the science on this topic, and even carried out a few experiments of their own. This effort led to their “Debunker’s Handbook“, which gives practical, evidence-based techniques for correcting misinformation about, say, climate change or evolution. Yet the findings apply to any situation where you find the facts are falling on deaf ears.

The first thing their review turned up is the importance of “backfire effects” – when telling people that they are wrong only strengthens their belief. In one experiment, for example, researchers gave people newspaper corrections that contradicted their views and politics, on topics ranging from tax reform to the existence of weapons of mass destruction. The corrections were not only ignored – they entrenched people’s pre-existing positions.

Backfire effects pick up strength when you have no particular reason to trust the person you are talking to. This perhaps explains why climate sceptics with more scientific education tend to be the most sceptical that humans are causing global warming.

The irony is that understanding backfire effects requires that we debunk a false understanding of our own. Too often, argue Lewandowsky and Cook, communicators assume a ‘deficit model’ in their interactions with the misinformed. This is the idea that we have the right information, and all we need to do to make people believe is to somehow “fill in” the deficit in other people’s understanding. Just telling people the evidence for the truth will be enough to replace their false beliefs. Beliefs don’t work like that.

Psychological factors affect how we process information – such as what we already believe, who we trust and how we remember. Debunkers need to work with this, rather than against it, if they want the best chance of being believed.

The most important thing is to provide an alternative explanation. An experiment by Hollyn Johnson and Colleen Seifert shows how to persuade people better. These two psychologists recruited participants to listen to news reports about a fictional warehouse fire, and then answer some comprehension questions.

Some of the participants were told that the fire was started by a short circuit in a closet near some cylinders containing potentially explosive gas. Yet when this information was corrected – by saying the closet was empty – they still clung to the belief.

A follow-up experiment showed the best way to effectively correct such misinformation. The follow-up was similar to the first experiment, except that it involved participants who were given a plausible alternative explanation: that evidence was found that arson caused the fire. It was only those who were given a plausible alternative that were able to let go of the misinformation about the gas cylinders.

Lewandowsky and Cook argue that experiments like these show the dangers of arguing against a misinformed position. If you try and debunk a myth, you may end up reinforcing that belief, strengthening the misinformation in people’s minds without making the correct information take hold.

What you must do, they argue, is to start with the plausible alternative (that obviously you believe is correct). If you must mention a myth, you should mention this second, and only after clearly warning people that you’re about to discuss something that isn’t true.

This debunking advice is also worth bearing in mind if you find yourself clinging to your own beliefs in the face of contradictory facts. You can’t be right all of the time, after all.

Read more about the best way to win an argument.

If you have an everyday psychological phenomenon you’d like to see written about in these columns please get in touch @tomstafford or ideas@idiolect.org.uk. Thanks to Ullrich Ecker for advice on this topic.

This is my BBC Future column from last week, original here

Implicit racism in academia

Subtle racism is prevalent in US and UK universities, according to a new paper commissioned by the Leadership Foundation for Higher Education and released last week, reports The Times Higher Education.

Black professors surveyed for the paper said they were treated differently than white colleagues, in the form of receiving less eye contact or fewer requests for their opinion; they felt excluded in meetings and experienced undermining of their work. “I have to downplay my achievements sometimes to be accepted,” said one academic, explaining that colleagues didn’t expect a black woman to be clever and articulate. Senior managers often dismiss racist incidents as conflicts of personalities, or believe them to be exaggerated, the paper found.

And all this in institutions where almost all staff would say they are not just “not racist” but where many would say they were actively committed to fighting prejudice.

This seems like a clear case of the operation of implicit biases – where there is a contradiction between people’s egalitarian beliefs and their racist actions. Implicit biases are an industry in psychology, where tools such as the implicit association test (IAT) are used to measure them. The IAT is a fairly typical cognitive psychology-type study: individuals sit in front of a computer and the speed of their reactions to stimuli are measured (the stimuli are things like faces of people with different ethnicities, which is how we get out a measure of implicit prejudice).
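As a rough illustration of the kind of measure involved – this is a hypothetical sketch, not the published scoring procedure, and every number in it is invented – IAT-style scores are derived by comparing reaction times between blocks where the category pairings are congruent versus incongruent with the association being tested:

```python
# Hypothetical sketch of IAT-style scoring. Real IATs use a more
# elaborate procedure (error penalties, latency trimming, per-block
# standard deviations); this just shows the core idea: slower responses
# on incongruent pairings, scaled by overall variability.
# All data below are invented.
from statistics import mean, stdev

def iat_d_score(congruent_rts, incongruent_rts):
    """Difference in mean reaction time (ms) between incongruent and
    congruent blocks, divided by the pooled standard deviation of all
    trials. Larger positive values suggest a stronger implicit
    association with the congruent pairing."""
    pooled_sd = stdev(congruent_rts + incongruent_rts)
    return (mean(incongruent_rts) - mean(congruent_rts)) / pooled_sd

# Invented example latencies in milliseconds
congruent = [620, 580, 640, 610, 590]
incongruent = [750, 720, 790, 760, 740]
print(round(iat_d_score(congruent, incongruent), 2))  # → 1.81
```

The point of the scaling step is that a raw millisecond difference means little on its own; dividing by the spread of all latencies puts scores from fast and slow responders on a comparable footing.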

The LFHE paper is a nice opportunity to connect this lab measure with the reality of implicit bias ‘in the wild’. In particular, along with some colleagues, I have been interested in exactly what an implicit bias is, psychologically.

Commonly, implicit biases are described as if they are unconscious or somehow outside of the awareness of those holding them. Unfortunately, this hasn’t been shown to be the case (in fact the opposite may be true – there’s some evidence that people can predict their IAT scores fairly accurately). Worse, the very idea of being unaware of a bias is badly specified. Does ‘unaware’ mean you aren’t aware of your racist feelings? Of your racist behaviour? Or unaware that the feelings, in this case, have produced the behaviour?

The racist behaviours reported in the paper – avoiding eye-contact, assuming that discrimination is due to personalities and not race, etc – could all work at any or all of these levels of awareness. Although the behaviours are subtle, and contradict people’s expressed, anti-racist, opinions, the white academics could still be completely aware. They could know that black academics make them feel awkward or argumentative, and know that this is due to their race. Or they could be completely unaware. They could know that they don’t trust the opinions of certain academics, for example, but not realise that race is a factor in why they feel this way.

Just because the behaviour is subtle, or the psychological phenomenon is called ‘implicit’, doesn’t mean we can be certain about what people really know about it. The real value in the notion of implicit bias is that it reminds us that prejudice can exist in how we behave, not just in what we say and believe.

Full disclosure: I am funded by the Leverhulme Trust to work on a project looking at the philosophy and psychology of implicit bias. This post is cross-posted on the project blog. Run your own IAT with our open-source code: Open-IAT!

A thought lab in the sun

Neuroscientist Karl Friston, being an absolute champ, in an interview in The Lancet Psychiatry

“I get up very late, I go and smoke my pipe in the conservatory, hopefully in the sunshine with a nice cup of coffee, and have thoughts until I can raise the energy to have a bath. I don’t normally get to work until mid day.”

I have to say, I have a very similar approach which is getting up very early, drinking Red Bull, not having any thoughts, and raising the energy to catch a bus to an inpatient ward.

The man clearly doesn’t know the good life when he sees it.

The Lancet Psychiatry is one of the new speciality journals from the big names in medical publishing.

It seems to be publishing material from the correspondence and ‘insight’ sections (essays and the like) without a paywall, so there’s often plenty for the general reader to catch up on. It also has a podcast which is aimed at mental health professionals.
 

Link to interview with Karl Friston.

The best way to win an argument

How do you change someone’s mind if you think you are right and they are wrong? Psychology reveals the last thing to do is the tactic we usually resort to.

You are, I’m afraid to say, mistaken. The position you are taking makes no logical sense. Just listen up and I’ll be more than happy to elaborate on the many, many reasons why I’m right and you are wrong. Are you feeling ready to be convinced?

Whether the subject is climate change, the Middle East or forthcoming holiday plans, this is the approach many of us adopt when we try to convince others to change their minds. It’s also an approach that, more often than not, leads to the person on the receiving end hardening their existing position. Fortunately research suggests there is a better way – one that involves more listening, and less trying to bludgeon your opponent into submission.

A little over a decade ago Leonid Rozenblit and Frank Keil from Yale University suggested that in many instances people believe they understand how something works when in fact their understanding is superficial at best. They called this phenomenon “the illusion of explanatory depth“. They began by asking their study participants to rate how well they understood how things like flushing toilets, car speedometers and sewing machines worked, before asking them to explain what they understood and then answer questions on it. The effect they revealed was that, on average, people in the experiment rated their understanding as much worse after it had been put to the test.

What happens, argued the researchers, is that we mistake our familiarity with these things for the belief that we have a detailed understanding of how they work. Usually, nobody tests us and if we have any questions about them we can just take a look. Psychologists call this idea that humans have a tendency to take mental short cuts when making decisions or assessments the “cognitive miser” theory.

Why would we bother expending the effort to really understand things when we can get by without doing so? The interesting thing is that we manage to hide from ourselves exactly how shallow our understanding is.

It’s a phenomenon that will be familiar to anyone who has ever had to teach something. Usually, it only takes the first moments when you start to rehearse what you’ll say to explain a topic, or worse, the first student question, for you to realise that you don’t truly understand it. All over the world, teachers say to each other “I didn’t really understand this until I had to teach it”. Or as researcher and inventor Mark Changizi quipped: “I find that no matter how badly I teach I still learn something”.

Explain yourself

Research published last year on this illusion of understanding shows how the effect might be used to convince others they are wrong. The research team, led by Philip Fernbach, of the University of Colorado, reasoned that the phenomenon might hold as much for political understanding as for things like how toilets work. Perhaps, they figured, people who have strong political opinions would be more open to other viewpoints, if asked to explain exactly how they thought the policy they were advocating would bring about the effects they claimed it would.

Recruiting a sample of Americans via the internet, they polled participants on a set of contentious US policy issues, such as imposing sanctions on Iran, healthcare and approaches to carbon emissions. One group was asked to give their opinion and then provide reasons for why they held that view. This group got the opportunity to put their side of the issue, in the same way anyone in an argument or debate has a chance to argue their case.

Those in the second group did something subtly different. Rather than provide reasons, they were asked to explain how the policy they were advocating would work. They were asked to trace, step by step, from start to finish, the causal path from the policy to the effects it was supposed to have.

The results were clear. People who provided reasons remained as convinced of their positions as they had been before the experiment. Those who were asked to provide explanations softened their views, and reported a correspondingly larger drop in how they rated their understanding of the issues. People who had previously been strongly for or against carbon emissions trading, for example, tended to become more moderate – ranking themselves as less certain in their support or opposition to the policy.

So this is something worth bearing in mind next time you’re trying to convince a friend that we should build more nuclear power stations, that the collapse of capitalism is inevitable, or that dinosaurs co-existed with humans 10,000 years ago. Just remember, however, there’s a chance you might need to be able to explain precisely why you think you are correct. Otherwise you might end up being the one who changes their mind.

This is my BBC Future column from last week. The original is here.

Research Digest #3: Getting to grips with implicit bias

My third and final post at the BPS Research Digest is now up: Getting to grips with implicit bias. Here’s the intro:

Implicit attitudes are one of the hottest topics in social psychology. Now a massive new study directly compares methods for changing them. The results are both good and bad for those who believe that some part of prejudice is our automatic, uncontrollable, reactions to different social groups.

All three studies I covered (#1, #2, #3) use large behavioural datasets, something I’m particularly keen on in my own work.

Link:  Getting to grips with implicit bias

Lou Reed has left the building

Chronicler of the wild side, Lou Reed, has died. Reed was particularly notable for students of human nature for his descriptions of drugs, madness and his own experience of psychiatry.

We’ve touched on his outrageous performance to the New York Society for Clinical Psychiatry before and his songs about or featuring drug use are legendary.

But there was one song that was particularly notable – not least for describing, from his own experience, being ‘treated’ for homosexuality with electroshock therapy when he was a teenager.

Kill Your Sons, released in 1974 (audio), is just a straight-out attack on the psychiatrists that treated him:

All your two-bit psychiatrists
are giving you electroshock
They said, they’d let you live at home with mom and dad
instead of mental hospitals
But every time you tried to read a book
you couldn’t get to page 17
‘Cause you forgot where you were
so you couldn’t even read

Here Reed describes the effects on memory that are common just after electroconvulsive therapy. In this case, forgetting what you’ve just read.

The last verse also describes some of his other contacts with psychiatry, mentioning specific psychiatric clinics and medications:

Creedmore treated me very good
but Paine Whitney was even better
And when I flipped out on PHC
I was so sad, I didn’t even get a letter
All of the drugs, that we took
it really was lots of fun
But when they shoot you up with Thorazine on crystal smoke
you choke like a son of a gun

The last line seems to refer to the effect of being given a dopamine-inhibiting antipsychotic when you’re on a dopamine boosting amphetamine – presumably after being taken to a psychiatric clinic while still high. Not a pleasant comedown I would imagine.

I have no idea what ‘PHC’ refers to, though. I’m guessing it’s a psychiatric treatment from the 60s.

It’s interesting that the song was released the year after homosexuality was removed from the DSM in 1973, although it’s never been clear whether this was intentional on Reed’s part or not.
 

Link to YouTube audio of Kill Your Sons.

Race perception isn’t automatic

Last week’s column for BBC Future describes a neat social psychology experiment from an unlikely source. Three evolutionary psychologists reasoned that claims that we automatically categorise people by ethnicity must be wrong. Here’s how they set out to prove it. The original column is here.

For years, psychologists thought we instantly label each other by ethnicity. But one intriguing study proposes this is far from inevitable, with obvious implications for tackling racism.

When we meet someone we tend to label them in certain ways. “Tall guy” you might think, or “Ugly kid”. Lots of work in social psychology suggests that there are some categorisations that spring faster to mind. So fast, in fact, that they can be automatic. Sex is an example: we tend to notice if someone is a man or a woman, and remember that fact, without any deliberate effort. Age is another example. You can see this in the way people talk about others. If you said you went to a party and met someone, most people wouldn’t let you continue with your story until you said if it was a man or a woman, and there’s a good chance they’d also want to know how old they were too.

Unfortunately, a swathe of evidence from the 1980s and 1990s also seemed to suggest that race is an automatic categorisation, in that people effortlessly and rapidly identified and remembered which ethnic group an individual appeared to belong to. “Unfortunate”, because if perceiving race is automatic then it lays a foundation for racism, and appears to put a limit on efforts to educate people to be “colourblind”, or put aside prejudices in other ways.

Over a decade of research failed to uncover experimental conditions that could prevent people instinctively categorising by race, until a trio of evolutionary psychologists came along with a very different take on the subject. Now, it seems only fair to say that evolutionary psychologists have a mixed reputation among psychologists. As a flavour of psychology it has been associated with political opinions that tend towards the conservative. Often, scientific racists claim to base their views on some jumbled version of evolutionary psychology (scientific racism is racism dressed up as science, not racism based on science, in case you wondered). So it was a delightful surprise when researchers from one of the world centres for evolutionary psychology intervened in the debate on social categorisation, by conducting an experiment they claimed showed that labelling people by race was far less automatic and inevitable than all previous research seemed to show.

Powerful force

The research used something called a “memory confusion protocol”. This works by asking experiment participants to remember a series of pictures of individuals, who vary along various dimensions – for example, some have black hair and some blond, some are men, some women, etc. When participants’ memories are tested, the errors they make reveal something about how they judged the pictures of individuals – what sticks in their mind most and least. If a participant more often confuses a black-haired man with a blond-haired man, it suggests that the category of hair colour is less important than the category of gender (and similarly, if people rarely confuse a man for a woman, that also shows that gender is the stronger category).
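To make the logic of the protocol concrete, here is a hypothetical sketch of how such confusion errors might be scored – the function, the team assignments and the error data are all invented for illustration, not taken from the published study:

```python
# Hedged sketch of scoring a memory confusion protocol: each error is a
# misattribution of a statement from one pictured individual to another,
# and we ask how often the wrongly chosen individual shares the target's
# category. All ids and data below are invented.

def confusion_index(errors, categories):
    """errors: list of (target_id, chosen_id) misattribution pairs.
    categories: dict mapping individual id -> category label.
    Returns the proportion of errors made within the same category;
    proportions well above chance suggest participants encoded the
    individuals using that category."""
    within = sum(1 for target, chosen in errors
                 if categories[target] == categories[chosen])
    return within / len(errors)

# Invented example: 8 individuals split into yellow and grey teams
teams = {i: ('yellow' if i < 4 else 'grey') for i in range(8)}
errors = [(0, 1), (2, 3), (4, 5), (6, 7), (1, 6)]  # made-up errors
print(confusion_index(errors, teams))  # → 0.8
```

In the actual study the comparison is between within-category and between-category error rates relative to chance levels; the sketch above collapses that into a single proportion for brevity.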

Using this protocol, the researchers tested the strength of categorisation by race, something all previous efforts had shown was automatic. The twist they added was to throw in another powerful psychological force – group membership. People had to remember individuals who wore either yellow or grey basketball shirts, and whose pictures were presented alongside statements indicating which team they were in. Without the shirts, the pattern of errors was clear: participants automatically categorised the individuals by their race (in this case: African American or Euro American). But with the coloured shirts, this automatic categorisation didn’t happen: people’s errors revealed that team membership had become the dominant category, not the race of the players.

It’s important to understand that the memory test was both a surprise – participants didn’t know it was coming up – and an unobtrusive measure of racial categorising. Participants couldn’t guess that the researchers were going to make inferences about how they categorised people in the pictures – so if they didn’t want to appear to perceive people on the basis of race, it wouldn’t be clear how they should change their behaviour to do this. Because of this we can assume we have a fairly direct measure of their real categorisation, unbiased by any desire to monitor how they appear.

So despite what dozens of experiments had appeared to show, this experiment created a situation where categorisation by race faded into the background. The explanation, according to the researchers, is that race is only important when it might indicate coalitional information – that is, whose team you are on. In situations where race isn’t correlated with coalition, it ceases to be important. This, they claim, makes sense from an evolutionary perspective. For most of our ancestors, age and gender would be important predictors of another person’s behaviour, but race wouldn’t – since most people lived in areas with no differences as large as the ones we associate with “race” today (a concept, incidentally, which has little currency among human biologists).

Since the experiment was published, the response from social psychologists has been muted. But supporting evidence is beginning to be reported, suggesting that the finding will hold. It’s an unfortunate fact of human psychology that we are quick to lump people into groups, even on the slimmest evidence. And once we’ve identified a group, it also seems automatic to jump to conclusions about what they are like. But this experiment suggests that although perceiving groups on the basis of race might be easy, it is far from inevitable.

What does it take to spark prejudice?

Short answer: surprisingly little. Continuing the theme of revisiting classic experiments in psychology, last week’s BBC Future column was on Tajfel’s Minimal Group Paradigm. The original is here. Next week we’re going to take this foundation and look at some evolutionary psychology of racism (hint: it won’t be what you’d expect).

How easy is it for the average fair-minded person to form biased, preconceived views of other groups? Surprisingly easy, according to psychology studies.

One of the least charming but most persistent aspects of human nature is our capacity to hate people who are different. Racism, sexism, ageism, it seems like all the major social categories come with their own “-ism”, each fuelled by regrettable prejudice and bigotry.

Our tendency for groupness appears to be so strong there seems little more for psychology to teach us. It’s not as if we need it proven that favouring our group over others is a common part of how people think – history provides all the examples we need. But one psychologist, Henri Tajfel, taught us something important. He showed exactly how little encouragement we need to treat people in a biased way because of the group they are in.

Any phenomenon like this in the real world comes entangled with a bunch of other, complicating phenomena. When we see prejudice in the everyday world it is hard to separate out psychological biases from the effects of history, culture and even pragmatism (sometimes people from other groups really are out to get you).

As a social psychologist, Tajfel was interested in the essential conditions of group prejudice. He wanted to know what it took to turn the average fair-minded human into their prejudiced cousin.

He wanted to create a microscope for looking at how we think when we’re part of a group, even when that group has none of the history, culture or practical importance that groups normally do. To look at this, he devised what has become known as the “minimal group paradigm”.

The minimal group paradigm works like this: participants in the experiment are divided into groups on some arbitrary basis. Maybe eye-colour, maybe what kind of paintings they like, or even by tossing a coin. It doesn’t matter what the basis for group membership is, as long as everyone gets a group and knows what it is. After being told they are in a group, participants are divided up so that they are alone when they make a series of choices about how rewards will be shared among other people in the groups. From this point on, group membership is entirely abstract. Nobody else can be seen, and other group members are referred to by an anonymous number. Participants make choices such as “Member Number 74 (group A) to get 10 points and Member 44 (group B) to get 8 points”, versus “Member Number 74 (group A) to get 2 points and Member 44 (group B) to get 6 points”, where the numbers are points which translate into real money.

You won’t be surprised to learn that participants show favouritism towards their own group when dividing the money. People in group A were more likely to choose the first option I gave above, rather than the second. What is more surprising is that people show some of this group favouritism even when it ends up costing them points – so people in group B sometimes choose the second option, or options like it, even though it provides fewer points than the first option. People tend to opt for the maximum total reward (as you’d expect from the fair-minded citizen), but they also show a tendency to maximise the difference between the groups (what you’d expect from the prejudiced cousin).
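The two pulls described above – fair-minded total-maximising versus prejudiced difference-maximising – can be written as simple decision rules. This Python sketch uses the example point values from the text, seen from the perspective of a group B participant; the function names and the two-option menu are illustrative, not Tajfel’s actual allocation matrices.

```python
# The two options from the text: points for the group A member
# and the group B member.
options = [
    {"A": 10, "B": 8},  # option 1: more points for everyone
    {"A": 2,  "B": 6},  # option 2: fewer points, but a bigger gap in B's favour
]

def max_joint_profit(opts):
    """The 'fair-minded citizen' strategy: maximise total points."""
    return max(opts, key=lambda o: o["A"] + o["B"])

def max_difference(opts, own_group="B"):
    """The 'prejudiced cousin' strategy: maximise own group's lead."""
    other = "A" if own_group == "B" else "B"
    return max(opts, key=lambda o: o[own_group] - o[other])

print(max_joint_profit(options))  # picks option 1
print(max_difference(options))    # picks option 2: B gets less, but beats A
```

The surprise in Tajfel’s data is that real participants behave like a blend of these two rules, even though the difference-maximising rule costs their own group points.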

The effect may be small, but this is a situation where the groups have been plucked out of the air by the experimenters. Every participant knows which group he or she is in, but they also know that they weren’t in this group before they started the experiment, that their assignment was arbitrary or completely random, and that the groups aren’t going to exist in any meaningful way after the experiment. They also know that their choices won’t directly affect them (they are explicitly told that they won’t be given any choices to make about themselves). Even so, this situation is enough to evoke favouritism.

So, it seems we’ll take the most minimal of signs as a cue to treat people differently according to which group they are in. Tajfel’s work suggests that in-group bias is as fundamental to thinking as the act of categorisation itself. If we want to contribute to a fairer world we need to be perpetually on guard to avoid letting this instinct run away with itself.

BBC Column: Why cyclists enrage car drivers

Here is my latest BBC Future column. The original is here. This one proved to be more than usually controversial, not least because of some poorly chosen phrasing from yours truly. This is an updated version which makes what I’m trying to say clearer. If you think that I hate cyclists, or my argument relies on the facts of actual law breaking (by cyclists or drivers), or that I am making a claim about the way the world ought to be (rather than how people see it), then please check out this clarification I published on my personal blog after a few days of feedback from the column. One thing the experience has convinced me of is that cycling is a very emotional issue, and one people often interpret in very moral terms.

It’s not simply because they are annoying, argues Tom Stafford, it’s because they trigger a deep-seated rage within us by breaking the moral order of the road.

 

Something about cyclists seems to provoke fury in other road users. If you doubt this, try a search for the word “cyclist” on Twitter. As I write this one of the latest tweets is this: “Had enough of cyclists today! Just wanna ram them with my car.” This kind of sentiment would get people locked up if directed against an ethnic minority or religion, but it seems to be fair game, in many people’s minds, when directed against cyclists. Why all the rage?

I’ve got a theory, of course. It’s not because cyclists are annoying. It isn’t even because we have a selective memory for that one stand-out annoying cyclist over the hundreds of boring, non-annoying ones (although that probably is a factor). No, my theory is that motorists hate cyclists because they offend the moral order.

Driving is a very moral activity – there are rules of the road, both legal and informal, and there are good and bad drivers. The whole intricate dance of the rush-hour junction only works because everybody knows the rules and follows them: keeping in lane; indicating properly; first her turn, now mine, now yours. Then along come cyclists, innocently following what they see as the rules of the road, but doing things that drivers aren’t allowed to: overtaking queues of cars, moving at well below the speed limit or undertaking on the inside.

You could argue that driving, like so much of social life, is a game of coordination where we have to rely on each other to do the right thing. And like all games, there’s an incentive to cheat. If everyone else is taking their turn, you can jump the queue. If everyone else is paying their taxes you can dodge them, and you’ll still get all the benefits of roads and police.

In economics and evolution this is known as the “free rider problem”; if you create a common benefit  – like taxes or orderly roads – what’s to stop some people reaping the benefit without paying their dues? The free rider problem creates a paradox for those who study evolution, because in a world of selfish genes it appears to make cooperation unlikely. Even if a bunch of selfish individuals (or genes) recognise the benefit of coming together to co-operate with each other, once the collective good has been created it is rational, in a sense, for everyone to start trying to freeload off the collective. This makes any cooperation prone to collapse. In small societies you can rely on cooperating with your friends, or kin, but as a society grows the problem of free-riding looms larger and larger.

Social collapse

Humans seem to have evolved one way of enforcing order onto potentially chaotic social arrangements. This is known as “altruistic punishment”, a term used by Ernst Fehr and Simon Gachter in a landmark paper published in 2002 [4]. An altruistic punishment is a punishment that costs you as an individual, but doesn’t bring any direct benefit. As an example, imagine I’m at a football match and I see someone climb in without buying a ticket. I could sit and enjoy the game (at no cost to myself), or I could try to find security to have the guy thrown out (at the cost of missing some of the game). That would be altruistic punishment.

Altruistic punishment, Fehr and Gachter reasoned, might just be the spark that makes groups of unrelated strangers co-operate. To test this they created a co-operation game played by constantly shifting groups of volunteers, who never meet – they played the game from a computer in a private booth. The volunteers played for real money, which they knew they would take away at the end of the experiment. On each round of the game each player received 20 credits, and could choose to contribute up to this amount to a group project. After everyone had chipped in (or not), everybody (regardless of investment) got 40% of the collective pot.

Under the rules of the game, the best collective outcome would be if everyone put in all their credits, and then each player would get back more than they put in. But the best outcome for each individual was to free ride – to keep their original 20 credits, and also get the 40% of what everybody else put in. Of course, if everybody did this then that would be 40% of nothing.
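The payoff arithmetic can be made concrete with a short sketch. It assumes a four-player group and the 40% return rule described above; the specific contribution patterns are just illustrations, not the experiment’s actual data.

```python
ENDOWMENT = 20     # credits each player starts the round with
RETURN_RATE = 0.4  # everyone gets 40% of the collective pot

def payoffs(contributions):
    """Each player keeps what they didn't contribute, plus their
    share of the pot (paid to everyone regardless of investment)."""
    pot = sum(contributions)
    return [ENDOWMENT - c + RETURN_RATE * pot for c in contributions]

# Everyone cooperates: each player ends up better off than their endowment.
print(payoffs([20, 20, 20, 20]))  # [32.0, 32.0, 32.0, 32.0]

# One free rider: the defector does best of all.
print(payoffs([20, 20, 20, 0]))   # [24.0, 24.0, 24.0, 44.0]
```

The numbers show the dilemma directly: full cooperation beats universal defection, but whatever the others do, each individual does better by keeping their own credits.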

In this scenario what happened looked like a textbook case of the kind of social collapse the free rider problem warns of. On each successive turn of the game, the average amount contributed by players went down and down. Everybody realised that they could get the benefit of the collective pot without the cost of contributing. Even those who started out contributing a large proportion of their credits soon found out that not everybody else was doing the same. And once you see this it’s easy to stop chipping in yourself – nobody wants to be the sucker.

Rage against the machine

A simple addition to the rules reversed this collapse of co-operation, and that was the introduction of altruistic punishment. Fehr and Gachter allowed players to fine other players credits, at a cost to themselves. This is true altruistic punishment because the groups change after each round, and the players are anonymous. There may have been no direct benefit to fining other players, but players fined often and they fined hard – and, as you’d expect, they chose to fine other players who hadn’t chipped in on that round. The effect on cooperation was electric. With altruistic punishment, the average amount each player contributed rose and rose, instead of declining. The fine system allowed cooperation between groups of strangers who wouldn’t meet again, overcoming the challenge of the free rider problem.
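Here is a sketch of how the punishment rule changes that arithmetic. The fine parameters below (each fine point costs the punisher one credit and removes three from the target) follow the common design of such experiments, but treat them as assumptions rather than the exact parameters of Fehr and Gachter’s study.

```python
ENDOWMENT = 20
RETURN_RATE = 0.4
PUNISH_COST = 1  # cost to the punisher per fine point (assumed)
PUNISH_FINE = 3  # credits removed from the target per fine point (assumed)

def payoffs_with_punishment(contributions, fines):
    """fines[i][j] = fine points player i assigns to player j."""
    pot = sum(contributions)
    base = [ENDOWMENT - c + RETURN_RATE * pot for c in contributions]
    n = len(contributions)
    return [
        base[i]
        - PUNISH_COST * sum(fines[i])                       # cost of fining others
        - PUNISH_FINE * sum(fines[j][i] for j in range(n))  # fines received
        for i in range(n)
    ]

# Three cooperators each spend one point fining the free rider:
fines = [
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]
print(payoffs_with_punishment([20, 20, 20, 0], fines))  # [23.0, 23.0, 23.0, 35.0]
```

Punishing is genuinely altruistic here – each punisher ends the round a credit poorer – but it sharply narrows the free rider’s advantage, which is what makes contributing worthwhile again on later rounds.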

How does this relate to why motorists hate cyclists? The key is in a detail from that classic 2002 paper. Did the players in this game sit there calmly calculating the odds, running game theory scenarios in their heads and reasoning about cost/benefit ratios? No, that wasn’t the immediate reason people fined players. They dished out fines because they were mad as hell. Fehr and Gachter, like the good behavioural experimenters they are, made sure to measure exactly how mad that was, by asking players to rate their anger on a scale of one to seven in reaction to various scenarios. When players were confronted with a free-rider, almost everyone put themselves at the upper end of the anger scale. Fehr and Gachter describe these emotions as a “proximate mechanism”. This means that evolution has built into the human mind a hatred of free-riders and cheaters, which activates anger when we confront people acting like this – and it is this anger which prompts altruistic punishment. In this way, the emotion is evolution’s way of getting us to overcome our short-term self-interest and encourage collective social life.

So now we can see why there is an evolutionary pressure pushing motorists towards hatred of cyclists. Deep within the human psyche, fostered there because it helps us co-ordinate with strangers and so build the global society that is a hallmark of our species, is an anger at people who break the rules, who take the benefits without contributing to the cost. And cyclists trigger this anger when they use the roads but don’t follow the same rules as cars.

Now cyclists reading this might think “but the rules aren’t made for us – we’re more vulnerable, discriminated against, we shouldn’t have to follow the rules.” Perhaps true, but irrelevant when other road-users see you breaking rules they have to keep. Maybe the solution is to educate drivers that cyclists are playing an important role in a wider game of reducing traffic and pollution. Or maybe we should just all take it out on a more important class of free-riders, the tax-dodgers.

BBC Column: Are we naturally good or bad?

My BBC Future column from last week. The original is here. I started out trying to write about research using economic games with apes and monkeys but I got so bogged down in the literature I switched to this neat experiment instead. Ed Yong is a better man than me and wrote a brilliant piece about that research, which you can find here.

It’s a question humanity has repeatedly asked itself, and one way to find out is to take a closer look at the behaviour of babies… and use puppets.

Fundamentally speaking, are humans good or bad? It’s a question that has been asked repeatedly throughout human history. For thousands of years, philosophers have debated whether we have a basically good nature that is corrupted by society, or a basically bad nature that is kept in check by society. Psychology has uncovered some evidence which might give the old debate a twist.

One way of asking about our most fundamental characteristics is to look at babies. Babies’ minds are a wonderful showcase for human nature. Babies are humans with the absolute minimum of cultural influence – they don’t have many friends, have never been to school and haven’t read any books. They can’t even control their own bowels, let alone speak the language, so their minds are as close to innocent as a human mind can get.

The only problem is that the lack of language makes it tricky to gauge their opinions. Normally we ask people to take part in experiments, giving them instructions or asking them to answer questions, both of which require language. Babies may be cuter to work with, but they are not known for their obedience. What’s a curious psychologist to do?

Fortunately, you don’t necessarily have to speak to reveal your opinions. Babies will reach for things they want or like, and they will tend to look longer at things that surprise them. Ingenious experiments carried out at Yale University in the US used these measures to look at babies’ minds. Their results suggest that even the youngest humans have a sense of right and wrong, and, furthermore, an instinct to prefer good over evil.

How could the experiments tell this? Imagine you are a baby. Since you have a short attention span, the experiment will be shorter and loads more fun than most psychology experiments. It was basically a kind of puppet show; the stage was a scene featuring a bright green hill, and the puppets were cut-out shapes with stick-on wobbly eyes: a triangle, a square and a circle, each in their own bright colours. What happened next was a short play, as one of the shapes tried to climb the hill, struggling up and falling back down again. Next, the other two shapes got involved, with either one helping the climber up the hill, by pushing up from behind, or the other hindering the climber, by pushing back from above.

Already something amazing, psychologically, is going on here. All humans are able to interpret the events in the play in terms of the story I’ve described. The puppets are just shapes. They don’t make human sounds or display human emotions. They just move about, and yet everyone reads these movements as purposeful, and revealing of their characters. You can argue that this “mind reading”, even in infants, shows that it is part of our human nature to believe in other minds.

Great expectations

What happened next tells us even more about human nature. After the show, infants were given the choice of reaching for either the helping or the hindering shape, and it turned out they were much more likely to reach for the helper. This can be explained if they were reading the events of the show in terms of motivations – the shapes weren’t just moving at random: their movements showed the infant that the shape pushing uphill “wants” to help out (and so is nice) and the shape pushing downhill “wants” to cause problems (and so is nasty).

The researchers used an encore to confirm these results. Infants saw a second scene in which the climber shape made a choice to move towards either the helper shape or the hinderer shape. The time infants spent looking in each of the two cases revealed what they thought of the outcome. If the climber moved towards the hinderer the infants looked significantly longer than if the climber moved towards the helper. This makes sense if the infants were surprised when the climber approached the hinderer. Moving towards the helper shape would be the happy ending, and obviously it was what the infant expected. If the climber moved towards the hinderer it was a surprise, as much as you or I would be surprised if we saw someone give a hug to a man who had just knocked him over.

These results make sense if infants, with their pre-cultural brains, have expectations about how people should act. Not only do they interpret the movement of the shapes as resulting from motivations, but they prefer helping motivations over hindering ones.

This doesn’t settle the debate over human nature. A cynic would say that it just shows that infants are self-interested and expect others to be the same way. At a minimum though, it shows that tightly bound into the nature of our developing minds is the ability to make sense of the world in terms of motivations, and a basic instinct to prefer friendly intentions over malicious ones. It is on this foundation that adult morality is built.

BBC Future Column: Why is it so hard to give good directions?

My BBC Future column from last week. Original here.

Psychologically speaking, it is a tricky task, because our minds find it difficult to appreciate how the world looks to someone who doesn’t know it yet.

We’ve all been there – the directions sounded so clear when we were told them. Every step of the journey seemed obvious, we thought we had understood the directions perfectly. And yet here we are miles from anywhere, after dark, in a field arguing about whether we should have gone left or right at the last turn, whether we’re going to have to sleep here now, and exactly whose fault it is.

The truth is we shouldn’t be too hard on ourselves. Psychologically speaking, giving good directions is a particularly difficult task.

The reason we find it hard to give good directions is because of the “curse of knowledge”, a psychological quirk whereby, once we have learnt something, we find it hard to appreciate how the world looks to someone who doesn’t know it yet. We don’t just want people to walk a mile in our shoes, we assume they already know the route. Once we know the way to a place we don’t need directions, and descriptions like “it’s the left about halfway along” or “the one with the little red door” seem to make full and complete sense.

But if you’ve never been to a place before, you need more than a description of a place; you need an exact definition, or a precise formula for finding it. The curse of knowledge is the reason why, when I had to search for a friend’s tent in a field, their advice of “it’s the blue one” seemed perfectly sensible to them and was completely useless for me, as I stood there staring blankly at hundreds of blue tents.

This same quirk is why teaching is so difficult to do well. Once you are familiar with a topic it is very hard to understand what someone who isn’t familiar with it needs to know. The curse of knowledge isn’t a surprising flaw in our mental machinery – really it is just a side effect of our basic alienation from each other. We all have different thoughts and beliefs, and we have no special access to each other’s minds. A lot of the time we can fake understanding by mentally simulating what we’d want in someone else’s position. We have thoughts along the lines of “I’d like it if there was one bagel left in the morning” and therefore conclude “so I won’t eat all the bagels before my wife gets up in the morning”. This shortcut allows us to appear considerate, without doing any deep thought about what other people really know and want.

“OK, now what?”

This will only get you so far. Some occasions call for a proper understanding of other people’s feelings and beliefs. Giving directions is one, but so is understanding myriad aspects of everyday conversation which involve feelings, jokes or suggestions. For illustration, consider the joke that some research has suggested may be the world’s funniest (although what exactly that means is another story):

 

Two hunters are out in the woods when one of them collapses. He doesn’t seem to be breathing and his eyes are glazed. The other guy whips out his phone and calls the emergency services. He gasps, “My friend is dead! What can I do?” The operator says “Calm down. I can help. First, let’s make sure he’s dead.” There is a silence, then a shot is heard. Back on the phone, the guy says “OK, now what?”

 

The joke is funny because you can appreciate that the hunter had two possible interpretations of the operator’s instructions, and chose the wrong one. To appreciate the interpretations you need to have a feel for what the operator and the hunter know and desire (and to be surprised when the hunter’s desire to do anything to help isn’t over-ruled by a desire to keep his friend alive).

To do this mental simulation you recruit what psychologists call your “Theory of Mind”, the ability to think about others’ beliefs and desires. Our skill at Theory of Mind is one of the things that distinguishes humans from all other species – only chimpanzees seem to have anything approaching a true understanding that others might believe different things from themselves. We humans, on the other hand, seem primed from early infancy to practise thinking about how other humans view the world.

The fact that the curse of knowledge exists tells us how hard a problem it is to think about other people’s minds. Like many hard cognitive problems – such as seeing, for example – the human brain has evolved specialist mechanisms which are dedicated to solving it for us, so that we don’t normally have to expend conscious effort. Most of the time we get the joke, just as most of the time we simply open our eyes and see the world.

The good news is that your Theory of Mind isn’t completely automatic – you can use deliberate strategies to help you think about what other people know. A good one when writing is simply to force yourself to check every term to see if it is jargon – something you’ve learnt the meaning of but not all your readers will know. Another strategy is to tell people what they can ignore, as well as what they need to know. This works well with directions (and results in instructions like “keep going until you see the red door. There’s a pink door, but that’s not it”).

With a few tricks like this, and perhaps some general practice, we can turn the concept of reading other people’s minds – what some psychologists call “mind mindfulness” – into a habit, and so improve our Theory of Mind abilities. (Something that most of us remember struggling hard to do in adolescence.) Which is a good thing, since good theory of mind is what makes a considerate partner, friend or co-worker – and a good giver of directions.

A culture shock for universal emotion

The Boston Globe looks at the increasing evidence against the idea that there are some universally expressed facial emotions.

The idea that some basic emotions are expressed universally and have an evolutionary basis was suggested by Darwin in his book The Expression of the Emotions in Man and Animals.

The concept was further explored by psychologist Paul Ekman who conducted cross-cultural research and reported that the expression of anger, disgust, fear, happiness, sadness and surprise were universal human characteristics.

However, these ideas have recently been challenged, with a debate kicking off in an issue of Current Directions in Psychological Science, and the Globe article does a great job of covering the fight and its fallout.

…psychologists Azim Shariff and Jessica Tracy detail accumulated evidence that they argue makes the case for an evolutionary view of emotional expressions [pdf]. Some, they say, may have evolved for a physiological purpose — widening the eyes with fright, for instance, to expand our peripheral vision. Others may have evolved as social signals. Meanwhile, in a commentary, Barrett lays out a point-by-point counterargument [pdf]. While humans evolved to express and interpret emotions, she contends, specific facial expressions are culturally learned.

Barrett believes that the universality of recognizing facial expressions is “an effect that can be easily deconstructed,” if, for instance, subjects are asked to give their own label to faces instead of choosing from a set of words. In another recent paper [pdf] in the same journal, she argues that a growing body of research shows our perception of facial expressions is highly dependent on context: People interpret facial expressions differently depending on situation, body language, familiarity with a person, and surrounding visual cues. Barrett’s own research has shown that language and vocabulary influence people’s perception of emotions. Others have found cultural differences in how people interpret the facial expressions of others — a study found that Japanese people, for instance, rely more than North Americans on the expressions of surrounding people to interpret a person’s emotional state.

A fascinating discussion that tackles a taken-for-granted psychological assumption that is now being challenged.
 

Link to Globe piece on culture and facial expression.

The peak experiences of Abraham Maslow

The New Atlantis has an in-depth biographical article on psychologist Abraham Maslow – one of the founders of humanistic psychology and famous for his ‘hierarchy of needs’.

Maslow is stereotypically associated with a kind of fluffy ‘love yourself’ psychology although the man himself was quite a skeptic of the mumbo jumbo that got associated with his work.

The association is not so much because of Maslow’s focus on self-actualization, a goal where we use our psychological potential to its fullest, but because of his association with the ‘human potential movement’ and the Esalen Institute.

Esalen had some quite laudable goals but ended up being a hot tub of flaky hippy therapies. If you want an idea of what we’re talking about, you perhaps won’t be surprised to learn that the nude psychotherapy movement we covered previously on Mind Hacks originated from the same place.

Maslow quickly got pissed off with the half-baked people his work attracted, but sadly the stereotype stuck.

The man himself was far more complex, however, as was his remarkably profound work, and The New Atlantis article does a great job of bringing out the depth of his life and ideas. Recommended.
 

Link to article ‘Abraham Maslow and the All-American Self’.

Against the high cult of retreat

Depending on who you ask, Naomi Weisstein is a perceptual neuroscientist, a rock n roll musician, a social critic, a comedian, or a fuck-the-patriarchy radical feminist.

You stick Weisstein’s name into Google Scholar and her most cited paper is ‘Psychology Constructs the Female’ – a searing critique of how 60s psychology pictured the female psyche – while her second most cited is a study published in Science on visual detection of line segments.

Although the topics are different, the papers are more alike than you’d first imagine.

Her article ‘Psychology Constructs the Female’ was originally published in 1968 and became an instant classic.

She looked at the then-current theories of female psychology, and at the evidence that supported them, and showed that the theories were pitiful – largely based on personal opinion and idiosyncratic interpretations of weak or non-existent evidence.

Moreover, she showed that all the differences known at the time could be accounted for by social context and what was expected of the participants, rather than by their sex.

It’s a masterpiece of evidence-based scientific thinking, written at a time when feminist psychology was, and to a large extent still is, heavily influenced by postmodernism and poststructuralism – theories that suggest that there is no objective reality and that science is just another social narrative with female oppression built into its knowledge base.

Weisstein, who also had a huge impact on perceptual science, had little time for what she considered to be ‘fog’ and ‘paralysis’:

I’m still wearing my beanie hat, aren’t I? I don’t think I can take it off… Science (as opposed to the scientific establishment) will entertain hypotheses generated in any way: mystical, intuitive, experiential. It only asks us to make sure that our observations are replicable and our theories have some reasonable relation to other things we know to be true about the subject under study, that is, to objective reality…

Whether or not there is objective reality is a 4000-year-old philosophical stalemate. The last I heard was that, like God, you cannot prove there is one and you cannot prove there is not one. It comes down to a religious and / or political choice. I believe that the current feminist rejection of universal truth is a political choice. Radical and confrontational as the feminist challenge to science may appear, it is in fact, a deeply conservative retreat…

Poststructuralist feminism is a high cult of retreat. Sometimes I think that, when the fashion passes, we will find many bodies, drowned in their own wordy words, like the Druids in the bogs.

A recent academic article looked back at Weisstein’s legacy and noted that she has been a powerful force in a feminist movement that typically rejects science as a useful approach.

But she was also a pioneer in simply being a high-flying female scientist at a time when women were actively discouraged from getting involved.
 

Link to full text of ‘Psychology Constructs the Female’.