An instinct for fairness lurking within even the most competitive

It stings when life’s not fair – but what happens if it means we profit? As Tom Stafford writes, some people may perform unexpected self-sabotage.

Frans de Waal, a professor of primate behaviour at Emory University, is the unlikely star of a viral video. His academic’s physique, grey jumper and glasses aren’t the usual stuff of a YouTube sensation. But de Waal’s research with monkeys, and its implications for human nature, caught the imagination of millions of people.

It began with a TED talk in which de Waal showed the results of one experiment that involved paying two monkeys unequally (see video, below). Capuchin monkeys that lived together were taken to neighbouring cages and trained to hand over small stones in return for food rewards. The researchers found that a typical monkey would happily hand over stone after stone when it was rewarded for each exchange with a slice of cucumber.

But capuchin monkeys prefer grapes to cucumber slices. If the researchers paid one of the monkeys in grapes instead, the monkey in the neighbouring cage – previously happy to work for cucumber – became agitated and refused to accept payment in cucumber slices. What had once been acceptable soon became unacceptable when it was clear a neighbour was getting a better reward for the same effort.

The highlight of the video is when the poorly paid monkey throws the cucumber back at the lab assistant trying to offer it as a reward.

You don’t have to be a psychologist to know that humans can feel very much like the poorly paid monkey. Injustice stings. These results and others like them, argues de Waal, show that moral sentiments are part of our biological inheritance, a consequence of an ancestral life that was dominated by egalitarian group living – and the need for harmony between members of the group.

That’s a theory, and de Waal’s result definitely shows that our evolutionary cousins, the monkeys, are strongly influenced by social comparisons. But the experiment doesn’t really provide strong evidence that monkeys want justice. The underpaid monkey gets angry, but we’ve no evidence that the better-paid monkey is unhappy about the situation. In humans, by comparison, we can find stronger evidence that an instinct for fairness can lurk inside the psyche of even the most competitive of us.

The players in the National Basketball Association in the USA rank as some of the highest earning sportspeople in the world. In the 2007-08 season the best paid of them received salaries in excess of $20 million (£13.5 million), and more than 50 members of the league had salaries of $10 million (£6.7 million) or more.

The 2007-08 season is interesting because that is when psychologists Graeme Haynes and Thomas Gilovich reviewed recordings of more than 100 NBA games, looking for occasions when fouls were called by the referees even though it was clear to the players that no foul had actually been committed. Whenever a foul is called, the wronged player gets a number of free throws – chances to score points for their team. Haynes and Gilovich were interested in how these ultra-competitive, highly paid sportsmen reacted to being awarded free throws when they knew that they didn’t really deserve them.

Missed shot

These guys had every incentive to make the most of the free throws, however unfairly gained: after all, they make their living from winning, and the points gained from free throws could settle a game. Yet Haynes and Gilovich found that players’ accuracy from unfairly awarded free throws was unusually low. It was down compared to the league average for free throws, and down compared to each player’s own free-throw average. Accuracy on unfairly awarded free throws was lowest when the player’s team was ahead and didn’t need the points so much. But tellingly, it was also lower than average when the team was behind and in need of points – whether honestly or dishonestly gained.
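The comparison Haynes and Gilovich drew – observed free-throw accuracy against a baseline average – can be sketched as a simple one-proportion test. To be clear, the numbers below are hypothetical, not taken from the study, and the 75% baseline is only a rough ballpark for NBA free-throw shooting, not a quoted statistic.

```python
# A rough sketch of testing whether accuracy on a set of free throws
# falls below a known baseline rate. All numbers are hypothetical.
from math import sqrt, erf

def one_proportion_z(successes, attempts, baseline):
    """One-sided z-test: is the observed success rate below baseline?"""
    p_hat = successes / attempts
    se = sqrt(baseline * (1 - baseline) / attempts)
    z = (p_hat - baseline) / se
    # One-sided p-value via the normal CDF
    p_value = 0.5 * (1 + erf(z / sqrt(2)))
    return z, p_value

# Hypothetical: 130 made out of 200 'undeserved' free throws,
# against an assumed 75% baseline accuracy
z, p = one_proportion_z(130, 200, 0.75)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

A strongly negative z (and small p) would indicate accuracy reliably below the baseline, which is the shape of the pattern the study reports.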

If players in one of the most competitive and best-paid sports can apparently be put off by guilt, it suggests to me that an instinct for fairness can survive even the most ruthless environments.

At the end of the monkey clip, de Waal jokes that the behaviour parallels the way people have staged protests against Wall Street, and the greed they see there. And he’s right that our discomfort with unequal pay may be as deeply set as the monkey’s.

Yet perhaps these feelings run even deeper. The analysis of the basketball players suggests that when we stand to benefit from injustices – even if they can help justify multi-million dollar salaries – some part of us is uncomfortable with the situation, and may even work to undermine that advantage.

So don’t give up on the bankers and the multi-millionaire athletes just yet.

This is my latest column for BBC Future. The original is here.

Radical embodied cognition: an interview with Andrew Wilson

The computational approach is the orthodoxy in psychological science. We try to understand the mind using the metaphors of information processing and the storage and retrieval of representations. These ideas are so common that it is easy to forget that there is any alternative. Andrew Wilson is on a mission to remind us that there is an alternative – a radical, non-representational, non-information processing take on what cognition is.

I sent him a few questions by email. After he answered these, and some follow-up questions, we’ve both edited and agreed on the result, which you can read below.


Q1. Is it fair to say you are at odds with lots of psychology, theoretically? Can you outline why?

Psychology wants to understand what causes our behaviour. Cognitive psychology explanations are that behaviour is caused by internal states of the mind (or brain, if you like). These states are called mental representations, and they are models/simulations of the world that we use to figure out what to do and when to do it.

Cognitive psychology thinks we have representations because it assumes we have very poor sensory access to the world, e.g. vision supposedly begins with a patchy 2D image projected onto the back of the eye. We need these models to literally fill in the gaps by making an educated guess (‘inference’) about what caused those visual sensations.

My approach is called radical embodied cognitive psychology; ‘radical’ just means ‘no representations’. It is based on the work of James J Gibson. He was a perceptual psychologist who demonstrated that there is actually rich perceptual information about the world, and that we use this information. This is why perception and action are so amazingly successful most of the time, which is important because failures of perception have serious consequences for your health and wellbeing (e.g. falling on ice).

The most important consequence of this discovery is that when we have access to this information, we don’t need those internal models anymore. This then means that whatever the brain is doing, it’s not building models of the world in order to cause our behaviour. We are embedded in our environments and our behaviour is caused by the nature of that embedding (specifically, which information variables we are using for any given task).

So I ask very different questions than the typical psychologist: instead of ‘what mental model lets me solve this task?’ I ask ‘what information is there to support the observed behaviour and can I find evidence that we use it?’. When we get the right answer to the information question, we have great success in explaining and then predicting behaviour, which is actually the goal of psychology.


Q2. The idea that there are no mental representations is hard to get your head around. What about situations where behaviour seems to be based on things which aren’t there, like imagination, illusions or predictions?

First, saying that there are no mental representations is not saying that the brain is not up to something. This is a surprisingly common mistake, but I think it’s due to the fact that cognitive psychologists have come to equate ‘brain activity’ with ‘representing’, so that denying the latter seems to mean denying the former (see Is Embodied Cognition a No-Brainer?).

Illusions simply reveal how important it is to perception that we can move and explore. They are all based on a trick and they almost always require an Evil Psychologist™ lurking in the background. Specifically, illusions artificially restrict access to information so that the world looks like it’s doing one thing when it is really doing another. They only work if you don’t let people do anything to reveal the trick. Most visual illusions are revealed as such by exploring them, e.g. by looking at them from a different perspective (e.g. the Ames Room).

Imagination and prediction are harder to talk about in this framework, but only because no one’s really tried. For what it’s worth, people are terrible at actively predicting things, and whatever imagination is it will be a side-effect of our ability to engage with the real world, not part of how we engage with the real world.


Q3. Is this radical approach really denying the reality of cognitive representations, or just using a different descriptive language in which they don’t figure? In other words, can you and the cognitivists both be right?

If the radical hypothesis is right, then a lot of cognitive theories will be wrong. Those theories all assume that information comes into the brain, is processed by representations and then output as behaviour. If we successfully replace representations with information, all those theories will be telling the wrong story. ‘Interacting with information’ is a completely different job description for the brain than ‘building models of the world’. This is another reason why it’s ‘radical’.


Q4. Even if I concede that you can think of the mind like this, can you convince me that I should? Why is it useful? What does this approach do for cognitive science that the conventional approach doesn’t or can’t?

There are two reasons, I think. The first is empirical; this approach works very, very well. Whenever a researcher works through a problem using this approach, they find robust answers that stand up to extended scrutiny in the lab. These solutions then make novel predictions that also perform well – examples are topics like the outfielder problem and the A-not-B error [see below for references]. Cognitive psychology is filled with small, difficult-to-replicate effects; this is actually a hint that we aren’t asking the right questions. Radical embodied cognitive science tends to produce large, robust and interpretable effects, which I take as a hint that our questions are closer to the mark.

The second is theoretical. The major problem with representations is that it’s not clear where they get their content from. Representations supposedly encode knowledge about the world that we use to make inferences to support perception, etc. But if we have such poor perceptual contact with the world that we need representations, how did we ever get access to the knowledge we needed to encode? This grounding problem is a disaster. Radical embodiment solves it by never creating it in the first place – we are in excellent perceptual contact with our environments, so there are no gaps for representations to fill, therefore no representations that need content.


Q5. Who should we be reading to get an idea of this approach?

‘Beyond the Brain’ by Louise Barrett. It’s accessible and full of great stuff.

‘Radical Embodied Cognitive Science’ by Tony Chemero. It’s clear and well written but it’s pitched at trained scientists more than the generally interested lay person.

‘Embodied Cognition’ by Lawrence Shapiro clearly lays out all the various flavours of ‘embodied cognition’. My work is the ‘replacement’ hypothesis.

‘The Ecological Approach to Visual Perception’ by James J Gibson is an absolute masterpiece and the culmination of all his empirical and theoretical work.

I run a blog with Sabrina Golonka where we discuss all this a lot, and we tweet @PsychScientists. We’ve also published a few papers on this, the most relevant of which is ‘Embodied Cognition is Not What You Think It Is’.


Q6. And finally, can you point us to a few blog posts you’re proudest of which illustrate this way of looking at the world?

What Else Could It Be? (where Sabrina looks at the question, what if the brain is not a computer?)

Mirror neurons, or, What’s the matter with neuroscience? (how the traditional model can get you into trouble)

Prospective Control – The Outfielder problem (an example of the kind of research questions we ask)

The celebrity analysis that killed celebrity analysis

Most ‘psy’ professionals are banned by their codes of conduct from conducting ‘celebrity analysis’ and commenting on the mental state of specific individuals in the media. This is a sensible guideline but I didn’t realise it was triggered by a specific event.

Publicly commenting on a celebrity’s psychological state is bad form. If you’ve worked with them professionally, you’re likely bound by confidentiality; if you’ve not, you probably don’t know what you’re talking about, and doing so in the media is likely to do them harm.

Despite this, it happens surprisingly often, usually by ‘celebrity psychologists’ in gossip columns and third-rate TV. Sadly, I don’t know of a single case where a professional organisation has tried to discipline someone for doing so – although it must be said that mostly it’s done by self-appointed ‘experts’ rather than actual psychologists.

A new article in the Journal of the American Academy of Psychiatry and the Law traced the history of how this form of ‘celebrity analysis’ first came to be banned in the US under the ‘Goldwater Rule’.

The Goldwater Rule stemmed from a scandal surrounding a 1964 publication in Fact magazine that included anonymous psychiatric opinions commenting on Senator Barry Goldwater‘s psychological fitness to be President of the United States. Fact, a short-lived magazine published in the 1960s, carried opinionated articles that covered a broad range of controversial topics. In the 1964 September/October issue entitled, “The Unconscious of a Conservative: A Special Issue on the Mind of Barry Goldwater,” the opinions of over 1,800 psychiatrists commenting on Goldwater’s psychological fitness were published…

Of the 2,417 respondents, 571 deferred from providing comments, 657 responded that Goldwater was fit to be president, and 1,189 responded that he was not fit. None of the psychiatrists whose comments were published had examined Goldwater, however, and none had permission from him to issue their comments publicly. In the article, Goldwater was described with comments including “lack of maturity”, “impulsive”, “unstable”, “megalomaniac”, “very dangerous man”, “obsessive-compulsive neurosis”, and “suffering a chronic psychosis”… Much was made of two nervous breakdowns allegedly suffered by Goldwater, and there was commentary warning that he might launch a nuclear attack if placed under a critical amount of stress as president.

Goldwater responded by bringing libel action against Ralph Ginzburg, Warren Boroson, and Fact… The United States District Court for the Southern District of New York returned a verdict in favor of the senator… The AMA and APA immediately condemned the remarks made in the Fact article after its publication. Individual psychiatrists also spoke out against the ethics of the published comments.

Most people who are subject to ‘celebrity analysis’ don’t have the luxury of bringing libel suits to defend themselves, but it’s worth remembering that if someone appears to be giving a professional opinion on the psychological state of a person they’ve never met, they’re probably talking rubbish.

Link to article on ‘Psychiatrists Who Interact With the Media’

Evidence based debunking

Fed up with futile internet arguments, a bunch of psychologists investigated how best to correct false ideas. Tom Stafford discovers how to debunk properly.

We all resist changing our beliefs about the world, but what happens when some of those beliefs are based on misinformation? Is there a right way to correct someone when they believe something that’s wrong?

Stephan Lewandowsky and John Cook set out to review the science on this topic, and even carried out a few experiments of their own. This effort led to their “Debunking Handbook”, which gives practical, evidence-based techniques for correcting misinformation about, say, climate change or evolution. Yet the findings apply to any situation where you find the facts are falling on deaf ears.

The first thing their review turned up is the importance of “backfire effects” – when telling people that they are wrong only strengthens their belief. In one experiment, for example, researchers gave people newspaper corrections that contradicted their views and politics, on topics ranging from tax reform to the existence of weapons of mass destruction. The corrections were not only ignored – they entrenched people’s pre-existing positions.

Backfire effects pick up strength when you have no particular reason to trust the person you are talking to. This perhaps explains why climate sceptics with more scientific education tend to be the most sceptical that humans are causing global warming.

The irony is that understanding backfire effects requires that we debunk a false understanding of our own. Too often, argue Lewandowsky and Cook, communicators assume a ‘deficit model’ in their interactions with the misinformed. This is the idea that we have the right information, and all we need to do to make people believe is to somehow “fill in” the deficit in other people’s understanding. Just telling people the evidence for the truth will be enough to replace their false beliefs. Beliefs don’t work like that.

Psychological factors affect how we process information – such as what we already believe, who we trust and how we remember. Debunkers need to work with this, rather than against it, if they want the best chance of being believed.

The most important thing is to provide an alternative explanation. An experiment by Hollyn Johnson and Colleen Seifert shows how to persuade people better. These two psychologists recruited participants to listen to news reports about a fictional warehouse fire, and then answer some comprehension questions.

Some of the participants were told that the fire was started by a short circuit in a closet near some cylinders containing potentially explosive gas. Yet when this information was corrected – by saying the closet was empty – they still clung to the belief.

A follow-up experiment showed the best way to effectively correct such misinformation. The follow-up was similar to the first experiment, except that it involved participants who were given a plausible alternative explanation: that evidence was found that arson caused the fire. It was only those who were given a plausible alternative that were able to let go of the misinformation about the gas cylinders.

Lewandowsky and Cook argue that experiments like these show the dangers of arguing against a misinformed position. If you try to debunk a myth, you may end up reinforcing that belief, strengthening the misinformation in people’s minds without making the correct information take hold.

What you must do, they argue, is to start with the plausible alternative (that obviously you believe is correct). If you must mention a myth, you should mention this second, and only after clearly warning people that you’re about to discuss something that isn’t true.

This debunking advice is also worth bearing in mind if you find yourself clinging to your own beliefs in the face of contradictory facts. You can’t be right all of the time, after all.

Read more about the best way to win an argument.

If you have an everyday psychological phenomenon you’d like to see written about in these columns, please get in touch @tomstafford. Thanks to Ullrich Ecker for advice on this topic.

This is my BBC Future column from last week, original here

Implicit racism in academia

Subtle racism is prevalent in US and UK universities, according to a new paper commissioned by the Leadership Foundation for Higher Education and released last week, reports The Times Higher Education.

Black professors surveyed for the paper said they were treated differently than white colleagues, in the form of receiving less eye contact or fewer requests for their opinion; they felt excluded in meetings and experienced undermining of their work. “I have to downplay my achievements sometimes to be accepted,” said one academic, explaining that colleagues didn’t expect a black woman to be clever and articulate. Senior managers often dismiss racist incidents as conflicts of personalities or believe them to be exaggerated, the paper found.

And all this in institutions where almost all staff would say they are not just “not racist” but where many would say they were actively committed to fighting prejudice.

This seems like a clear case of the operation of implicit biases – where there is a contradiction between people’s egalitarian beliefs and their racist actions. Implicit biases are an industry in psychology, where tools such as the implicit association test (IAT) are used to measure them. The IAT is a fairly typical cognitive psychology-type study: individuals sit in front of a computer and the speed of their reactions to stimuli is measured (the stimuli are things like faces of people with different ethnicities, which is how a measure of implicit prejudice is derived).
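As a rough illustration of how reaction times get turned into a score, here is a minimal sketch of a simplified IAT-style D-score: the difference in mean latency between the two pairing conditions, divided by the pooled standard deviation. The function name and the data are hypothetical, and the real scoring algorithm involves further steps (error penalties, trimming of extreme latencies) that are omitted here.

```python
# Simplified sketch of an IAT-style D-score from raw latencies.
# Hypothetical data; not the full Greenwald-style scoring algorithm.
from statistics import mean, stdev

def iat_d_score(compatible_rts, incompatible_rts):
    """Larger positive values = slower responses when the pairing
    runs against the measured association."""
    pooled = stdev(compatible_rts + incompatible_rts)
    return (mean(incompatible_rts) - mean(compatible_rts)) / pooled

# Hypothetical latencies (milliseconds) for one participant
compatible = [620, 650, 590, 700, 640]
incompatible = [780, 810, 760, 900, 820]
print(round(iat_d_score(compatible, incompatible), 2))
```

The point of dividing by the pooled variability is that a raw latency difference of, say, 170 ms means something different for a fast, consistent responder than for a slow, noisy one; the D-score puts participants on a common scale.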

The LFHE paper is a nice opportunity to connect this lab measure with the reality of implicit bias ‘in the wild’. In particular, along with some colleagues, I have been interested in exactly what an implicit bias is, psychologically.

Commonly, implicit biases are described as if they are unconscious or somehow outside of the awareness of those holding them. Unfortunately, this hasn’t been shown to be the case (in fact the opposite may be true – there’s some evidence that people can predict their IAT scores fairly accurately). Worse, the very idea of being unaware of a bias is badly specified. Does ‘unaware’ mean you aren’t aware of your racist feelings? Of your racist behaviour? That the feelings, in this case, have produced the behaviour?

The racist behaviours reported in the paper – avoiding eye-contact, assuming that discrimination is due to personalities and not race, etc – could all work at any or all of these levels of awareness. Although the behaviours are subtle, and contradict people’s expressed, anti-racist, opinions, the white academics could still be completely aware. They could know that black academics make them feel awkward or argumentative, and know that this is due to their race. Or they could be completely unaware. They could know that they don’t trust the opinions of certain academics, for example, but not realise that race is a factor in why they feel this way.

Just because the behaviour is subtle, or the psychological phenomenon is called ‘implicit’, doesn’t mean we can be certain about what people really know about it. The real value in the notion of implicit bias is that it reminds us that prejudice can exist in how we behave, not just in what we say and believe.

Full disclosure: I am funded by the Leverhulme Trust to work on a project looking at the philosophy and psychology of implicit bias. This post is cross-posted on the project blog. Run your own IAT with our open-source code: Open-IAT!

A thought lab in the sun

Neuroscientist Karl Friston, being an absolute champ, in an interview in The Lancet Psychiatry

“I get up very late, I go and smoke my pipe in the conservatory, hopefully in the sunshine with a nice cup of coffee, and have thoughts until I can raise the energy to have a bath. I don’t normally get to work until mid day.”

I have to say, I have a very similar approach which is getting up very early, drinking Red Bull, not having any thoughts, and raising the energy to catch a bus to an inpatient ward.

The man clearly doesn’t know the good life when he sees it.

The Lancet Psychiatry is one of the new speciality journals from the big names in medical publishing.

It seems to be publishing material from the correspondence and ‘insight’ sections (essays and the like) without a paywall, so there’s often plenty for the general reader to catch up on. It also has a podcast which is aimed at mental health professionals.

Link to interview with Karl Friston.

The best way to win an argument

How do you change someone’s mind if you think you are right and they are wrong? Psychology reveals that the last thing to do is the tactic we usually resort to.

You are, I’m afraid to say, mistaken. The position you are taking makes no logical sense. Just listen up and I’ll be more than happy to elaborate on the many, many reasons why I’m right and you are wrong. Are you feeling ready to be convinced?

Whether the subject is climate change, the Middle East or forthcoming holiday plans, this is the approach many of us adopt when we try to convince others to change their minds. It’s also an approach that, more often than not, leads to the person on the receiving end hardening their existing position. Fortunately research suggests there is a better way – one that involves more listening, and less trying to bludgeon your opponent into submission.

A little over a decade ago Leonid Rozenblit and Frank Keil from Yale University suggested that in many instances people believe they understand how something works when in fact their understanding is superficial at best. They called this phenomenon “the illusion of explanatory depth”. They began by asking their study participants to rate how well they understood how things like flushing toilets, car speedometers and sewing machines worked, before asking them to explain what they understood and then answer questions on it. The effect they revealed was that, on average, people in the experiment rated their understanding as much worse after it had been put to the test.

What happens, argued the researchers, is that we mistake our familiarity with these things for the belief that we have a detailed understanding of how they work. Usually, nobody tests us and if we have any questions about them we can just take a look. Psychologists call this idea that humans have a tendency to take mental short cuts when making decisions or assessments the “cognitive miser” theory.

Why would we bother expending the effort to really understand things when we can get by without doing so? The interesting thing is that we manage to hide from ourselves exactly how shallow our understanding is.

It’s a phenomenon that will be familiar to anyone who has ever had to teach something. Usually, it only takes the first moments when you start to rehearse what you’ll say to explain a topic, or worse, the first student question, for you to realise that you don’t truly understand it. All over the world, teachers say to each other “I didn’t really understand this until I had to teach it”. Or as researcher and inventor Mark Changizi quipped: “I find that no matter how badly I teach I still learn something”.

Explain yourself

Research published last year on this illusion of understanding shows how the effect might be used to convince others they are wrong. The research team, led by Philip Fernbach, of the University of Colorado, reasoned that the phenomenon might hold as much for political understanding as for things like how toilets work. Perhaps, they figured, people who have strong political opinions would be more open to other viewpoints, if asked to explain exactly how they thought the policy they were advocating would bring about the effects they claimed it would.

Recruiting a sample of Americans via the internet, they polled participants on a set of contentious US policy issues, such as imposing sanctions on Iran, healthcare and approaches to carbon emissions. One group was asked to give their opinion and then provide reasons for why they held that view. This group got the opportunity to put their side of the issue, in the same way anyone in an argument or debate has a chance to argue their case.

Those in the second group did something subtly different. Rather than provide reasons, they were asked to explain how the policy they were advocating would work. They were asked to trace, step by step, from start to finish, the causal path from the policy to the effects it was supposed to have.

The results were clear. People who provided reasons remained as convinced of their positions as they had been before the experiment. Those who were asked to provide explanations softened their views, and reported a correspondingly larger drop in how they rated their understanding of the issues. People who had previously been strongly for or against carbon emissions trading, for example, tended to become more moderate – ranking themselves as less certain in their support or opposition to the policy.

So this is something worth bearing in mind next time you’re trying to convince a friend that we should build more nuclear power stations, that the collapse of capitalism is inevitable, or that dinosaurs co-existed with humans 10,000 years ago. Just remember, however, there’s a chance you might need to be able to explain precisely why you think you are correct. Otherwise you might end up being the one who changes their mind.

This is my BBC Future column from last week. The original is here.