What does it take to spark prejudice?

Short answer: surprisingly little. Continuing the theme of revisiting classic experiments in psychology, last week’s BBC Future column was on Tajfel’s Minimal Group Paradigm. The original is here. Next week we’re going to take this foundation and look at some evolutionary psychology of racism (hint: it won’t be what you’d expect).

How easy is it for the average fair-minded person to form biased, preconceived views about other groups? Surprisingly easy, according to psychology studies.

One of the least charming but most persistent aspects of human nature is our capacity to hate people who are different. Racism, sexism, ageism – it seems like all the major social categories come with their own “-ism”, each fuelled by regrettable prejudice and bigotry.

Our tendency for groupness appears to be so strong there seems little more for psychology to teach us. It’s not as if we need it proven that favouring our group over others is a common part of how people think – history provides all the examples we need. But one psychologist, Henri Tajfel, taught us something important. He showed exactly how little encouragement we need to treat people in a biased way because of the group they are in.

Any phenomenon like this in the real world comes entangled with a bunch of other, complicating phenomena. When we see prejudice in the everyday world it is hard to separate out psychological biases from the effects of history, culture and even pragmatism (sometimes people from other groups really are out to get you).

As a social psychologist, Tajfel was interested in the essential conditions of group prejudice. He wanted to know what it took to turn the average fair-minded human into their prejudiced cousin.

He wanted to create a microscope for looking at how we think when we’re part of a group, even when that group has none of the history, culture or practical importance that groups normally do. To look at this, he devised what has become known as the “minimal group paradigm”.

The minimal group paradigm works like this: participants in the experiment are divided into groups on some arbitrary basis. Maybe eye-colour, maybe what kind of paintings they like, or even by tossing a coin. It doesn’t matter what the basis for group membership is, as long as everyone gets a group and knows what it is. After being told they are in a group, participants are divided up so that they are alone when they make a series of choices about how rewards will be shared among other people in the groups. From this point on, group membership is entirely abstract. Nobody else can be seen, and other group members are referred to by an anonymous number. Participants make choices such as “Member Number 74 (group A) to get 10 points and Member 44 (group B) to get 8 points”, versus “Member Number 74 (group A) to get 2 points and Member 44 (group B) to get 6 points”, where the numbers are points which translate into real money.

You won’t be surprised to learn that participants show favouritism towards their own group when dividing the money. People in group A were more likely to choose the first option I gave above, rather than the second. What is more surprising is that people show some of this group favouritism even when it ends up costing them points – so people in group B sometimes choose the second option, or options like it, even though it provides fewer points than the first option. People tend to opt for the maximum total reward (as you’d expect from the fair-minded citizen), but they also show a tendency to maximise the difference between the groups (what you’d expect from the prejudiced cousin).
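The tug-of-war between those two strategies is just arithmetic, and can be sketched in a few lines. This is a toy illustration only: the option pairs come from the example above, but the strategy names and the framing from group B’s viewpoint are mine, and Tajfel’s real matrices offered many more graded options.

```python
# Each option awards (points for group A, points for group B).
# These are the two options from the example above, judged by a
# member of group B.
options = [(10, 8), (2, 6)]

def joint_profit(a, b):
    """The fair-minded strategy: maximise the total reward."""
    return a + b

def difference_for_b(a, b):
    """The prejudiced strategy: maximise B's lead over A."""
    return b - a

fair_choice = max(options, key=lambda o: joint_profit(*o))
biased_choice = max(options, key=lambda o: difference_for_b(*o))
# The biased choice gives group B only 6 points instead of 8 -
# favouritism at a real cost to your own group's absolute reward.
```

Note that the “prejudiced” strategy picks (2, 6): group B ends up with fewer points in absolute terms, which is exactly the puzzle the experiments revealed.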

The effect may be small, but this is a situation where the groups have been plucked out of the air by the experimenters. Every participant knows which group he or she is in, but they also know that they weren’t in this group before they started the experiment, that their assignment was arbitrary or completely random, and that the groups aren’t going to exist in any meaningful way after the experiment. They also know that their choices won’t directly affect them (they are explicitly told that they won’t be given any choices to make about themselves). Even so, this situation is enough to evoke favouritism.

So, it seems we’ll take the most minimal of signs as a cue to treat people differently according to which group they are in. Tajfel’s work suggests that in-group bias is as fundamental to thinking as the act of categorisation itself. If we want to contribute to a fairer world we need to be perpetually on guard to avoid letting this instinct run away with itself.

Why money won’t buy you happiness

Here’s my column for BBC Future from last week. It was originally titled “Why money can’t buy you happiness”, but I’ve just realised that it would be more appropriately titled if I used a “won’t” rather than a “can’t”. There’s a saying that people who think money can’t buy happiness don’t know where to shop. This column says, more or less, that knowing where to shop isn’t the problem, it’s shopping itself.

Hope a lottery win will make you happy forever? Think again – evidence suggests a big payout won’t make that much of a difference. Tom Stafford explains why.

 

Think a lottery win would make you happy forever? Many of us do, including a US shopkeeper who just scooped $338 million in the Powerball lottery – the fourth largest prize in the game’s history. Before the last Powerball jackpot in the United States, tickets were being snapped up at a rate of around 130,000 a minute. But before you place all your hopes and dreams on another ticket, here’s something you should know. All the evidence suggests a big payout won’t make that much of a difference in the end.

Winning the lottery isn’t a ticket to true happiness, however enticing it might be to imagine never working again and being able to afford anything you want. One study famously found that people who had big wins on the lottery ended up no happier than those who had bought tickets but didn’t win. It seems that as long as you can afford to avoid the basic miseries of life, having loads of spare cash doesn’t make you very much happier than having very little.

One way of accounting for this is to assume that lottery winners get used to their new level of wealth, and simply adjust back to a baseline level of happiness – something called the “hedonic treadmill”. Another explanation is that our happiness depends on how we feel relative to our peers. If you win the lottery you may feel richer than your neighbours, and think that moving to a mansion in a new neighbourhood would make you happy, but then you look out of the window and realise that all your new friends live in bigger mansions.

Both of these phenomena undoubtedly play a role, but the deeper mystery is why we’re so bad at knowing what will give us true satisfaction in the first place. You might think we should be able to predict this, even if it isn’t straightforward. Lottery winners could take account of the hedonic treadmill and social comparison effects when they spend their money. So why, in short, don’t they spend their winnings in ways that buy happiness?

Picking up points

Part of the problem is that happiness isn’t a quality like height, weight or income that can be easily measured and given a number (whatever psychologists try and pretend). Happiness is a complex, nebulous state that is fed by transient simple pleasures, as well as the more sustained rewards of activities that only make sense from a perspective of years or decades. So, perhaps it isn’t surprising that we sometimes have trouble acting in a way that will bring us the most happiness. Imperfect memories and imaginations mean that our moment-to-moment choices don’t always reflect our long-term interests.

It even seems like the very act of trying to measure it can distract us from what might make us most happy. An important study by Christopher Hsee of the University of Chicago’s business school and colleagues showed how this could happen.

Hsee’s study was based around a simple choice: participants were offered the option of working at a 6-minute task for a gallon of vanilla ice cream as a reward, or a 7-minute task for a gallon of pistachio ice cream. Under normal conditions, fewer than 30% of people chose the 7-minute task, mainly because they liked pistachio ice cream more than vanilla. For happiness scholars, this isn’t hard to interpret – those who preferred pistachio ice cream had enough motivation to choose the longer task. But the experiment had a vital extra comparison. Another group of participants were offered the same choice, but with an intervening points system: the choice was between working for 6 minutes to earn 60 points, or 7 minutes to earn 100 points. With 50–99 points, participants were told they could receive a gallon of vanilla ice cream. For 100 points they could receive a gallon of pistachio ice cream. Although the actions and the effects are the same, introducing the points system dramatically affected the choices people made. Now, the majority chose the longer task and earned the 100 points, which they could spend on the pistachio reward – even though the same proportion (about 70%) still said they preferred vanilla.

Based on this, and other experiments, Hsee concluded that participants are maximising their points at the expense of maximising their happiness. The points are just a medium – something that allows us to get the thing that will create enjoyment. But because the points are so easy to measure and compare – 100 is obviously much more than 60 – this overshadows our knowledge of what kind of ice cream we enjoy most.
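Medium maximisation can be sketched as a toy model. The enjoyment scores and the simple “enjoyment minus effort” rule below are my invented assumptions for illustration, not values from Hsee’s study; only the minutes and points come from the experiment as described above.

```python
# A participant who, like most in the study, prefers vanilla.
enjoyment = {"vanilla": 10, "pistachio": 8}   # hypothetical utilities
minutes = {"vanilla": 6, "pistachio": 7}      # task length for each reward
points = {"vanilla": 60, "pistachio": 100}    # the intervening medium

def direct_choice():
    """Choose by predicted enjoyment minus effort."""
    return max(minutes, key=lambda flavour: enjoyment[flavour] - minutes[flavour])

def points_choice():
    """Medium maximisation: chase the bigger number, ignoring enjoyment."""
    return max(points, key=lambda flavour: points[flavour])
```

Both functions face identical tasks and identical rewards; only the salience of the points differs, and that alone is enough to flip the choice from vanilla to pistachio.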

So next time you are buying a lottery ticket because of the amount it is paying out, or choosing wine by looking at the price, or comparing jobs by looking at the salaries, you might do well to remember to think hard about how much the bet, wine, or job will really promote your happiness, rather than simply relying on the numbers to do the comparison. Money doesn’t buy you happiness, and part of the reason for that might be that money itself distracts us from what we really enjoy.

 

When your actions contradict your beliefs

Last week’s BBC Future column. The original is here. Classic research, digested!

If at first you don’t succeed, lower your standards. And if you find yourself acting out of line with your beliefs, change them. This sounds like motivational advice from one of the more cynical self-help books, or perhaps a Groucho Marx line (“Those are my principles, and if you don’t like them… well, I have others…”), but in fact it is a caricature of one of the most famous theories in social psychology.

Leon Festinger’s Dissonance Theory is an account of how our beliefs rub up against each other, an attempt at a sort of ecology of mind. Dissonance Theory offers an explanation of topics as diverse as why oil company executives might not believe in climate change, why army units have brutal initiation ceremonies, and why famous books might actually be boring.

The classic study on dissonance theory was published by Festinger and James Carlsmith in 1959. You can find a copy thanks to the Classics in the History of Psychology archive. I really recommend reading the full thing. Not only is it short, but it is full of enjoyable asides. Back in the day psychology research was a lot more fun to write up.

Festinger and Carlsmith were interested in testing what happened when people acted out of line with their beliefs. To do this, they made their participants spend an hour doing two excruciatingly boring tasks. The first task was filling a tray with spools, emptying it, then filling it again (and so on). The second was turning 48 small pegs a quarter-turn clockwise and then, once that was finished, going back to the beginning and doing another quarter-turn for each peg (and so on). Only after this tedium, and at the point at which the participants believed the experiment was over, did the real study get going. The experimenter said that they needed someone to fill in at the last minute and explain the tasks to the next subject. Would they mind? And also, could they make the points that “It was very enjoyable”, “I had a lot of fun”, “I enjoyed myself”, “It was very interesting”, “It was intriguing”, and “It was exciting”?

Of course the “experiment” was none of these things. But, being good people, with some pleading if necessary, they all agreed to explain the experiment to the next participant and make these points. The next participant was, of course, a confederate of the experimenter. We’re not told much about her, except that she was an undergraduate specifically hired for the role. The fact that all 71 participants in the experiment were male, and, that one of the 71 had to be excluded from the final analysis because he demanded her phone number so he could explain things further, suggests that Festinger and Carlsmith weren’t above ensuring that there were some extra motivational factors in the mix.

Money talks

For their trouble, the participants were paid $1, $20, or nothing. After explaining the task the original participants answered some questions about how they really felt about the experiment. At the time, many psychologists would have predicted that the group paid the most would be affected the most – if our feelings are shaped by rewards, the people paid $20 should be the ones who said they enjoyed it the most.

In fact, people paid $20 tended to feel the same about the experiment as the people paid nothing. But something strange happened with the people paid $1. These participants were more likely to say they really did find the experiment enjoyable. They judged the experiment as more important scientifically, and had the highest desire to participate in future similar experiments. Which is weird, since nobody should really want to spend another hour doing mundane, repetitive tasks.

Festinger’s Dissonance theory explains the result. The “Dissonance” is between the actions of the participants and their beliefs about themselves. Here they are, nice guys, lying to an innocent woman. Admittedly there are lots of other social forces at work – obligation, authority, even attraction. Festinger’s interpretation is that these things may play a role in how the participants act, but they can’t be explicitly relied upon as reasons for acting. So there is a tension between their belief that they are a nice person and the knowledge of how they acted. This is where the cash payment comes in. People paid $20 have an easy rationalisation to hand. “Sure, I lied”, they can say to themselves, “but I did it for $20”. The men who got paid the smaller amount, $1, can’t do this. Giving the money as a reason would make them look cheap, as well as mean. Instead, the story goes, they adjust their beliefs to be in line with how they acted. “Sure, the experiment was kind of interesting, just like I told that girl”, “It was fun, I wouldn’t mind being in her position” and so on.

So this is cognitive dissonance at work. Normally it should be a totally healthy process – after all, who could object to people being motivated to reduce contradictions in their beliefs? (Philosophers even make a profession out of this.) But in circumstances where some of our actions or our beliefs exist for reasons which are too complex, too shameful, or too nebulous to articulate, it can lead to us changing perfectly valid beliefs, such as how boring and pointless a task was.

Fans of cognitive dissonance will tell you that this is why people forced to defend a particular position – say because it is their job – are likely to end up believing it. It can also suggest a reason for why military services, high school sports teams and college societies have bizarre and punishing initiation rituals. If you’ve been through the ritual, dissonance theory predicts, you’re much more likely to believe the group is a valuable one to be a part of (the initiation hurt, and you’re not a fool, so it must have been worth it right?).

For me, I think dissonance theory explains why some really long books have such good reputations, despite the fact that they may be as repetitive and pointless as Festinger’s peg task. Get to the end of a three-volume, several thousand page, conceptual novel and you’re faced with a choice: either you wasted your time and money, and you feel a bit of a fool; or the novel is brilliant and you are an insightful consumer of literature. Dissonance theory pushes you towards the latter interpretation, and so swells the crowd of people praising a novel that would be panned if it was 150 pages long.

Changing your beliefs to be in line with how you acted may not be the most principled approach. But it is certainly easier than changing how you acted.

The essence of intelligence is feedback

Here’s last week’s BBC Future column. The original is here, where it was called “Why our brains love feedback”. I was inspired to write it by a meeting with artist Tim Lewis, which happened as part of a project I’m involved with: Furnace Park, which is seeing a piece of reclaimed land in an old industrial area of Sheffield transformed into a public space by the University.

A meeting with an artist gets Tom Stafford thinking about the essence of intelligence. Our ability to grasp, process and respond to information about the world allows us to follow a purpose. In some ways, it’s what makes us, us.

In Tim Lewis’s world, bizarre kinetic sculptures move, flap wings, draw and even walk around. The British artist creates mechanical animals and animal machines – like Pony, a robotic ostrich with an arm for a neck and a poised hand for a head – that creak into life in a way that can seem unsettling, as if they have a strange, if awkward, life of their own. His latest creations are able to respond to the environment, and it makes me ponder the essence of intelligence – in some ways revealing what makes us, us.
I met Tim on a cold Friday afternoon to talk about his work, and while talking about the cogs and gears he uses to make his artwork move, he made a remark that made me stop in my tracks. The funny thing is, he said, all of the technology existed to make machines like this in the sixteenth century – the thing that stopped them wasn’t the technical know-how, it was that they lacked the right model of the mind.

Jetsam 2012, by Tim Lewis (Courtesy: Tim Lewis)

What model of the mind do you need to create a device like Tim’s Jetsam, a large wire mesh kiwi-like creature that forages around its cage for pieces of a nest to build? The intelligence in this creation isn’t in the precision of the craftwork (although it is precise), or in the faithfulness to the kind of movements seen in nature (although it is faithful). The intelligence is in how it responds to the placing of the sticks. It isn’t programmed in advance; it identifies where each piece is and where it needs to go.

This gives Jetsam the hallmark of intelligence – flexibility. If the environment changes, say when the sticks are re-scattered at random, it can still adapt and find the materials to build its nest. Rather than a brain giving instructions such as “Do this”, feedback allows instructions such as “If this, do that; if that, do the other”. Crucially, feedback allows a machine to follow a purpose – if the goal changes, the machine can adapt.
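That “if this, do that” loop is simple enough to write down. Here is a minimal sketch of the principle (my own toy example, not anything from Lewis’s machines): a controller that repeatedly measures the gap between where it is and where its goal is, and acts to shrink that gap.

```python
def seek(position, target, gain=0.5, steps=20):
    """Repeatedly measure the error and act to reduce it."""
    for _ in range(steps):
        error = target - position   # feedback: measure the outcome
        position += gain * error    # act on the error, not on a fixed script
    return position

position = seek(0.0, 10.0)       # homes in on 10
position = seek(position, -5.0)  # the goal changes; the same rule adapts
```

A fixed “clockwork” program would need rewriting every time the target moved; the feedback rule does not, which is exactly the flexibility described above.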

It’s this quality that the sixteenth century clockwork models lacked, and one that we as humans almost take for granted. We grasp and process information about the world in many forms, including sights, smells or sounds. We may give these information sources different names, but in some sense they are all the same stuff: information.

Information control

Cybernetics is the name given to the study of feedback, and systems that use feedback, in all their forms. The term comes from the Greek word for “to steer”, and inspiration for some of the early work on cybernetics sprang from automatic guiding systems developed during World War II for guns or radar antennae. Around the middle of the twentieth century cybernetics became an intellectual movement across many different disciplines. It created a common language that allowed engineers to talk with psychologists, or ecologists to talk to mathematicians, about living organisms from the viewpoint of information control systems.

A key message of cybernetics is that you can’t control something unless you have feedback – and that means measurement of the outcomes. You can’t hit a moving target unless you get feedback on changes to its movement, just as you can’t tell if a drug is a cure unless you get feedback on how many more people recover when they are given it. The flip side of this dictum is the promise that with feedback, you can control anything. The human brain seems to be the arch embodiment of this cybernetic principle. With the right feedback, individuals have been known to control things as unlikely as their own heart rate, or learn to shrink and expand their pupils at will. It even seems possible to control the firing of individual brain cells.

But enhanced feedback methods can accelerate learning about more mundane behaviours. For example, if you are learning to take basketball shots, augmented feedback in the form of “You were 3 inches off to the left” can help you learn faster and reach a higher skill level. Perhaps the most powerful example of an augmented feedback loop is the development of writing, which allowed us to take language and experiences and make them permanent, solidifying them against the ravages of time, space and memory.

Thanks to feedback we can become more than simple programs with simple reflexes, and develop more complex responses to the environment. Feedback allows animals like us to follow a purpose. Tim Lewis’s mechanical bird might seem simple, but in terms of intelligence it has more in common with us than with nearly all other machines that humans have built. Engines or clocks might be incredibly sophisticated, but until they are able to gather their own data about the environment they remain trapped in fixed patterns.

Feedback loops, on the other hand, beginning with the senses but extending out across time and many individuals, allow us to self-construct, letting us travel to places we don’t have the instructions for beforehand, and letting us build on the history of our actions. In this way humanity pulls itself up by its own bootstraps.

The Master and His Emissary

I’ve been struggling to understand Iain McGilchrist’s argument about the two hemispheres of the brain, as presented in his book “The Master and His Emissary” [1]. It’s an argument that takes you from neuroanatomy, through behavioural science to cultural studies [2]. The book is crammed with fascinating evidential trees, but I left it without a clear understanding of the overall wood. Watching this RSA Animate helped.

Basically, I think McGilchrist is attempting a neuroscientific rehabilitation of an essentially mystical idea: that the map is not the territory, and of the importance of ends rather than just means [3]. Here’s a tabulation of functions and areas of focus that McGilchrist claims for the two hemispheres:

| Left | Right |
| --- | --- |
| Representation | Perception |
| The Abstract | The Concrete |
| Narrow focus | Broad focus |
| Language | Embodiment |
| Manipulation | Experience (?) |
| Parts | Wholes |
| Machines | Life |
| The Static | The Changing |
| Focus on the known | Alertness for the novel |
| Consistency, familiarity, prediction | Contradiction, novelty, surprise |
| A closed knowledge system | An open knowledge system |
| (Urge after) Consistency | (Urge after) Completeness |
| The Known | The Unknown, The ineffable |
| The explicit | The implicit |
| Generalisation | Individuality/uniqueness |
| Particulars | Context |

A key idea – which is in the RSA Animate – is the idea of a ‘necessary distance’ from the world. By experiencing yourself as separate (but not totally detached) you are able to empathise with people, manipulate tools, reason on symbols etc. But, of course, there’s always the risk that you end up valuing the tools for their own sake, or believing in the symbol system you have created to understand the world.

From a cognitive neuroscience point of view, this is fair enough, by which I mean that if you are going to look into the (vast) literature on hemispheric specialisation and make some summary claims, as McGilchrist does, then these sort of claims are reasonable. You can enjoy one of the grand-daddies of split brain studies, Michael Gazzaniga, summarise his perspective, which isn’t that discordant, here [4].

From this foundation, McGilchrist goes on to diagnose a historical movement in our culture away from a balanced way of thinking and towards a ‘left brain’ dominated way of thinking. This, to me, also seems fair enough. Modernity does seem characterised by the ascendance of both instrumentalism and bureaucracy, both ‘leftish’ values in the McGilchristian framework.

It is worth noting that dual-systems theories, of which this is one, are perennially popular. McGilchrist is careful and explicit in rejecting the popular Reason vs Emotion distinction that has come to be associated with the two hemispheres. In the RSA report Divided Brain, Divided World, he briefly discusses how his theory relates to the automatic-deliberative distinction, as (for example) set out by Daniel Kahneman in his Thinking, Fast and Slow. He says, briefly, that that distinction is orthogonal to the one he’s making; i.e. both hemispheres do automatic and controlled processing.

I was turned on to the book by Helen Mort, who writes a great blog about neuroscience and poetry which you can check out here: poetryonthebrain.blogspot.ca/. If you’re interested in reading more about psychology, divided selves and cultural shifts I recommend Timothy Wilson’s “Strangers to Ourselves” and Walter Ong’s “Orality and Literacy”.

Footnotes

[1] If you buy the paperback they’ve slimmed it down, at least in some editions, by leaving out the reference list at the end. Very frustrating.

[2] Fans of grand theories of hemispheric functioning and the relation to cultural evolution, make sure you check out Julian Jaynes’ The Origin of Consciousness in the Breakdown of the Bicameral Mind. Weirdly, McGilchrist hardly references this book (noting merely that he is saying something completely different).

[3] And when I use the term ‘mystical’, that is a good thing, not a denigration.

[4] Gazzaniga, M. (2002). The split brain revisited. Scientific American, Special Editions: The Hidden Mind.

BBC Column: Why cyclists enrage car drivers

Here is my latest BBC Future column. The original is here. This one proved to be more than usually controversial, not least because of some poorly chosen phrasing from yours truly. This is an updated version which makes what I’m trying to say clearer. If you think that I hate cyclists, or my argument relies on the facts of actual law breaking (by cyclists or drivers), or that I am making a claim about the way the world ought to be (rather than how people see it), then please check out this clarification I published on my personal blog after a few days of feedback from the column. One thing the experience has convinced me of is that cycling is a very emotional issue, and one people often interpret in very moral terms.

It’s not simply because they are annoying, argues Tom Stafford, it’s because they trigger a deep-seated rage within us by breaking the moral order of the road.

 

Something about cyclists seems to provoke fury in other road users. If you doubt this, try a search for the word “cyclist” on Twitter. As I write this one of the latest tweets is this: “Had enough of cyclists today! Just wanna ram them with my car.” This kind of sentiment would get people locked up if directed against an ethnic minority or religion, but it seems to be fair game, in many people’s minds, when directed against cyclists. Why all the rage?

I’ve got a theory, of course. It’s not because cyclists are annoying. It isn’t even because we have a selective memory for that one stand-out annoying cyclist over the hundreds of boring, non-annoying ones (although that probably is a factor). No, my theory is that motorists hate cyclists because they offend the moral order.

Driving is a very moral activity – there are rules of the road, both legal and informal, and there are good and bad drivers. The whole intricate dance of the rush-hour junction only works because everybody knows the rules and follows them: keeping in lane; indicating properly; first her turn, now mine, now yours. Then along come cyclists, innocently following what they see as the rules of the road, but doing things that drivers aren’t allowed to: overtaking queues of cars, moving at well below the speed limit or undertaking on the inside.

You could argue that driving is like so much of social life, it’s a game of coordination where we have to rely on each other to do the right thing. And like all games, there’s an incentive to cheat. If everyone else is taking their turn, you can jump the queue. If everyone else is paying their taxes you can dodge them, and you’ll still get all the benefits of roads and police.

In economics and evolution this is known as the “free rider problem”: if you create a common benefit – like taxes or orderly roads – what’s to stop some people reaping the benefit without paying their dues? The free rider problem creates a paradox for those who study evolution, because in a world of selfish genes it appears to make cooperation unlikely. Even if a bunch of selfish individuals (or genes) recognise the benefit of coming together to co-operate with each other, once the collective good has been created it is rational, in a sense, for everyone to start trying to freeload off the collective. This makes any cooperation prone to collapse. In small societies you can rely on cooperating with your friends, or kin, but as a society grows the problem of free-riding looms larger and larger.

Social collapse

Humans seem to have evolved one way of enforcing order onto potentially chaotic social arrangements. This is known as “altruistic punishment”, a term used by Ernst Fehr and Simon Gachter in a landmark paper published in 2002. An altruistic punishment is a punishment that costs you as an individual, but doesn’t bring any direct benefit. As an example, imagine I’m at a football match and I see someone climb in without buying a ticket. I could sit and enjoy the game (at no cost to myself), or I could try to find security to have the guy thrown out (at the cost of missing some of the game). That would be altruistic punishment.

Altruistic punishment, Fehr and Gachter reasoned, might just be the spark that makes groups of unrelated strangers co-operate. To test this they created a co-operation game played by constantly shifting groups of volunteers, who never meet – they played the game from a computer in a private booth. The volunteers played for real money, which they knew they would take away at the end of the experiment. On each round of the game each player received 20 credits, and could choose to contribute up to this amount to a group project. After everyone had chipped in (or not), everybody (regardless of investment) got 40% of the collective pot.

Under the rules of the game, the best collective outcome would be if everyone put in all their credits, and then each player would get back more than they put in. But the best outcome for each individual was to free ride – to keep their original 20 credits, and also get the 40% of what everybody else put in. Of course, if everybody did this then that would be 40% of nothing.

In this scenario what happened looked like a textbook case of the kind of social collapse the free rider problem warns of. On each successive turn of the game, the average amount contributed by players went down and down. Everybody realised that they could get the benefit of the collective pot without the cost of contributing. Even those who started out contributing a large proportion of their credits soon found out that not everybody else was doing the same. And once you see this it’s easy to stop chipping in yourself – nobody wants to be the sucker.

Rage against the machine

A simple addition to the rules reversed this collapse of co-operation, and that was the introduction of altruistic punishment. Fehr and Gachter allowed players to fine other players credits, at a cost to themselves. This is true altruistic punishment because the groups change after each round, and the players are anonymous. There may have been no direct benefit to fining other players, but players fined often and they fined hard – and, as you’d expect, they chose to fine other players who hadn’t chipped in on that round. The effect on cooperation was electric. With altruistic punishment, the average amount each player contributed rose and rose, instead of declining. The fine system allowed cooperation between groups of strangers who wouldn’t meet again, overcoming the challenge of the free rider problem.
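As a sketch of how punishment changes the arithmetic, this hypothetical function extends the payoff rule above with fines. The assumption that each credit spent on fining removes three credits from the target is mine, for illustration – the column doesn’t give the exact ratio the experiment used:

```python
def payoffs_with_punishment(contributions, fines, endowment=20,
                            return_rate=0.4, fine_multiplier=3):
    """Public-goods payoffs plus altruistic punishment.

    fines[i][j] is what player i spends on fining player j: the
    punisher loses that amount, and the target loses three times it
    (the 3:1 ratio is an illustrative assumption)."""
    pot = sum(contributions)
    n = len(contributions)
    result = []
    for i in range(n):
        spent = sum(fines[i])                          # cost to the punisher
        received = sum(fines[j][i] for j in range(n))  # fines aimed at player i
        result.append(endowment - contributions[i] + return_rate * pot
                      - spent - fine_multiplier * received)
    return result

# Three cooperators each spend 3 credits fining the free rider:
fines = [[0, 0, 0, 3],
         [0, 0, 0, 3],
         [0, 0, 0, 3],
         [0, 0, 0, 0]]

# Free riding now loses: the free rider ends with 17, the cooperators with 21
print(payoffs_with_punishment([20, 20, 20, 0], fines))
```

The point of the sketch is that punishment, though costly to the punisher, makes free riding unprofitable for the target – which is why contributions recovered once fines were allowed.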

How does this relate to why motorists hate cyclists? The key is in a detail from that classic 2002 paper. Did the players in this game sit there calmly calculating the odds, running game theory scenarios in their heads and reasoning about cost/benefit ratios? No, that wasn’t the immediate reason people fined players. They dished out fines because they were mad as hell. Fehr and Gachter, like the good behavioural experimenters they are, made sure to measure exactly how mad that was, by asking players to rate their anger on a scale of one to seven in reaction to various scenarios. When players were confronted with a free-rider, almost everyone put themselves at the upper end of the anger scale. Fehr and Gachter describe these emotions as a “proximate mechanism”. This means that evolution has built into the human mind a hatred of free-riders and cheaters, which activates anger when we confront people acting like this – and it is this anger which prompts altruistic punishment. In this way, the emotion is evolution’s way of getting us to overcome our short-term self-interest and encourage collective social life.

So now we can see why there is an evolutionary pressure pushing motorists towards hatred of cyclists. Deep within the human psyche, fostered there because it helps us co-ordinate with strangers and so build the global society that is a hallmark of our species, is an anger at people who break the rules, who take the benefits without contributing to the cost. And cyclists trigger this anger when they use the roads but don’t follow the same rules as cars.

Now cyclists reading this might think “but the rules aren’t made for us – we’re more vulnerable, discriminated against, we shouldn’t have to follow the rules.” Perhaps true, but irrelevant when other road-users see you breaking rules they have to keep. Maybe the solution is to educate drivers that cyclists are playing an important role in a wider game of reducing traffic and pollution. Or maybe we should just all take it out on a more important class of free-riders, the tax-dodgers.

BBC Column: The psychology of the to-do list

My latest column for BBC Future. The original is here.

Your mind loves it when a plan comes together – the mere act of planning how to do something frees us from the burden of unfinished tasks.

If your daily schedule and email inbox are anything like mine, you’re often left in a state of paralysis by the sheer bulk of outstanding tasks weighing on your mind. In this respect, David Allen’s book Getting Things Done is a phenomenon. An international best-seller describing a personal productivity system known simply as GTD, it’s been hailed as a “new cult for the info age”. The heart of the system is a way of organising the things you have to do, based on Allen’s experience of working with busy people and helping them to make time for the stuff they really want to do.

Ten years after the book was first published in 2001, scientific research caught up with the productivity guru, and it revealed exactly why his system is so popular – and so effective.

The key principle behind GTD is writing down everything that you need to remember, and filing it effectively. This seemingly simple point is based around far more than a simple filing cabinet and a to-do list. Allen’s system is like a to-do list in the same way a kitten is like a Bengal tiger.

“Filing effectively”, in Allen’s sense, means a system with three parts: an archive, where you store stuff you might need one day (and can forget until then); a current task list, in which everything is stored as an action; and a “tickler file” of 43 folders in which you organise reminders of things to do (43 folders because that’s one for each of the next thirty-one days plus one for each of the next twelve months).
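The day/month split is easier to see as code. This minimal sketch assumes nothing about Allen’s system beyond the 43-folder structure described above; the folder names and the helper function are my own invention:

```python
from datetime import date

# 31 day folders plus 12 month folders = the 43 folders of the tickler file
DAY_FOLDERS = [f"day-{d:02d}" for d in range(1, 32)]
MONTH_FOLDERS = [f"month-{m:02d}" for m in range(1, 13)]
assert len(DAY_FOLDERS + MONTH_FOLDERS) == 43

def tickler_folder(today, due):
    """File a reminder in a day folder if it's due within the next
    month, otherwise in the folder for its month."""
    days_ahead = (due - today).days
    if 0 <= days_ahead < 31:
        return f"day-{due.day:02d}"
    return f"month-{due.month:02d}"

print(tickler_folder(date(2024, 3, 1), date(2024, 3, 15)))  # day-15
print(tickler_folder(date(2024, 3, 1), date(2024, 9, 1)))   # month-09
```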

The current task list is a special kind of to-do list because all the tasks are defined by the next action you need to take to progress them. This simple idea is remarkably effective in overcoming the kind of inertia that stops us resolving items on our lists. As an example, try picking a stubborn item from your own to-do list and redefining it until it becomes something that actually involves moving one of your limbs. Something necessary but unexciting like “Organise a new fence for the garden” becomes “ring Marcus and ask who fixed his fence”. Or, even better, with further specifics on how to move your fingers: “dial 2 626 81 19 and ask Marcus who fixed his fence”.

Breaking each task down into its individual actions allows you to convert your work into things you can either physically do, or forget about, happy in the knowledge that it is in the system. Each day you pick up the folder for that day and either action the item, or defer it to another folder for a future day or month. Allen is fanatical on this – he wants people to make a complete system for self-management, something that will do the remembering and monitoring for you, so your mind is freed up.

So what’s the psychology that backs this up? Roy Baumeister and EJ Masicampo at Florida State University were interested in an old phenomenon called the Zeigarnik Effect, which is what psychologists call our mind’s tendency to get fixated on unfinished tasks and forget those we’ve completed. You can see the effect in action in a restaurant or bar – you can easily remember a drinks order, but then instantly forget it as soon as you’ve put the drinks down. I’ve mentioned this effect before when it comes to explaining the psychology behind Tetris.

A typical way to test for the Zeigarnik Effect is to measure if an unfulfilled goal interferes with the ability to carry out a subsequent task. Baumeister and Masicampo discovered that people did worse on a brainstorming task when they were prevented from finishing a simple warm-up task – because the warm-up task was stuck in their active memory. What Baumeister and Masicampo did next is the interesting thing; they allowed some people to make plans to finish the warm-up task. They weren’t allowed to finish it, just to make plans on how they’d finish it. Sure enough, those people allowed to make plans were freed from the distracting effect of leaving the warm-up task unfinished.

Back to the GTD system, its key insight is that your attention has a limited capacity – you can only fit so much in your mind at any one time. The GTD archive and reminder system acts as a plan for how you’ll do things, releasing the part of your attention that is struggling to hold each item on your to-do list in mind. Rather than remove things from our sight by doing them, Allen, and the research, suggest we merely need a good plan of when and how to do them. The mere act of planning how to finish something satisfies the itch that keeps uncompleted tasks in our memory.

BBC Column: Are we naturally good or bad?

My BBC Future column from last week. The original is here. I started out trying to write about research using economic games with apes and monkeys but I got so bogged down in the literature I switched to this neat experiment instead. Ed Yong is a better man than me and wrote a brilliant piece about that research, which you can find here.

It’s a question humanity has repeatedly asked itself, and one way to find out is to take a closer look at the behaviour of babies.… and use puppets.

Fundamentally speaking, are humans good or bad? It’s a question that has been asked repeatedly throughout human history. For thousands of years, philosophers have debated whether we have a basically good nature that is corrupted by society, or a basically bad nature that is kept in check by society. Psychology has uncovered some evidence which might give the old debate a twist.

One way of asking about our most fundamental characteristics is to look at babies. Babies’ minds are a wonderful showcase for human nature. Babies are humans with the absolute minimum of cultural influence – they don’t have many friends, have never been to school and haven’t read any books. They can’t even control their own bowels, let alone speak the language, so their minds are as close to innocent as a human mind can get.

The only problem is that the lack of language makes it tricky to gauge their opinions. Normally we ask people to take part in experiments, giving them instructions or asking them to answer questions, both of which require language. Babies may be cuter to work with, but they are not known for their obedience. What’s a curious psychologist to do?

Fortunately, you don’t necessarily have to speak to reveal your opinions. Babies will reach for things they want or like, and they will tend to look longer at things that surprise them. Ingenious experiments carried out at Yale University in the US used these measures to look at babies’ minds. Their results suggest that even the youngest humans have a sense of right and wrong, and, furthermore, an instinct to prefer good over evil.

How could the experiments tell this? Imagine you are a baby. Since you have a short attention span, the experiment will be shorter and loads more fun than most psychology experiments. It was basically a kind of puppet show; the stage was a scene featuring a bright green hill, and the puppets were cut-out shapes with stick-on wobbly eyes: a triangle, a square and a circle, each in its own bright colour. What happened next was a short play, as one of the shapes tried to climb the hill, struggling up and falling back down again. Next, the other two shapes got involved, either helping the climber up the hill, by pushing up from behind, or hindering the climber, by pushing back from above.

Already something amazing, psychologically, is going on here. All humans are able to interpret the events in the play in terms of the story I’ve described. The puppets are just shapes. They don’t make human sounds or display human emotions. They just move about, and yet everyone reads these movements as purposeful, and revealing of their characters. You can argue that this “mind reading”, even in infants, shows that it is part of our human nature to believe in other minds.

Great expectations

What happened next tells us even more about human nature. After the show, infants were given the choice of reaching for either the helping or the hindering shape, and it turned out they were much more likely to reach for the helper. This can be explained if they were reading the events of the show in terms of motivations – the shapes weren’t just moving at random: to the infant, the shape pushing uphill “wanted” to help out (and so was nice) and the shape pushing downhill “wanted” to cause problems (and so was nasty).

The researchers used an encore to confirm these results. Infants saw a second scene in which the climber shape made a choice to move towards either the helper shape or the hinderer shape. The time infants spent looking in each of the two cases revealed what they thought of the outcome. If the climber moved towards the hinderer the infants looked significantly longer than if the climber moved towards the helper. This makes sense if the infants were surprised when the climber approached the hinderer. Moving towards the helper shape would be the happy ending, and evidently it was what the infants expected. If the climber moved towards the hinderer it was a surprise – as surprising as it would be for you or me to see someone hug a man who had just knocked him over.

The way to make sense of this result is if infants, with their pre-cultural brains, had expectations about how people should act. Not only do they interpret the movement of the shapes as resulting from motivations, but they prefer helping motivations over hindering ones.

This doesn’t settle the debate over human nature. A cynic would say that it just shows that infants are self-interested and expect others to be the same way. At a minimum though, it shows that tightly bound into the nature of our developing minds is the ability to make sense of the world in terms of motivations, and a basic instinct to prefer friendly intentions over malicious ones. It is on this foundation that adult morality is built.

BBC Column: when you want what you don’t like

My BBC Future column from Tuesday. The original is here. It’s a Christmas theme folks, but hopefully I cover an interesting research area too: Berridge, Robinson and colleagues’ work on the wanting/liking distinction.

As the holiday season approaches, Tom Stafford looks at festive overindulgence, and explains how our minds tell us we want something even if we may not like it.

Ah, Christmas, the season of peace, goodwill and overindulgence. If this year is like others, I’ll probably be taking up residence on the couch after a big lunch, continuing to munch my way through packets of unhealthy snacks, and promising myself that I’ll live a more virtuous life once the New Year begins.

It was on one such occasion that I had an epiphany in the psychology of everyday life. I’d just finished the last crisp of a large packet, and the thought occurred to me that I don’t actually like crisps that much. But there I was, covered in crumbs and post-binge guilt, saturated fats coursing through my body looking for nice arteries to settle down on. In an effort to distract myself from the urge to reach for another packet, I started to think about the peculiar psychology of the situation.

Every bite seemed essential, but in a way that seemed to suggest I was craving them rather than liking them. Fortunately for my confusion (and my arteries), there’s some solid neuroscience to explain how we can want something we don’t like.

Normally wanting and liking are tightly bound together. We want things we like and we like the things we want. But experiments by the University of Michigan’s Kent Berridge and colleagues show that this isn’t always the case. Wanting and liking are based on separate brain circuits and can be controlled independently.

To demonstrate this, Berridge used a method called “taste reactivity”, in effect, recording the faces pulled when animals are given different kinds of food. Give an adult human something sweet and they’ll lick their lips. This might sound obvious, but when you take it to the next level in terms of detail and rigour you start to get a powerful system for telling how much an animal likes a particular type of food. Taste reactivity involves defining the reactions precisely – for example, lip-licking would be defined as “a mild rhythmic smacking, slight protrusions of the tongue, a relaxed expression accompanied sometimes by a slight upturn of the corners of the mouth” – and then looking for this same expression in other species. A baby human can’t tell you they like the taste like an adult can, but you can see the same expression. A chimpanzee will do the same with a sweet taste. A rat won’t do exactly the same thing, but they do something similar. By carefully observing and coding the facial expressions that accompany nice and nasty tastes, you can tell what an animal is enjoying and what they aren’t.

Pleasure principles

This method is a breakthrough because it gives us another way of looking at how non-human species feel about things. Most animal psychology uses overt actions – things like pressing levers – as measures. So, for example, if you want to see how a reward affects a rat, you put it in a box with a lever and give it food each time it presses the lever. Sure enough, the rat will learn to press the lever once it learns that this produces food. Taste reactivity creates an additional measure, allowing us insight into how much the animal enjoys the food, as well as what the food makes it want to do.

From this, the neuroscientists have been able to show that wanting and liking are governed by separate circuits in the brain. The liking system is based in the subcortex, the part of our brain that is most similar to other species. Electrical stimulation here, in an area called the nucleus accumbens, is enough to cause pleasure. Sadly, you need brain surgery and implanted electrodes to experience this. But another way you can stimulate this bit of the brain is via the opioid chemical system, which is the brain messenger system directly affected by drugs like heroin. Like brain surgery, this is also NOT recommended.

Wanting happens in nearby, but distinct, circuits. These are more widely spread around the subcortex than the liking circuits, and use a different chemical messenger system, one based around a neurotransmitter called dopamine. Surprisingly, it is this circuit rather than the one for liking which seems to play a primary role in addiction. For addicts a key aspect of their condition is the way in which people, situations and things associated with drug taking become reminders of the drug that are impossible to ignore. Berridge has hypothesised that this is due to a drug’s direct effects on the wanting system. For addicts any reminder of drug taking triggers a neural cascade, which culminates in feelings of desire, but crucially, without needing any actual enjoyment of the drug to occur.

The reason wanting and liking circuits are so near each other is that they normally work closely together, ensuring you want what you like. But in addiction, the theory goes, the circuits can become uncoupled, so that you get extreme wanting without a corresponding increase in pleasure. Matching this, addicts are notable for enjoying the thing they are addicted to less than non-addicts. This is the opposite of most activities, where people who do the most are also the ones who enjoy it the most. (Most activities except another Christmas tradition, watching television, where you see the same pattern as with drug addictions – people who watch the most enjoy it the least).

So now you know what to do when you find yourself chomping your way through yet another packet of crisps over the holiday period. Watch your face and see if you are licking your lips. If you are, perhaps your liking circuits are fully engaged and you’ll be happy with what you’ve eaten when you’re finished. If there’s no lip-licking then perhaps your wanting circuits are in control and you need to exercise some self-restraint. Perhaps after the next mouthful, though.

BBC Column: political genes

Here’s my BBC Future column from last week. The original is here. The story here isn’t just about politics, although that’s an important example of capture by genetic reductionists. The real moral is about how the things that we measure are built into our brains by evolution: usually they aren’t written in directly, but as emergent outcomes..

There’s growing evidence to suggest that our political views can be inherited. But before we decide to ditch the ballot box for a DNA test, Tom Stafford explains why knowing our genes doesn’t automatically reveal how our minds work.

There are many factors that shape and influence our political views: our upbringing, career, perhaps our friends and partners. But for a few years there’s been a growing body of evidence to suggest that there could be a more fundamental factor behind our choices: political views could be influenced by our genes.

The idea that political views have a genetic component is now widely accepted – or at least widely accepted enough to become a field of study with its own name: genopolitics. This began with a pivotal study, which showed that identical twins shared more similar political opinions than fraternal twins. It suggested that political opinion isn’t just influenced by dinner table conversation (which both kinds of twins share), but by parents’ genes (of which identical twins share more than fraternal twins do). The strongest finding from this field is that the position people occupy on a scale from liberal to conservative is heritable. The finding is surprisingly strong, allowing us to use genetic information to predict variations in political opinion on this scale more reliably than we can use genetic information to predict, say, longevity, or alcoholism.

Does this mean we can give up on elections soon, and just have people send in their saliva samples? Not quite, and this highlights a more general issue with regards to seeking genetic roots behind every aspect of our minds and bodies.

Since we first saw the map of the human genome over ten years ago, it might have seemed like we were poised to decode everything about human life. And through military-grade statistics and massive studies of how traits are shared between relatives, biologists are finding more and more genetic markers for our appearance, health and our personalities.

But there’s a problem – there simply isn’t enough information in the human genome to tell us everything. An individual human has only around 20,000 genes, slightly fewer than wild rice. This means there is about the same amount of information in your DNA as there is in eight tracks on your mp3 player. What forms the rest of your body and behaviour is the result of a complex unfolding of interactions among your genes, the proteins they create, and the environment.

In other words, when we talk about genes predicting political opinion, it doesn’t mean we can find a gene for voting behaviour – nor one for something like dyslexia or any other behaviour, for that matter. Leaving aside the fact that the studies measured “political beliefs” using an extremely simple scale, one that will give people with very different beliefs the same score, let’s focus on what it really means to say that genes can predict scoring on this scale.

Getting emotional

Obviously there isn’t a gene controlling how people answer questions about their political belief. That would be ridiculous, and require us to assume that somewhere, lurking in the genome, was a gene that lay dormant for millions of years until political scientists invented questionnaire studies. Extremely unlikely.

But let’s not stop there. It isn’t really any more plausible to imagine a gene for voting for liberal rather than conservative political candidates. How could such a gene evolve before the invention of democracy? What would it do before voting became a common behaviour?

The limited amount of information in the genome means that it will be rare to talk of “genes for X”, where X is a specific, complex outcome. Yes, some simple traits – like eye colour – are directly controlled by a small number of genes. But most things we’re interested in measuring about everyday life – for instance, political opinions, other personality traits or common health conditions – have no sole genetic cause. The strength of the link between genetics and the liberal-conservative scale suggests that something more fundamental is being influenced by the genes, something that in turn influences political beliefs.

One candidate could be brain systems controlling our emotional responses. For instance, a study showed that American volunteers who started to sweat most when they heard a sudden noise were also more likely to support capital punishment and the Iraq War. This implies that people whose basic emotional responses to threats are more pronounced end up developing a constellation of more right-wing political opinions. Another study, this time in Britain, showed differences in brain structure between liberals and conservatives – with the amygdala, a part of the brain that learns emotional responses, being larger in conservatives. Again, this suggests that differences in political beliefs might arise from differences in emotional processes.

But notice that there isn’t any suggestion that the political opinions are directly controlled by biology. Rather, the political opinions are believed to develop differently in people with different basic biology. Something like the size of a particular brain area is influenced by our genes, but the pathway from our DNA to an apparently simple variation in a brain region is one with many twists, turns and opportunities for other genes and accidents of history to intervene.

So the idea that genes can have some influence on political views shouldn’t be shocking – it would be weird if there wasn’t some form of genetic influence. But rather than being the end of the story, it just deepens the mystery of how our biology and our ideas interact.

Where is your mind?

My BBC Future column from a few days ago. The original is here. I’m donating the fee from this article to Wikipedia. Read the column and it should be obvious why. Perhaps you should too: donate.wikimedia.org.

We like to think our intelligence is self-made; it happens inside our heads, the product of our inner thoughts alone. But the rise of Google, Wikipedia and other online tools has made many people question the impact of these technologies on our brains. Is typing in the search term, “Who has played James Bond in the movies?” the same as knowing that the answer is Sean Connery, George Lazenby, Roger Moore, Timothy Dalton, Pierce Brosnan and Daniel Craig (… plus David Niven in Casino Royale)? Can we say we know the answer to this question when what we actually know is how to rapidly access the information?

I’ve written before about whether or not the internet is rewiring our brains, but here the question is about how we seek to define intelligence itself. And the answer appears to be an intriguing one. Because when you look at the evidence from psychological studies, it suggests that much of our intelligence comes from how we coordinate ourselves with other people and our environment.

An influential theory among psychologists is that we’re cognitive misers. This is the idea that we are reluctant to do mental work unless we have to; we try to avoid thinking things through fully when a short cut is available. If you’ve ever voted for the political candidate with the most honest smile, or chosen a restaurant based on how many people are already sitting in there, then you’ve been a cognitive miser. The theory explains why we’d much rather type a zipcode into a sat-nav device or Google Maps than memorise and recall the location of a venue – it’s so much easier to do so.

Research shows that people don’t tend to rely on their memories for things they can easily access. Things like the world in front of our eyes, for example, can be changed quite radically without people noticing. Experiments have shown that buildings can somehow disappear from pictures we’re looking at, or the people we’re talking to can be switched with someone else, and often we won’t notice – a phenomenon called “change blindness”. This isn’t an example of human stupidity – far from it, in fact – it is an example of mental efficiency. The mind relies on the world as a better record than memory, and usually that’s a good assumption.

As a result, philosophers have suggested that the mind is designed to spread itself out over the environment. So much so that, they suggest, the thinking is really happening in the environment as much as it is happening in our brains. The philosopher Andy Clark called humans “natural born cyborgs”, beings with minds that naturally incorporate new tools, ideas and abilities. From Clark’s perspective, the route to a solution is not the issue – having the right tools really does mean you know the answers, just as much as already knowing the answer.

Society wins

A memory study by Daniel Wegner of Harvard University provides a neat example of this effect. Couples were asked to come into the lab to take a memorisation test. Half the couples were kept together, and half were reassigned to pair up with someone they didn’t know. Both groups then studied a list of words in silence, and were then tested individually. The pairs that were made up of a couple in a relationship could remember more items, both overall and as individuals.

What happened, according to Wegner, was that the couples in a relationship had a good understanding of their partners. Because of this they would tacitly divide up the work between them, so that, say, one partner would remember words to do with technology, assuming the other would remember the words to do with sports. In this way, each partner could concentrate on their strengths, and so individually they outperformed people in couples where no mental division of labour was possible. Just as you rely on a search engine for answers, so you can rely on people you deal with regularly to think about certain things, developing a shared system for committing items to memory and bringing them out again, what Wegner called “transactive memory”.

Having minds that work this way is one of the great strengths of the human species. Rather than being forced to rely on our own resources for everything, we can share our knowledge and so pool our understanding. Technology keeps track of things for individuals so we don’t have to, while large systems of knowledge serve the needs of society as a whole. I don’t know how a computer works, or how to grow broccoli, but that knowledge is out there and I get to benefit. And the internet provides even more potential to share this knowledge. Wikipedia is one of the best examples – an evolving store of the world’s knowledge from which everyone can benefit. I use Wikipedia every day, aware of all the caveats of doing so, because it supports me in all the thinking I do for things like this column.

So as well as having a physical environment – like the rooms or buildings we live or work in – we also have a mental environment. Which means that when I ask you where your mind is, you shouldn’t point toward the centre of your forehead. As research on areas like transactive memory shows, our minds are made up just as much by the people and tools around us as they are by the brain cells inside our skull.

ENDNOTE: Wikipedia is an unparalleled democratisation of knowledge, a wonderful sharing of human intelligence that’s free for anyone to view. I’m donating the fee for this article to help support Wikipedia’s work. If you feel you can help out please follow this link: https://donate.wikimedia.org.

BBC Future Column: Why is it so hard to give good directions?

My BBC Future column from last week. Original here.

Psychologically speaking it is a tricky task, because our minds find it difficult to appreciate how the world looks to someone who doesn’t know it yet.

We’ve all been there – the directions sounded so clear when we were told them. Every step of the journey seemed obvious, we thought we had understood the directions perfectly. And yet here we are miles from anywhere, after dark, in a field arguing about whether we should have gone left or right at the last turn, whether we’re going to have to sleep here now, and exactly whose fault it is.

The truth is we shouldn’t be too hard on ourselves. Psychologically speaking giving good directions is a particularly difficult task.

The reason we find it hard to give good directions is because of the “curse of knowledge”, a psychological quirk whereby, once we have learnt something, we find it hard to appreciate how the world looks to someone who doesn’t know it yet. We don’t just want people to walk a mile in our shoes, we assume they already know the route. Once we know the way to a place we don’t need directions, and descriptions like “it’s the left about halfway along” or “the one with the little red door” seem to make full and complete sense.

But if you’ve never been to a place before, you need more than a description of a place; you need an exact definition, or a precise formula for finding it. The curse of knowledge is the reason why, when I had to search for a friend’s tent in a field, their advice of “it’s the blue one” seemed perfectly sensible to them and was completely useless for me, as I stood there staring blankly at hundreds of blue tents.

This same quirk is why teaching is so difficult to do well. Once you are familiar with a topic it is very hard to understand what someone who isn’t familiar with it needs to know. The curse of knowledge isn’t a surprising flaw in our mental machinery – really it is just a side effect of our basic alienation from each other. We all have different thoughts and beliefs, and we have no special access to each other’s minds. A lot of the time we can fake understanding by mentally simulating what we’d want in someone else’s position. We have thoughts along the lines of “I’d like it if there was one bagel left in the morning” and therefore conclude “so I won’t eat all the bagels before my wife gets up in the morning”. This shortcut allows us to appear considerate, without doing any deep thought about what other people really know and want.

“OK, now what?”

This will only get you so far. Some occasions call for a proper understanding of other people’s feelings and beliefs. Giving directions is one, but so is understanding myriad aspects of everyday conversation which involve feelings, jokes or suggestions. For illustration, consider the joke that some research has suggested may be the world’s funniest (although what exactly that means is another story):

 

Two hunters are out in the woods when one of them collapses. He doesn’t seem to be breathing and his eyes are glazed. The other guy whips out his phone and calls the emergency services. He gasps, “My friend is dead! What can I do?” The operator says “Calm down. I can help. First, let’s make sure he’s dead.” There is a silence, then a shot is heard. Back on the phone, the guy says “OK, now what?”

 

The joke is funny because you can appreciate that the hunter had two possible interpretations of the operator’s instructions, and chose the wrong one. To appreciate the interpretations you need to have a feel for what the operator and the hunter know and desire (and to be surprised when the hunter’s desire to do anything to help isn’t over-ruled by a desire to keep his friend alive).

To do this mental simulation you recruit what psychologists call your “Theory of Mind”, the ability to think about others’ beliefs and desires. Our skill at Theory of Mind is one of the things that distinguishes humans from all other species – only chimpanzees seem to have anything approaching a true understanding that others might believe different things from themselves. We humans, on the other hand, seem primed from early infancy to practise thinking about how other humans view the world.

The fact that the curse of knowledge exists tells us how hard a problem it is to think about other people’s minds. Like many hard cognitive problems – such as seeing, for example – the human brain has evolved specialist mechanisms which are dedicated to solving it for us, so that we don’t normally have to expend conscious effort. Most of the time we get the joke, just as most of the time we simply open our eyes and see the world.

The good news is that your Theory of Mind isn’t completely automatic – you can use deliberate strategies to help you think about what other people know. A good one when writing is simply to force yourself to check every term to see if it is jargon – something you’ve learnt the meaning of but not all your readers will know. Another strategy is to tell people what they can ignore, as well as what they need to know. This works well with directions (and results in instructions like “keep going until you see the red door. There’s a pink door, but that’s not it”).

With a few tricks like this, and perhaps some general practice, we can turn the concept of reading other people’s minds – what some psychologists call “mind-mindedness” – into a habit, and so improve our Theory of Mind abilities (something that most of us remember struggling hard to do in adolescence). Which is a good thing, since good Theory of Mind is what makes a considerate partner, friend or co-worker – and a good giver of directions.

BBC Future column: The Psychology Of Tetris

Last week’s BBC Future column. The original is here. There’s a more melancholy and personal version of this column I could have written, called ‘I lost years of my life to Sid Meier’s Civilization’, but since the game is now out on iPhone I didn’t have time to write it.

The secret to the popular game’s success is that it takes advantage of the mind’s basic pleasure in tidying up and uses it against us.

Shapes fall from the sky, all you have to do is to control how they fall and fit within each other. A simple premise, but add an annoyingly addictive electronica soundtrack (based on a Russian folk tune called Korobeiniki, apparently) and you have a revolution in entertainment.

Since Tetris was launched on the world in 1986, millions of hours have been lost to playing this simple game. In that time we’ve seen games consoles grow in power, and with them the appearance of everything from Call of Duty to World of Warcraft. Yet block and puzzle games like Tetris still have a special place in our hearts. Why are they so compelling?

The writer Jeffrey Goldsmith was so obsessed with Tetris that he wrote a famous article asking if the game’s creator Alexey Pajitnov had invented “a pharmatronic?” – a video game with the potency of an addictive drug. Some people say that after playing the game for hours they see falling blocks in their dreams or buildings move together in the street – a phenomenon known as the Tetris Effect. Such is its mental pull, there’s even been the suggestion that the game might be able to prevent flashbacks in people with PTSD.

I had my own Tetris phase, when I was a teenager, and spent more hours than I should have trying to align the falling blocks in rows. Recently, I started thinking about why games like Tetris are so compelling. My conclusion? It’s to do with a deep-seated psychological drive to tidy up.

Many human games are basically ritualised tidying up. Snooker, or pool if you are non-British, is a good example. The first person makes a mess (the break) and then the players take turns in potting the balls into the pockets, in a very particular order. Tetris adds a computer-powered engine to this basic scenario – not only must the player tidy up, but the computer keeps throwing extra blocks from the sky to add to the mess. It looks like a perfect example of a pointless exercise – a game that doesn’t teach us anything useful, has no wider social or physical purpose, but which weirdly keeps us interested.

There’s a textbook psychological phenomenon called the Zeigarnik Effect, named after Russian psychologist Bluma Zeigarnik. In the 1920s, Zeigarnik was in a busy cafe and noticed that the waiters had fantastic memories for orders – but only up until the orders had been delivered. They could remember the requests of a party of 12, but once the food and drink had hit the table they forgot about it instantly, and were unable to recall what had been so solid moments before. Zeigarnik gave her name to the whole class of problems where incomplete tasks stick in memory.

The Zeigarnik Effect is also part of the reason why quiz shows are so compelling. You might not care about the year the British Broadcasting Corporation was founded or the percentage of the world’s countries that have at least one McDonald’s restaurant, but once someone has asked the question it becomes strangely irritating not to know the answer (1927 and 61%, by the way). Each question sticks in the mind, unfinished, until it is completed by the answer.

Game theory

Tetris holds our attention by continually creating unfinished tasks. Each action in the game allows us to solve part of the puzzle, filling up a row or rows completely so that they disappear, but is also just as likely to create new, unfinished work. A chain of these partial-solutions and newly triggered unsolved tasks can easily stretch to hours, each moment full of the same kind of satisfaction as scratching an itch.

The other reason why Tetris works so well is that each unfinished task only appears at the same time as its potential solution – those blocks continuously fall from the sky, each one a problem and a potential solution. Tetris is a simple visual world, and solutions can immediately be tried out using the five control keys (move left, move right, rotate left, rotate right and drop – of course). Studies of Tetris players show that people prefer to rotate the blocks to see if they’ll fit, rather than think about if they’ll fit. Either method would work, of course, but Tetris creates a world where action is quicker than thought – and this is part of the key to why it is so absorbing. Unlike so much of life, Tetris makes an immediate connection between our insight into how we might solve a problem and the means to begin acting on it.

The Zeigarnik Effect describes a phenomenon, but it doesn’t really give any reason for why it happens. This is a common trick of psychologists: to pretend they have solved a riddle of the human mind by giving it a name, when all they’ve done is invent an agreed-upon name for the mystery rather than solve it. A plausible explanation for the existence of the Effect is that the mind is designed to reorganise around the pursuit of goals. If those goals are met, then the mind turns to something else.

Trivia takes advantage of this goal orientation by frustrating us until it is satisfied. Tetris goes one step further, and creates a continual chain of frustration and satisfaction of goals. Like a clever parasite, Tetris takes advantage of the mind’s basic pleasure in getting things done and uses it against us. We can go along with this, enjoying the short-term thrills in tidying up those blocks, even while a wiser, more reflective, part of us knows that the game is basically purposeless. But then all good games are, right?

Press Release Spam (an interlude)

Sorry to interrupt your normal psych/neuro programming, but this is just a short note to say that I have retired the tom@mindhacks.com email address. If you wish to contact me or Vaughan, please tweet us (details in rightbar).

I’ve retired the email address because of the amount of PR spam I’ve been getting, which has lowered the signal-to-noise ratio of this account so much it isn’t worth checking anymore. One of the reasons I get so much PR spam is that people like Vocus PR are selling my email address to publishers and university press offices, who then send me email about things I’m not interested in. For a while I was collecting the email addresses of these people so I could block them in Gmail. My list is here. I invite you to do a search for these addresses and label them spam (warning: this list contains real people from respectable organisations, but since they work in PR I am happy never to hear from them again).

If anyone can think of a good crowdsourced way of breaking the business model of people like Vocus, I’d love to hear from you.

BBC Column: Psychological self-defence for the age of email

My latest column for BBC Future. The original is here. Lots of the points made here apply to technology more generally.

Here’s a pretty safe assumption to make: you probably feel like you’re inundated with email, don’t you? It’s a constant trickle that threatens to become a flood. Building up, it is always nagging you to check it. You put up spam filters and create sorting systems, but it’s never quite enough. And that’s because the big problems with email are not just technical – they’re psychological. If we can understand these we’ll all be a bit better prepared to manage email, rather than let it manage us.

For this psychological self-defence course, we’re going to cover very briefly four fundamental aspects of human reasoning. These are features built into how the human mind works. If you know about them, you can watch out for them and – most importantly – catch yourself when one of these tendencies is leading you astray.

Pay it back

First up is reciprocity – our tendency to want to return like for like, whether that is a smile for a smile or a blow for a blow. Persuasion-guru Robert Cialdini cites reciprocity as being one of the six basic principles of influence: do something for someone, so they’ll feel they have to do something back. Suddenly freebies from salespeople make a lot more sense (and seem a lot more sinister).

Reciprocity works in email because we’re not just sending information through the ether, we’re communicating social information. Each email contains simple meta-messages, things like “I’m interested in what you’re doing”, or “This really matters to me”. Reciprocity means that each email is an invitation to a social encounter, and you know what that means – more emails sent back to you in reply.

Just think back to the last time you were away from email for a week: most likely the majority of the emails waiting for you in your inbox were from the first few days of your absence. Lots of our email is self-generated, responses to emails we’ve sent, a natural reaction oiled by the social grease of reciprocity. And this leads to another aspect of human reasoning, which is…

Reaping rewards

A part of us loves getting email – it provides basic proof that we’re part of society (and often more – it’s concrete evidence that someone wants to talk to us, invite us out, or tell us something). Our animal brains use some simple rules for processing rewards. The most fundamental of these rules is the so-called Law of Effect, which simply states that if something is followed by a reward, then animals tend to increase the frequency with which they do it.

But the way email is structured to reach us taps into another basic rule the brain uses for processing reward. Irregular rewards have a special power to reinforce repeated behaviour, something discovered by psychologists in the early twentieth century, and known for centuries by people who organise gambling (would anyone play slot machines if they just predictably gave you back 80% of the money you put in each time?).
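To see why unpredictability matters, here is a minimal sketch (the payout numbers are purely illustrative, not from any real machine): two simulated slot machines with exactly the same 80% long-run return, one paying it out predictably, the other as rare big wins.

```python
import random

def fixed_machine(stake):
    """Predictable: always pays back 80% of the stake."""
    return 0.8 * stake

def variable_machine(stake, rng):
    """Same 80% expected return, but irregular:
    an 8% chance of a 10x win, nothing the rest of the time."""
    return 10 * stake if rng.random() < 0.08 else 0.0

rng = random.Random(0)
plays = 100_000
average = sum(variable_machine(1, rng) for _ in range(plays)) / plays
# 'average' comes out close to 0.8 -- the same long-run return as the
# fixed machine, but only the irregular schedule hooks players
```

The inbox works the same way: most checks pay out nothing, the occasional one pays out big, and it is the unpredictability of the schedule, not the average reward, that keeps us coming back.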

Email drips into your consciousness during the day. Each time you check it you don’t know if you’ll be getting another boring work email, which isn’t very rewarding, or some exciting news or an opportunity, which is very rewarding. The schedule of these constant opportunities for surprise hooks us into checking email. To avoid it, you just need to fix your email so that you collect it all at once at regular intervals, such as every hour or twice a day, rather than checking each email as it arrives.

Close thrill

Hyperbolic discounting is another feature of how we’re wired to think about rewards. Discounting is the diminishing value of rewards as they get further away in time. It’s the thing that means that being offered 100 euros today is far more exciting than being offered 100 euros in ten years’ time. That discounting is hyperbolic means a reward that is very close gets drastically more attractive.

To see this, try thinking about whether you’d like 10 euros now or 20 euros in a year’s time. If you’re an impatient person maybe you’ll favour the 10 euros now, if you’re patient you can maybe wait for the 20 euros in a year’s time. But if we shift both rewards back in time by ten years, the choice stops being ambiguous: 10 euros in ten years’ time, or 20 euros in eleven years’ time is an easy call. Almost everyone would go for the second option.

What this shows is that the choice of a smaller amount of money only seemed attractive because it was closer in time. Hyperbolic discounting is why people will pay money to pick up today’s news, but won’t even bend down to pick up yesterday’s news. Immediacy creates value in our brains.
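The euros example can be written down with the standard hyperbolic discounting formula, V = A / (1 + kD), where A is the amount, D the delay and k a discount rate. A quick sketch (the choice of k = 1 is purely illustrative; real values vary from person to person):

```python
def hyperbolic_value(amount, delay_years, k=1.0):
    """Perceived value of a delayed reward: V = A / (1 + k*D)."""
    return amount / (1 + k * delay_years)

# 10 euros now vs 20 euros in a year: with k=1 it's a dead heat
near_small = hyperbolic_value(10, 0)    # 10.0
near_big = hyperbolic_value(20, 1)      # 10.0

# Shift both rewards ten years out and the larger one clearly wins
far_small = hyperbolic_value(10, 10)    # ~0.91
far_big = hyperbolic_value(20, 11)      # ~1.67
```

The near choice is a genuine dilemma because the hyperbola is steep close to the present; push both rewards into the future and the curve flattens, so the bigger amount dominates for almost any discount rate.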

Going back to email, think of a time you didn’t check your email for a week. If you’re like me, you probably opened your email expecting lots of exciting news – a sum of all the excitement you experience with each individual email. But actually, a week’s worth of email isn’t very exciting. The interest that email generates as you see it arriving in your inbox is an illusion generated by hyperbolic discounting. Every technology has its own logic, and part of the logic of email is the speed with which it is delivered, with the new mails always pushing their way to the top of the pile. This pull is as insidious as it is intense – apparently 59% of people surveyed by AOL are so addicted to keeping track of their email that they check it in the bathroom.

This is what makes me think that the very speed of email delivery is a handicap – email delivered with a half-hour delay would be easier to judge at its true value, and so be far less distracting.

Responsibility pressure

Finally, a fourth fundamental principle of human reasoning is our sense of ownership or responsibility. I’ve written recently about how we can be tricked into valuing something more by accidents of fate that put that thing in our possession. Email is prey to this bias: once something is there, it is natural to decide that it deserves our consideration, it is somehow our responsibility to read and respond.

Nowhere is this more apparent than the group email and the avalanche of replies that invariably ensues. Strike back by reminding yourself that not all email has to be replied to, that lots of issues will be – and should be – dealt with by other people. Ask yourself: “If I didn’t have this information in my inbox, would I go out looking for it?” Most of the time the answer is probably “no”, and that’s a sign that someone else is controlling your attention.

Unless you diligently maintain the boundaries of exactly what you are responsible for, email becomes a system for letting other people control your time. So delete that email and move on!

BBC Column: Can glass shape really affect how fast you drink?

My latest column for BBC Future. The original is here. I was hesitant to write this at first, since nobody loves a problematiser, but I figured that something in support of team “I think you’ll find it’s a bit more complicated than that” couldn’t hurt, and there’s an important general point in the final paragraphs about the way facts about behaviour are built from data, and why theory is important.

Recent reports say curved glasses make you drink beer quicker. But, we must be cautious about drawing simple conclusions from single studies.

We all love a neat science story, but even rock solid facts can be less revealing than they seem. Let’s take an example of a piece of psychology research reported recently: the idea that people drink faster from curved glasses.

Hundreds of news sources around the globe covered the findings, many of them changing the story slightly to report that people drink more (rather than faster) from a curved glass. At first it seems like a straightforward piece of psychology research, with clear implications: curved glasses will make pacing yourself harder, so you’ll end up drinking more than you should. Commentators agreed with the research (funded by Alcohol Research UK) – beverage manufacturers were probably onto this before, and will now be rushing to make us take our favourite tipple out of a curved glass.

But before we change our drinking habits or restock our glass collections, let’s look at what the scientists actually did.

Luckily for us the team of researchers from the University of Bristol, UK, published their paper in an open access journal, which means the research details are free for all to read.

The Bristol team invited participants into the lab and asked them to drink lager (or lemonade) from a straight glass or a curved one, while watching a nature documentary (a BBC one, I’m happy to report). They also asked their volunteers to judge when the glass was half full. The results of both tests were clear: participants finished their drink of lager sooner from the curved glass. They also judged the halfway point as being lower down the curved glass than the straight glass – suggesting a reason for the faster drinking: if people thought the glass was fuller than it really was, they would underestimate the rate at which they were drinking.

Human factor

Now this is all well and good, but there are many reasons why the results don’t mean that we can make people drink more by changing the shape of their glass. Importantly, none of these reasons would have to do with this research being wrong or inexpertly done. I’m absolutely certain that if we did the study ourselves we’d find exactly the same thing.

No, the reason you can’t jump to conclusions from this kind of study is that, inevitably, a single study can only test one aspect of the world, under one set of circumstances. This makes it hard to draw general conclusions of the sort that get reported. Notice how the psychologists measured one thing (rate of drinking lager), for just two different glasses, over a single drink, for one set of people (volunteers in Bristol, in 2012), and yet a generalised truth stating that “people drink more from curved glasses” emerged from this specific set of circumstances.

Now obviously, the aim of science is to come up with answers to questions that become generalised truths, but psychology is a domain in which it is fiendishly hard to establish them. If you are studying a simple system, then cause and effect is relatively easy to establish. For instance, the harder you throw a rock, the further it tends to travel. The relation between the force you put in and the acceleration of the rock you get out is straightforward. Add a human factor into the equation, however, and such simple relations begin to disappear. (Please don’t experiment by throwing rocks at people.)

To see how this limits the conclusions that can be drawn from the drinks study, think of even the most trivial factor that could change these results. Would you get the same result if people drank ale rather than lager? Probably. If they drank two pints rather than one? Maybe. If they drank in groups rather than watching TV (arguably closer to the circumstances of most drinking)? Who knows! It seems to me perfectly plausible that a social situation would produce different effects than a solo-drinking experience.

We could carry on. Would the effect be the same if we tried it in Minneapolis? In Lagos? In Kuala Lumpur, Reykjavik or Alice Springs? Most psychology studies are carried out on urban, affluent, students of the western world – a culturally unusual group, if you take a global or historical perspective. All the subjects studied were “social drinkers”, presumably with some learnt associations about curved and straight glasses. Maybe the Brits had learnt that expensive beer came in curved glasses. If this is the case, the result might be true for everyone who has a history of drinking from straight glasses in the UK, but not for other cultures where alcohol isn’t drunk like that.

Little things, big effect

Software entrepreneur Jim Manzi calls the rate at which small changes can have surprising effects on outcomes, and the consequent difficulty in drawing general conclusions, “causal density”. It’s because human psychology and social life are so causally dense that we can’t simply take straight reports that X affects Y and apply them across the board. But there are hundreds of these relationships reported all the time from the annals of psychology: glass shape affects drinking time, taller men are better paid, holding a hot drink makes you like someone, and so on. Surface effects like these are vulnerable to small changes in circumstances that might remove, or even reverse, the effect you’re relying on.

Psychology researchers know all these arguments, and that’s why they’re cautious about drawing simple conclusions from single studies. The challenge of psychology is to track down those results that actually do generalise across different situations.

The way to do this is to report findings that are about theories, not just about effects. The Bristol researchers show the way in their paper: as well as testing drinking speed, they relate it to people’s ability to estimate how full a glass is. They could have just measured drinking speed, but they knew they had to relate it to a theory about what people really believed to come up with a strong conclusion.

If we can find the right principles that affect people’s actions, then we can draw conclusions that cut across situations. Unless we know the reasons why someone does something, we’ll be tricked time and time again when we try to infer from what they do.