Why the other queue always seems to move faster than yours

Whether it is supermarkets or traffic, there are two possible explanations for why you feel the world is against you, explains Tom Stafford.

Sometimes I feel like the whole world is against me. The other lanes of traffic always move faster than mine. The same goes for the supermarket queues. While I’m at it, why does it always rain on those occasions I don’t carry an umbrella, and why do wasps always want to eat my sandwiches at a picnic and not other people’s?

It feels like there are only two reasonable explanations. Either the universe itself has a vendetta against me, or some kind of psychological bias is creating a powerful – but mistaken – impression that I get more bad luck than I should. I know this second option sounds crazy, but let’s just explore this for a moment before we get back to the universe-victim theory.

My impressions of victimisation are based on judgements of probability. Either I am making a judgement of causality (forgetting an umbrella makes it rain) or a judgement of association (wasps prefer the taste of my sandwiches to other people’s sandwiches). Fortunately, psychologists know a lot about how we form impressions of causality and association, and it isn’t all good news.

Our ability to think about causes and associations is fundamentally important, and always has been for our evolutionary ancestors – we needed to know if a particular berry makes us sick, or if a particular cloud pattern predicts bad weather. So it isn’t surprising that we automatically make judgements of this kind. We don’t have to mentally count events, tally correlations and systematically discount alternative explanations. We have strong intuitions about what things go together, intuitions that just spring to mind, often after very little experience. This is good for making decisions in a world where you often don’t have enough time to think before you act, but with the side-effect that these intuitions contain some predictable errors.

One such error is what’s called “illusory correlation”, a phenomenon whereby two things that are individually salient seem to be associated when they are not. In a classic experiment, volunteers were asked to look through psychiatrists’ fabricated case reports of patients who had responded to the Rorschach ink blot test. Some of the case reports noted that the patients were homosexual, and some noted that the patients saw things such as women’s clothes or buttocks in the ink blots. The case reports had been prepared so that there was no reliable association between the patient notes and the ink blot responses, but experiment participants – whether trained or untrained in psychiatry – reported strong (but incorrect) associations between some ink blot signs and patient homosexuality.

One explanation is that things that are relatively uncommon – homosexuality in this case, and ink blot responses mentioning women’s clothes – are more vivid (because of their rarity). This, combined with the effect of existing stereotypes, creates a mistaken impression that the two things are associated when they are not. This is a side effect of an intuitive mental machinery for reasoning about the world. Most of the time it is quick and delivers reliable answers – but it seems to be susceptible to error when dealing with rare but vivid events, particularly where preconceived biases operate. Associating bad traffic behaviour with ethnic minority drivers, or cyclists, is another case where people report correlations that just aren’t there. Both the minority (whether an ethnic minority or cyclists) and the bad behaviour stand out. Our quick-but-dirty inferential machinery leaps to the conclusion that the two are commonly associated, when they aren’t.

So here we have a mechanism which might explain my queuing woes. The other lanes or queues moving faster is one salient event, and my intuition wrongly associates it with the most salient thing in my environment – me. What, after all, is more important to my world than me? Which brings me back to the universe-victim theory. When my lane is moving along I’m focusing on where I’m going, ignoring the traffic I’m overtaking. When my lane is stuck I’m thinking about me and my hard luck, looking at the other lane. No wonder the association between me and being overtaken sticks in memory more.

This distorting influence of memory on our judgements lies behind a good chunk of my feelings of victimisation. In some situations there is a real bias. You really do spend more time being overtaken in traffic than you do overtaking, for example, because the overtaking happens faster. And the smoke really does tend to follow you around the campfire, because wherever you sit creates a warm up-draught that the smoke fills. But on top of all of this is a mind that exaggerates our own importance, giving each of us the false impression that we are more important in how events work out than we really are.

This is my BBC Future post from last Tuesday. The original is here.

Are classical music competitions judged on looks?

Looking at the evidence behind a recent news story

The headlines

The Los Angeles Times: People trust eyes – not ears – when judging musicians

Classic FM: Classical singers judged by actions not voice

Nature: Musicians’ appearances matter more than their sound

The story

If you wanted to pick out the musician who won a prestigious classical music competition would you listen to a clip of them playing or watch a silent video of them performing the same piece of music?

Most of us would go for an audio clip rather than video, and we’d be wrong. In a series of experiments, Chia-Jung Tsay, from University College London, showed that both novices and expert musicians were better able to pick out the winners when they watched rather than listened to them.

The moral, we’re told, is that how you look is more important than how you sound, even in elite classical music competitions.

What they actually did

Dr Tsay, herself a classically trained musician, used footage from real international classical music competitions. She took the top three finalists and asked volunteers to pick out the real winner – with a cash incentive – by looking at video without sound, sound without video, or both.

Over a series of experiments she showed that people think that audio will be more informative than video, but actually people are able to pick the real winner when watching video clips. They aren’t able to do this when listening to audio clips (these test subjects only perform at the level of chance). The shocking thing is that when people get sound and video clips, which notionally contain more information, they still perform at chance. The implication is that they would do better if they could block their ears and ignore the sound.

Follow up experiments suggested that people’s ability to pick winners depended on their being able to pick out things associated with “stage presence”. A video reduced to line drawings, designed to remove details and emphasise motion, still allowed people to pick out winners at an above chance rate. Another experiment showed that asking people to identify the “most confident, creative, involved, motivated, passionate, and unique performer” tallied with the real winners.

How plausible is this?

We’re a visual species. How things look really matters, as everyone who has dressed up for an interview knows. It’s also not uncommon for us to be misled into believing that how something looks isn’t as important as it really is (here’s an example: judging wine by the labels rather than the taste).

What is less plausible is the spin put on the story by the headlines. We all know that looks are important, but how can they really be more important than sound in a classical music competition? The most important thing really is the sound, but this research resonates with a popular cliché about how irrational we are.

Tom’s take

The secret to why these experiments give the results they do is in this detail: the judgement that people were asked to make was between the top three finalists in prestigious international competitions. In other words, each of these musicians is among the best in the world at what they do. The best of the best even.

In all probability there is a minute difference between their performances on any scale of quality. The paper itself admits that the judges themselves often disagree about who the winner is in these competitions.

The experimental participants were not scored according to some abstract ability to measure playing quality, but according to how well they were able to match the real-world competition outcomes.

The experiments show that matching the judges in these competitions can be done based on sight but not on sound. This isn’t because sight reveals playing quality, but because sight gives the experimental participants similar biases to the real judges. The real expert judges are biased by how the performers look – and why not, since there is probably so little to choose between them in terms of how they sound?

This is why the conclusion, spelt out in the original paper, is profoundly misleading: “The findings demonstrate that people actually depend primarily on visual information when making judgements about music performance”. It remains completely plausible that most of us, most of the time, judge music on how it sounds, just like we assumed before this research came out.

In ambiguous cases we might rely on looks over sounds – even the experts among us. This is a blow to musicians who thought it was always just about sound – but isn’t a revelation to the rest of us who knew that when choices are hard, whether during the job interview or the music competition, looks matter.

Read more

The original paper: Sight over sound in the judgment of music performance. Tsay, C-J (2013), Proceedings of the National Academy of Sciences

Special mention for the BBC and reporter Melissa Hogenboom who were the only people, as far as I know, who managed to report this story with an accurate headline: Sight dominates sound in music competition judging

The interaction between the senses is an active and fascinating research area. Read more from the Crossmodal Research Laboratory at the University of Oxford and the Cross-modal perception of music network at the University of Sheffield.

The Conversation

This article was originally published at The Conversation.
Read the original article.

The deafening silence

Not all silences are equal: some seem quieter than others. Why? It’s all to do with the way our brains adapt to the world around us, as Tom Stafford explains

A “deafening silence” is a striking absence of noise, so profound that it seems to have its own quality. Objectively it is impossible for one silence to be any different from another. But the way we use the phrase hints at a psychological truth.

The secret to a deafening silence is the period of intense noise that comes immediately before it. When this ends, the lack of sound appears quieter than silence. This sensation, as your mind tries to figure out what your ears are reporting, is what leads us to call a silence deafening.

What is happening here is a result of a process called adaptation, which describes the moving baseline against which new stimuli are judged. The brain tunes out any constant stimulation, allowing perception to focus on changes against this background rather than on absolute levels of stimulation. Turn your stereo up from four to five and it sounds louder, but as your memory of making the change rapidly fades, your mind adjusts and volume five becomes the new normal.

Adaptation doesn’t just happen for hearing. The brain networks that process all other forms of sensory information also pull the same trick. Why can’t you see the stars during the daytime? They are still there, right? You can’t see them because your visual system has adapted to the light levels from the sun, making the tiny variation in light that a star makes against the background of deep space invisible. Only after dark does your visual system adapt to a baseline at which the light difference created by a star is meaningful.

Just as adaptation applies across different senses, so too does the after-effect, the phenomenon that follows it. Once the constant stimulation your brain has adapted to stops, there is a short period when new stimuli appear distorted in the opposite way from the stimulus you’ve just been experiencing. A favourite example is the waterfall illusion. If you stare at a waterfall (here’s one) for half a minute and then look away, stationary objects will appear to flow upwards. You can even pause a video and experience the illusion of the waterfall going into reverse.

This is a phenomenon called the motion after-effect. You can get after-effects for colour perception too, or for plain lightness-darkness (which is why you sometimes see dark spots after you’ve looked at the sun or a camera flash).

After-effects also apply to hearing, which explains why a truly deafening silence comes immediately after the brain has become adapted to a high baseline of noise. We perceive this lack of sound as quieter than other silences for the same reason that the waterfall appears to suck itself upwards.
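This adaptation-and-after-effect story can be sketched as a toy model, with the baseline treated as a simple exponential moving average of recent stimulation. This is an illustrative sketch only – real neural adaptation is far more complex, and the update rate here is an arbitrary choice:

```python
def perceive(stimuli, rate=0.3):
    """Return perceived levels for a sequence of stimulus intensities.

    Perception is modelled as the difference between each stimulus and a
    moving baseline; the baseline drifts toward whatever is constant.
    """
    baseline = 0.0
    perceived = []
    for s in stimuli:
        perceived.append(s - baseline)      # judge change against the baseline
        baseline += rate * (s - baseline)   # adapt: baseline drifts toward s
    return perceived

# Ten moments of loud noise, then silence: the first silent moments read
# as negative values - "quieter than silence", a deafening silence.
levels = perceive([10] * 10 + [0] * 5)
```

In this sketch the perceived level drops below zero the moment the noise stops, then fades back toward zero as the baseline re-adapts – the same shape as the after-effect described above.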

So while it is true that all silences are physically the same, perhaps Spinal Tap lead guitarist Nigel Tufnel was onto something with his amplifier dials that go up to 11. When it comes to the way we perceive volume, it is sometimes possible to drop below zero.

This was my BBC Future from last weekend. The original is here.

What makes the ouija board move

The mystery isn’t a connection to the spirit world, but why we can make movements and yet not realise that we’re making them.

Ouija board cups and dowsing wands – just two examples of mystical items that seem to move of their own accord, when they are really being moved by the people holding them. The mystery is not one of a connection to the spirit world, but of why we can make movements and yet not realise that we’re making them.

The phenomenon is called the ideomotor effect and you can witness it yourself if you hang a small weight like a button or a ring from a string (ideally more than a foot long). Hold the end of the string with your arm out in front of you, so the weight hangs down freely. Try to hold your arm completely still. The weight will start to swing clockwise or anticlockwise in small circles. Do not start this motion yourself. Instead, just ask yourself a question – any question – and say that the weight will swing clockwise to answer “Yes” and anticlockwise for “No”. Hold this thought in mind, and soon, even though you are trying not to make any motion, the weight will start to swing in answer to your question.

Magic? Only the ordinary everyday magic of consciousness. There’s no supernatural force at work, just tiny movements you are making without realising. The string allows these movements to be exaggerated, the inertia of the weight allows them to be conserved and built on until they form a regular swinging motion. The effect is known as Chevreul’s Pendulum, after the 19th Century French scientist who investigated it.

What is happening with Chevreul’s Pendulum is that you are witnessing a movement (of the weight) without “owning” that movement as being caused by you. The same basic phenomenon underlies dowsing – where small movements of the hands cause the dowsing wand to swing wildly – or the Ouija board, where multiple people hold a cup and it seems to move of its own accord to answer questions by spelling out letters. The effect also underlies the sad case of “facilitated communication”, a fad whereby carers believed they could help severely disabled children communicate by guiding their fingers around a keyboard. Research showed that the carers – completely innocently – were typing the messages themselves, rather than interpreting movements from their charges.

The interesting thing about the phenomenon is what it says about the mind. That we can make movements that we don’t realise we’re making suggests that we shouldn’t be so confident in our other judgements about what movements we think are ours. Sure enough, in the right circumstances, you can get people to believe they have caused things that actually come from a completely independent source (something which shouldn’t surprise anyone who has reflected on the madness of people who claim that it only started raining because they forgot an umbrella).

You can read what this means for the nature of our minds in The Illusion of Conscious Will by psychologist Daniel Wegner, who sadly died last month. Wegner argued that our normal sense of owning an action is an illusion, or – if you will – a construction. The mental processes which directly control our movements are not connected to the same processes which figure out what caused what, he claimed. The situation is not that of a mental command-and-control structure like a disciplined army, whereby a general issues orders to the troops, they carry out the order and the general gets back a report saying “Sir! We did it. The right hand is moving into action!”. The situation is more akin to an organised collective, claims Wegner: the general can issue orders, and watch what happens, but he’s never sure exactly what caused what. Instead, just as with other people, our consciousness (the general in this metaphor) has to apply some principles to figure out when a movement is one we’ve made.

One of these principles is that cause has to be consistent with effect. If you think “I’ll move my hand” and your hand moves, you’re likely to automatically get the feeling that the movement was one you made. The principle is broken when the thought is different from the effect, such as with Chevreul’s Pendulum. If you think “I’m not moving my hand”, you are less inclined to connect any small movements you make with such large visual effects. This may explain why kids can shout “It wasn’t me!” after breaking something in plain sight. They thought to themselves “I’ll just give this a little push”, and when it falls off the table and breaks it doesn’t feel like something they did.

This is my column for BBC Future from a few weeks back. The original is here. It’s a Dan Wegner tribute column really – Rest in Peace, Dan

What makes an extravert?

Why do some people prefer adventure and the company of others, while others favour being alone? It’s all to do with how the brain processes rewards.

Will you spend Saturday night in a crowded bar, or curled up with a good book? Is your ideal holiday adventure sports with a large group of mates, or a more sedate destination with a few good friends? Maybe your answers to these questions are clear – you’d love one option and hate the other – or maybe you find yourself somewhere between the two extremes. Whatever your answers, the origin of your feelings may lie in how your brain responds to rewards.

We all exist somewhere on the spectrum between extroverts and introverts, and different circumstances can make us feel more one way or the other. Extraverts, a term popularised by psychologist Carl Jung at the beginning of the 20th Century, seem to dominate our world, either because they really are more common, or because they just make most of the noise. (The original spelling of “extravert” is now rarely used generally, but is still used in psychology.) This is so much the case that some have even written guides on how to care for introverts, and nurture their special talents.

A fundamental question remains – what makes an extrovert? Why are we all different in this respect, and what do extroverts have in common that makes them like they are? Now, with brain scans that can record activity from deep within the brain, and with genetic profiling that reveals the code behind the construction of the chemical signalling system used by the brain, we can begin to answer these decades-old questions.

In the 1960s, psychologist Hans Eysenck made the influential proposal that extroverts are defined by having a chronically lower level of arousal. Arousal, in the physiological sense, is the extent to which our bodies and minds are alert and ready to respond to stimulation. This varies for us all throughout the day (for example, as I move from asleep to awake, usually via a few cups of coffee) and in different circumstances (for example, cycling through the rush hour keeps you on your toes, heightening arousal, whereas a particularly warm lecture theatre tends to lower your arousal). Eysenck’s theory was that extroverts have just a slightly lower basic rate of arousal. The effect is that they need to work a little harder to reach the level of arousal that others find normal and pleasant without doing anything. Hence the need for company, and for seeking out novel experiences and risks. Conversely, highly introverted individuals find themselves overstimulated by things others might find merely pleasantly exciting or engaging. Hence they seek out quiet conversations about important topics, solitary pursuits and predictable environments.

Betting brains

More recently, this theory has been refined, linking extroversion to the function of dopamine, a chemical that plays an intimate role in the brain circuits which control reward, learning and responses to novelty. Could extroverts differ in how active their dopamine systems are? This would provide a neat explanation for the kinds of behaviours extroverts display, while connecting it to an aspect of brain function that we know quite a lot about for other reasons.

Researchers led by Michael Cohen, now of the University of Amsterdam, were able to test these ideas in a paper published in 2005. They asked participants to perform a gambling task while in the brain scanner. Before they went into the scanner, each participant filled out a personality profile and contributed a mouth swab for genetic analysis. Analysis of the imaging data showed how brain activity differed between extroverted volunteers and introverted ones. When the gambles they took paid off, the more extroverted group showed a stronger response in two crucial brain regions: the amygdala and the nucleus accumbens. The amygdala is known for processing emotional stimuli, and the nucleus accumbens is a key part of the brain’s reward circuitry and part of the dopamine system. The results confirm the theory – extroverts process surprising rewards differently.

When Cohen’s group looked at the genetic profiles of the participants, they found another difference in reward-related brain activity. Those volunteers who had a gene known to increase the responsiveness of the dopamine system also showed increased activity when they won a gamble.

So here we see part of the puzzle of why we’re all different in this way. Extroverts’ brains respond more strongly when gambles pay off, so naturally they are going to enjoy adventure sports, or social adventures like meeting new people, more. Part of this difference is genetic, resulting from the way our genes shape and develop our brains. Other results confirm that dopamine function is key to this – so, for example, genes that control dopamine function predict personality differences in how much people enjoy the unfamiliar and actively seek out novelty. Other results show how extroverts learn differently, in keeping with a heightened sensitivity to rewards due to their reactive dopamine systems.

Our preferences are shaped by the way our brains respond to the world. Maybe this little bit of biological psychology can help us all, whether introverts or extroverts, by allowing us to appreciate how and why others might like different things from us.

This is my BBC Future column from last week. The original is here

Why you think your phone is vibrating when it is not

Most of us experience false alarms with phones and, as Tom Stafford explains, they are a common and unavoidable part of healthy brain function.

Sensing phantom phone vibrations is a strangely common experience. Around 80% of us have imagined a phone vibrating in our pockets when it’s actually completely still. Almost 30% of us have also heard non-existent ringing. Are these hallucinations ominous signs of impending madness caused by digital culture?

Not at all. In fact, phantom vibrations and ringing illustrate a fundamental principle in psychology.

You are an example of a perceptual system, just like a fire alarm, an automatic door, or a daffodil bulb that must decide when spring has truly started. Your brain has to make a perceptual judgment about whether the phone in your pocket is really vibrating. And, analogous to a daffodil bulb on a warm February morning, it has to decide whether the incoming signals from the skin near your pocket indicate a true change in the world.

Psychologists use a concept called Signal Detection Theory to guide their thinking about the problem of perceptual judgments. Working through the example of phone vibrations, we can see how this theory explains why they are a common and unavoidable part of healthy mental function.

When your phone is in your pocket, the world is in one of two possible states: the phone is either ringing or not. You also have two possible states of mind: the judgment that the phone is ringing, or the judgment that it isn’t. Obviously you’d like to match these states in the correct way. True vibrations should go with “it’s ringing”, and no vibrations should go with “it’s not ringing”. Signal detection theory calls these faithful matches a “hit” and a “correct rejection”, respectively.

But there are two other possible combinations: you could mismatch true vibrations with “it’s not ringing” (a “miss”); or mismatch the absence of vibrations with “it’s ringing” (a “false alarm”). This second kind of mismatch is what’s going on when you imagine a phantom phone vibration.
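The four outcomes form a simple two-by-two table of world-state against judgment, which can be written out directly (a minimal sketch, with hypothetical argument names):

```python
def classify(phone_vibrating, judged_ringing):
    """Label one trial with its signal-detection outcome."""
    if phone_vibrating and judged_ringing:
        return "hit"
    if phone_vibrating and not judged_ringing:
        return "miss"
    if not phone_vibrating and judged_ringing:
        return "false alarm"            # the phantom phone vibration
    return "correct rejection"
```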

For situations where easy judgments can be made, such as deciding if someone says your name in a quiet room, you will probably make perfect matches every time. But when judgments are more difficult – if you have to decide whether someone says your name in a noisy room, or have to evaluate something you’re not skilled at – mismatches will occasionally happen. And these mistakes will be either misses or false alarms.

Alarm ring

Signal detection theory tells us that there are two ways of changing the rate of mismatches. The best way is to alter your sensitivity to the thing you are trying to detect. This would mean setting your phone to a stronger vibration, or maybe placing your phone next to a more sensitive part of your body. (Don’t do both or people will look at you funny.) The second option is to shift your bias so that you are more or less likely to conclude “it’s ringing”, regardless of whether it really is.

Of course, there’s a trade-off to be made. If you don’t mind making more false alarms, you can avoid making so many misses. In other words, you can make sure that you always notice when your phone is ringing, but only at the cost of experiencing more phantom vibrations.

These two features of a perceiving system – sensitivity and bias – are always present and independent of each other. The more sensitive a system is, the better, because it is more able to discriminate between true states of the world. But bias doesn’t have an obvious optimum. The appropriate level of bias depends on the relative costs and benefits of different matches and mismatches.

What does that mean in terms of your phone? We can assume that people like to notice when their phone is ringing, and that most people hate missing a call. This means their perceptual systems have adjusted their bias to a level that makes misses unlikely. The unavoidable cost is a raised likelihood of false alarms – of phantom phone vibrations. Sure enough, the same study that reported phantom phone vibrations among nearly 80% of the population also found that these types of mismatches were particularly common among people who scored highest on a novelty-seeking personality test. These people place the highest cost on missing an exciting call.
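Under the standard Gaussian assumptions of signal detection theory, this trade-off can be made concrete: evidence on each trial is treated as normally distributed, and shifting the decision criterion downwards (a more trigger-happy bias) cuts misses only by inflating false alarms. The means and criteria below are illustrative numbers, not measurements:

```python
from statistics import NormalDist

noise = NormalDist(mu=0.0, sigma=1.0)    # evidence when the phone is still
signal = NormalDist(mu=1.0, sigma=1.0)   # evidence when it really vibrates

def rates(criterion):
    """Miss rate and false-alarm rate for a given decision criterion."""
    miss = signal.cdf(criterion)              # signal evidence fell below criterion
    false_alarm = 1 - noise.cdf(criterion)    # noise evidence rose above criterion
    return miss, false_alarm

cautious = rates(1.0)   # high criterion: few false alarms, many misses
eager = rates(0.0)      # low criterion: few misses, many phantom vibrations
```

Whatever the criterion, the only way to reduce both error rates at once is to increase sensitivity – to pull the two distributions further apart.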

The trade-off between false alarms and misses also explains why we all have to put up with fire alarms going off when there isn’t a fire. It isn’t that the alarms are badly designed, but rather that they are very sensitive to smoke and heat – and biased to avoid missing a real fire at all costs. The outcome is a rise in the number of false alarms. These are inconvenient, but nowhere near as inconvenient as burning to death in your bed or office. The alarms are designed to err on the side of caution.

All perception is made up of information from the world and biases we have adjusted from experience. Feeling a phantom phone vibration isn’t some kind of pathological hallucination. It simply reflects our near-perfect perceptual systems trying their best in an uncertain and noisy world.

This article was originally published on BBC Future. The original is here.

‘digital dementia’ lowdown – from The Conversation

The Headlines

The Telegraph: Surge in ‘digital dementia’

The Daily Mail: ‘Digital dementia’ on the rise as young people increasingly rely on technology instead of their brain

Fox News: Is ‘digital dementia’ plaguing teenagers?

The Story

South Korea has the highest proportion of people with smartphones, at 67%. Nearly 1 in 5 use their phone for more than 7 hours a day, it is reported. Now a doctor in Seoul reports that teenagers are presenting with symptoms more normally found in those with head injury or psychiatric illness. He claims excessive smartphone use is leading to asymmetrical brain development and emotional stunting, and could “in as many as 15 per cent of cases lead to the early onset of dementia”.

What they actually did

Details from the news stories are sketchy. Dr Byun Gi-won, in Seoul, provided the quotes, but it doesn’t seem as if he has published any systematic research. Perhaps the comments are based on personal observation?

The Daily Mail quotes an article which reported that 14% of young people felt that their memory was poor. The Mail also contains the choice quote that “[Doctors] say that teenagers have become so reliant on digital technology they are no longer able to remember everyday details such as their phone numbers.”

How plausible is this?

It is extremely plausible that people should worry about their memories, or that doctors should find teenagers uncooperative, forgetful and inattentive. The key question is whether our memories, or teenagers’ cognitive skills, are worse than they ever have been – and if smart phones are to blame for this. The context for this story is a recurring moral panic about young people, new forms of technology and social organisation.

For a long time it was TV, and before that it was compulsory schooling (“taking kids out of their natural environment”). When the newspaper became common, people complained about the death of conversation. Plato even complained that writing augured the death of memory and understanding. The story also draws on the old left brain-right brain myth, which – despite being demonstrably wrong – will probably never die.

Tom’s take

Of course, it is possible that smartphones (or the internet, or TV, or newspapers, or writing) could damage our thinking abilities. But all the evidence suggests the opposite, with year-by-year and generation-by-generation rises found in IQ scores. One of the few revealing pieces of research in this area showed that people really are more forgetful of information they know can be easily retrieved, but actually better able to remember where to find that information again.

This isn’t dementia, but a completely normal process of relying on our environment to store information for us. You can see the moral panic driving these stories reflected in the use of that quote about teenagers not being able to remember phone numbers. So what! I can’t remember phone numbers any more – because I don’t need to. The only evidence for dementia in these stories is the lack of critical thought from the journalists reporting them.

Read more

Vaughan Bell on a media history of information scares.

Christian Jarrett on Why the Left-Brain Right-Brain Myth Will Probably Never Die

The Conversation

This article was originally published at The Conversation.
Read the original article.

The Connected Brain: Edinburgh


I’m giving a talk at the Edinburgh festival on August 9th, called The Connected Brain. It will be at Summerhall (Fringe Venue 26 during the festival), cost £3, and here is the blurb:

Headlines often ask if Facebook is making us shallow, or Google eroding our memories. In this talk we will look “under the hood” of research on how digital technology is affecting us. We will try to chart a course between moral panic and techno-utopianism to reveal the real risks of technology and show how we can cement the great opportunities that it presents for the human mind.

The talk will be similar to the one I did in London recently at the School of Life. Ben Martynoga wrote up some details of that talk, which you can find here. The ideas in the talk involve using some examples from the Mind Hacks book to illustrate some principles of how the mind works, looking at the extended mind hypothesis and reminding ourselves of some of the history of moral panics around information technologies, which Vaughan has written about so engagingly and often (thanks Vaughan!). The place I get to – where I’m at with my thinking, and where I hope to start a discussion with the audience – is that, rather than panic about technology making us dumb, distracted and alone, we need to identify the principles which will help us design technology which makes us smart, able to concentrate and empathetic.

So that’s me + Edinburgh + August the 9th. Link for tickets: The Connected Brain

Workout music and your supplementary motor cortex

Why do we like to listen to tunes when we exercise? Psychologist Tom Stafford searches for answers within our brains, not the muscles we are exercising.

Perhaps you have a favourite playlist for going to the gym or the park. Even if you haven’t, you’re certain to have seen joggers running along with headphones in their ears. Lots of us love to exercise to music, feeling like it helps to reduce effort and increase endurance. As a psychologist, the interesting thing for me is not just whether music helps when exercising, but how it helps.

One thing is certain: the answer lies within our brains, not the muscles we are exercising. A clue comes from an ingenious study, which managed to separate the benefits of practicing a movement from the benefits of training the muscle that does the movement. If you think that sounds peculiar, several studies have shown that the act of imagining making a movement produces significant strength gains. The benefit isn’t as big as if you practiced making the movement for real, but thinking about the movement can still account for over half of the benefit of practice. So asking people to carry out an imaginary practice task allows us to see the benefit of just thinking about a movement, and separates this from the benefit of making it.

Imaginary practice helps because it increases the strength of the signal sent from the movement areas of the brain to the muscles. Using electrodes you can record the size of this signal, and demonstrate that after imaginary practice people are able to send a stronger, more coherent signal to the muscles.

The signals to move the muscles start in an area of the brain called, unsurprisingly, the motor cortex. It’s in the middle, near the top. Part of this motor area is known as the supplementary motor cortex. Originally thought to be involved in more complex movements, this area has since been shown to be particularly active at the point we’re planning to make a movement, and especially crucial for the timing of these actions. So this specific part of the brain does a very important job during exercise: it is responsible for deciding exactly when to act. Once you’ve realised that a vital part of most sporting performance is not just how fast or how strong you can move, but the effort of deciding when to move, then you can begin to appreciate why music might be so helpful.

The benefits of music are largest for self-paced exercise – in other words, those sports where some of the work involved is in deciding when to act, as well as how to act. This means paced exercises like rowing or running, rather than un-paced exercises like judo or football. My speculation is that music helps us perform by taking over a vital piece of the task of moving: the rhythm travels in through our ears and down our auditory pathways to the supplementary motor area. There it joins forces with brain activity that is signalling when to move, helping us to keep pace by providing an external timing signal. Or to use a sporting metaphor, it not only helps us out of the starting blocks, but helps to keep us going until we reach the line.

Of course there are lots of other reasons we might exercise to music. For example, a friend of mine who jogs told me: “I started running to music so I didn’t have to listen to my own laboured breathing.” He might well have started for that reason, but now I’ll bet the rhythm of the music he listens to helps him keep pace through his run. As one song might have put it, music lets us get physical.

This is my BBC Future column from last week. The original had the much more accessible title of “The Psychology of Workout music“, but mindhacks.com is our site (dammit) and I can re-title how I want.

Is social psychology really in crisis?

My latest ‘behind the headlines’ column for The Conversation. Probably all old news for you wised-up mindhacks.com readers, but here you go:

The headlines

Disputed results a fresh blow for social psychology

Replication studies: Bad copy

The story

Controversy is simmering in the world of psychology research over claims that many famous effects reported in the literature aren’t reliable, or may even not exist at all.

The latest headlines follow the publication of experiments which failed to replicate a landmark study by Dutch psychologist Ap Dijksterhuis. These experiments are examples of what psychologists call “social priming”, a phenomenon where people who are exposed to ideas unconsciously incorporate them into their behaviour. So people who are reminded of old age are reported to walk slower, and people asked to think about university professors do better on a Trivial Pursuit knowledge test.

What they actually did

The first of Dijksterhuis’ original experiments asked people to think about the typical university professor and list on paper their appearance, lifestyle and behaviours. After this they answered 42 questions taken from Trivial Pursuit.

The experiment found that people who had thought about professors scored 10% higher than people who hadn’t been primed in this way. In this latest report, David Shanks, Head of the Division of Psychology and Language Sciences at University College London and colleagues tried to replicate this effect in nine separate experiments. They didn’t find the effect in any of their experiments, which they suggest calls into question the validity of the original research.

How plausible is it?

It’s extremely plausible that people are influenced by recent activities and thoughts – the concept of priming is beyond question, having been supported by decades of research.

What’s less established is whether these effects are really “unconscious” (whatever that means) and whether sophisticated concepts like intelligence can really worm their way into our behaviour in such a profound way.

Tom’s take

The headline reporting of this spat is misleading – there’s nothing worrying about disputed results for social psychology. The process of affirming, disputing and denying results is a normal part of science. What is worrying is that this failed replication comes on top of other failed replications of famous social priming results and after the discovery of some high profile frauds in psychology, such as Diederik Stapel.

This has led some to talk of a crisis in experimental social psychology, centring on whether standards of research in the area have slipped enough to allow false results to become easily accepted.

The whole situation is a wonderful opportunity to see “under the hood” of science and see how it really works (rather than how we’re taught it should work). Everything is in the mix: fundamental conceptual disagreements (about the nature of unconscious processing), disciplinary tribalism (between cognitive psychologists and social psychologists), big dog personalities and emotions running high, academic fashion creating a scientific “bubble” (this is that bubble bursting) and soul-searching questions about whether our methods as researchers are fit for purpose.

My guess is that, when the dust settles, we’ll find out that priming effects can work – but they aren’t as strong or common as reported. I have faith that most effects reported in the literature will turn out to be true in some form – the vast majority of psychologists are honest and methodical – but we also know for sure that some effects will turn out to have been chimeras; we just can’t say in advance which.

The really interesting aspect of the debate, from my point of view, is going to be clarifying exactly how unconscious these effects are. My prejudice is that social psychologists have been overly casual about using that word, using it in circumstances which would contradict the way most people use it, whether they’re psychologists or not.

Read more

Shanks, D. R., Newell, B. R., Lee, E. H., Balakrishnan, D., Ekelund, L., Cenac, Z., Fragkiski, K. & Moore, C. (2013). Priming Intelligent Behavior: An Elusive Phenomenon. PLoS ONE, 8(4), e56515.

Ed Yong on Bargh’s response to another failure to replicate

Rolf Zwaan on the theory of social priming

Rolf Zwaan on replication done right

Tom Stafford does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

The Conversation

This article was originally published at The Conversation.
Read the original article.

When giving reasons leads to worse decisions

We’re taught from childhood how important it is to explain how we feel and to always justify our actions. But does giving reasons always make things clearer, or could it sometimes distract us from our true feelings?

One answer came from a study led by psychology professor Timothy Wilson at the University of Virginia, which asked university students to report their feelings, either with or without being asked to provide reasons. What they found revealed just how difficult it can be to reliably discern our feelings when justifying our decisions.

Participants were asked to evaluate five posters of the kind that students might put up in their bedrooms. Two of the posters were of art – one was Monet’s water lilies, the other Van Gogh’s irises. The other three posters were a cartoon of animals in a balloon and two posters of photographs of cats with funny captions.

All the students had to evaluate the posters, but half the participants were asked to provide reasons for liking or disliking them. (The other half were asked why they chose their degree subject as a control condition.) After they had provided their evaluations the participants were allowed to choose a poster to take home.

So what happened? The control group rated the art posters positively (an average score of around 7 out of 9) and felt pretty neutral about the humorous posters (an average score of around 4 out of 9). When given a choice of one poster to take home, 95% of them chose one of the art posters. No surprises there: the experimenters had already established that, in general, most students preferred the art posters.

But the group of students who had to give reasons for their feelings acted differently. This “reasons” group liked the art posters less (averaging about 6 out of 9) and the humorous posters more (about 5 to 6 out of 9). Most of them still chose an art poster to take home, but it was a far lower proportion – 64% – than the control group. That means people in this group were about seven times more likely to take a humorous poster home compared with the control group.
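That “seven times” figure is just the ratio of the two poster-choice percentages reported above – quick arithmetic to check it:

```python
# Share of each group choosing a humorous (non-art) poster,
# using the percentages quoted in the column.
control_humorous = 1 - 0.95   # 5% of the control group
reasons_humorous = 1 - 0.64   # 36% of the "reasons" group

ratio = reasons_humorous / control_humorous
print(round(ratio, 1))  # 7.2 -- roughly seven times more likely
```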

Here’s the twist. Some time after the tests, at the end of the semester, the researchers rang each of the participants and asked them questions about the poster they’d chosen: Had they put it up in their room? Did they still have it? How did they feel about it? How much would they be willing to sell it for? The “reasons” group were less likely to have put their poster up, less likely to have kept it up, less satisfied with it on average and were willing to part with it for a smaller average amount than the control group. Over time their reasons and feelings had shifted back in line with those of the control group – they didn’t like the humorous posters they had taken home, and so were less happy about their choice.

Trivial pursuit

The source of this effect, according to the researchers, is that when prompted to give reasons the participants focused on things that were easy to verbalise; they focused on the bright colours, or funny content of the humorous posters. It’s less easy to say exactly what’s pleasing about the more complex art classics. This was out of step with their feelings, so in the heat of the moment participants adjusted their feelings (a process I’ve written about before, called cognitive dissonance). After having the posters on their wall, the participants realised that they really did prefer the art posters all along.

The moral of the story isn’t that intuition is better than reason. We all know that in some situations our feelings are misleading and it is better to think about what we’re doing. But this study shows the reverse – in some situations introspection can interfere with using our feelings as a reliable guide to what we should do.

And this has consequences in adulthood, where it can be a struggle to discern when introspection is the best strategy. The researchers who carried out this study suggest that the distorting effect of reason-giving is most likely to occur in situations where people aren’t experts – most of the students who took part in the study didn’t have a lot of experience of thinking or talking about art. When experts are asked to give reasons for their feelings, research has found that their feelings aren’t distorted in the same way – their intuitions and explicit reasoning are in sync.

You might also see the consequences of this regularly in your line of work. Everybody knows that the average business meeting will spend the most time discussing trivial things, an effect driven by the ease with which each member of the meeting can chip in about something as inconsequential as what colour to paint the bike sheds, or when to plan a meeting to discuss the conclusions of that meeting. When we’re discussing complex issues, it isn’t so easy to make a contribution. The danger, of course, is that in a world which relies on justification and measurement of everything, those things that are most easily justified and measured will get priority over those things which are, in fact, most justified and important.

This is my BBC Future column from last week. The original is here. For what it is worth, I think the headings it received there are very distracting from the real implications of this work. If you’ve got this far, you can work out why for yourself!

Does brain stimulation make you better at maths?


The Headlines

Brain stimulation promises “long-lasting” maths boost

Mild electric shocks to brain may help students solve maths problems

Electrical brain boost can make you better at maths

What they actually did

Researchers led by Roi Cohen Kadosh at the University of Oxford trained people on two kinds of maths skills, rote learning simple arithmetic problems and practicing more varied calculations.

During this learning process they applied small and continually varying electrical currents to the scalp, above the temples. A control group wore the electrodes but didn’t receive any current. Compared to the controls, the people who practiced with the current turned on performed faster on the maths problems.

Even more amazing, when a subset of the participants were brought back six months later, those who had received the electrical treatment were still significantly faster, albeit only for the harder, more varied, calculations.

How plausible is it?

The particular technique these researchers used, called Transcranial Random Noise Stimulation (TRNS), is a recent invention, but the use of electrical stimulation to affect brain activity has a long history.

The brain is an electrochemical machine, so there’s every reason to think that electrical stimulation should affect its function. The part of the brain the researchers stimulated – the dorsolateral prefrontal cortex – is known to be involved in complex tasks like learning, decision making and calculation.

What’s amazing is that such a gross intervention as applying a current via electrodes, to such a large part of the brain, could have a specific (and beneficial) effect on mathematical ability.

Tom’s take

This is technically impressive work, done by highly capable researchers at well respected institutions and published in a prestigious journal. Still, there are a few warning signs that make me nervous about how reliable the result is.

  1. The key result showing the long-lasting nature of the effect is based on just six people who received the treatment (out of the 12 originally treated and the 12 controls). Even worse, the statistical test they rely on would have come up as “no effect” if they had done it the conventional way. While the result is based on such small numbers it has to remain as “promising” at best, rather than confirmed.

  2. The researchers recorded percentage correct on the maths problems, as well as speed of responding, but they only discuss the speed of responding. The graphs of errors make it look like the people who got faster also made more mistakes, which doesn’t count as an improvement in my book. Why no combined analysis of speed and accuracy?

  3. We don’t know which part of the brain this effect is due to. Although they did record brain activity and show that it changes in the area they were interested in, the basic comparison is still “doing something to the brain” vs “doing nothing to the brain” (thanks to Vince Walsh for pointing this out). It is hard to make any solid conclusions on how this technique might be having an effect.

  4. There was no systematic check that participants were truly ignorant of which group they were in, although the researchers believe this to be the case. If participants knew when their brain was being stimulated then the change in performance could have been due to motivation or a desire to please rather than any specific manipulation of brain function.
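The first of these worries – how noisy an estimate from six people per group can be – is easy to make concrete with a simulation (illustrative code, not from the paper): even when a treatment does nothing at all, tiny groups routinely produce large apparent effects.

```python
import random
import statistics

random.seed(42)

def mean_diff(n):
    """Difference in mean scores between a simulated treatment and
    control group of size n, when the true effect is exactly zero."""
    treated = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

# How much the estimated "effect" bounces around across 2,000
# simulated experiments, for small and larger groups.
spread_small = statistics.stdev(mean_diff(6) for _ in range(2000))
spread_large = statistics.stdev(mean_diff(60) for _ in range(2000))

print(round(spread_small, 2))  # close to the theoretical sqrt(2/6), about 0.58
print(round(spread_large, 2))  # close to the theoretical sqrt(2/60), about 0.18
```

With six per group, purely chance differences are more than half a standard deviation wide – easily big enough to masquerade as a real effect.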

Putting these worries aside, we’re not going to see this technique used in the classroom any time soon, even if it holds up. Suppose the technique is reliable, and we really can improve people’s basic maths skills with a bit of electrical stimulation – we’d still hesitate to deploy it. Does it affect any other skills, perhaps taking resources away from them?

Competition is a basic principle of brain development, so it isn’t implausible that there would be a cost to overclocking the brain like this. There might be all sorts of minor side effects, such as increased fatigue or poorer attention, which would mean that stimulation wasn’t pure benefit. Or – also plausible – perhaps more rapid learning of the basics would mean that skills which build on those basics would be harder to learn (sort of like screenburn for memories).

I’m not worried for the participants in this research, but I’d still want a lot more questions answered before I started setting electrical stimulation along with homework.

Read more

The original paper: Snowball, A., Tachtsidis, I., Popescu, T., Thompson, J., Delazer, M., Zamarian, L., Zhu, T., Cohen Kadosh, R. (2013). Long-Term Enhancement of Brain Function and Cognition Using Cognitive Training and Brain Stimulation. Current Biology. doi:10.1016/j.cub.2013.04.045

Ed Yong on the dangers of neuroscience with small data sets.

Dorothy Bishop has collected some reactions to misleading headlines about ‘shocks’.

Tom Stafford does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

The Conversation

This article was originally published at The Conversation.
Read the original article.

Why you might prefer more pain

When is the best treatment for pain more pain? When you’re taking part in an experiment published by a Nobel prize winner and one of the leading lights in behavioural psychology, that is.

The psychologist in question is Daniel Kahneman; the experiment is described by its self-explanatory title: When More Pain Is Preferred to Less: Adding a Better End. In the study, Kahneman and colleagues measured the pain participants felt by asking them to put their hands in ice-cold water twice (one trial for each hand). In one trial, the water was at 14C (59F) for 60 seconds. In the other trial the water was 14C for 60 seconds, but then rose slightly and gradually to about 15C over an additional 30-second period.

Both trials were equally painful for the first sixty seconds, as indicated by a dial participants had to adjust to show how they were feeling. On average, participants’ discomfort started out at the low end of the pain scale and steadily increased. When people experienced an additional thirty seconds of slightly less cold water, discomfort ratings tended to level off or drop.

Next, the experimenters asked participants which kind of trial they would choose to repeat if they had to. You’ve guessed the answer: nearly 70% of participants chose to repeat the 90-second trial, even though it involved 30 extra seconds of pain. Participants also said that the longer trial was less painful overall, less cold, and easier to cope with. Some even reported that it took less time.

In case you think this is a freakish outcome of some artificial lab scenario, Kahneman saw a similar result when he interviewed patients who had undergone a colonoscopy examination – a procedure universally described as being decidedly unpleasant. Patients in Kahneman’s study group had colonoscopies that lasted from four to 69 minutes, but the duration of the procedure did not predict how they felt about it afterwards. Instead, it was the strength of their discomfort at its most intense, and the level of discomfort they felt towards the end of the procedure.

These studies support what Kahneman called the Peak-End rule – that our perceptions about an experience are determined by how it feels at its most intense, and how it feels at the end. The actual duration is irrelevant. It appears we don’t rationally calculate each moment of pleasure or pain using some kind of mental ledger. Instead, our memories filter how we feel about the things we’ve done and experienced, and our memories are defined more by the moments that seem most characteristic – the peaks and the finish – than by how we actually felt most of the time during the experience.
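The Peak-End rule is simple enough to state as a toy formula (the ratings below are invented for illustration, not taken from the study): remembered discomfort is the average of the worst moment and the last moment, and total duration never enters the calculation.

```python
def peak_end_score(discomfort):
    """Remembered discomfort under the Peak-End rule: the mean of the
    worst moment and the final moment. Duration is ignored entirely."""
    return (max(discomfort) + discomfort[-1]) / 2

# Hypothetical discomfort ratings per 10 seconds (0 = none, 10 = worst).
short_trial = [2, 4, 5, 6, 7, 8]            # 60s of 14C water
long_trial  = [2, 4, 5, 6, 7, 8, 6, 5, 4]   # same 60s, then 30s of milder water

# The longer trial contains objectively more total discomfort...
print(sum(short_trial), sum(long_trial))  # 32 vs 47

# ...yet the Peak-End rule predicts it is remembered as less bad,
# because it ends on a milder note.
print(peak_end_score(short_trial))  # 8.0
print(peak_end_score(long_trial))   # 6.0
```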

Kahneman wondered whether this finding meant that surgeons should extend painful operations needlessly to leave patients with happier memories, even though it would mean inflicting more pain overall. Others have asked whether this means that the most important thing about a holiday is that it includes some great times, rather than how long you are away for. (It certainly makes you think it would be worth finding a way to avoid the typical end to a holiday – queues, lugging heavy luggage around and jetlag.)

But I think the most important lesson of the Peak-End experiments is something else. Rather than saying that the duration isn’t important, the rule tells me that it is just as important to control how we mentally package our time. What defines an “experience” is somewhat arbitrary. If a weekend break where you forget everything can be as refreshing as a two-week holiday then maybe a secret to a happy life is to organise your time so it is broken up into as many distinct (and enjoyable) experiences as possible, rather than being just an unbroken succession of events which bleed into one another in memory.

All I need to do now is find the time to take a holiday and test my theory.

This is my BBC Future column, originally published last week. The original is here.

Did the eyes really stare down bicycle crime in Newcastle?

This is the first fortnightly column I'll be writing for The Conversation, a creative commons news and opinion website that launched today. The site has been set up by a number of UK universities and bodies such as the Wellcome Trust, Nuffield Foundation and HEFCE, following the successful model of the Australian version of the site. Their plan is to unlock the massive amount of expertise held by UK academics and inject it into the public discourse. My plan is to give some critical commentary on headlines from the week's news which focus on neuroscience and psychology. If you've any headlines you'd like critiquing, let me know!



The headlines

Staring eyes ‘deter’ Newcastle University bike thieves

The poster that’s deterring bike thieves

The sign that cuts bike theft by 60%

The story

A picture of a large pair of eyes triggers feelings of surveillance in potential thieves, making them less likely to break the rules.

What they actually did

Researchers put signs with a large pair of eyes and the message “Cycle thieves: we are watching you” by the bike racks at Newcastle University.

They then monitored bike thefts for two years and found a 62% drop in thefts at locations with the signs. There was a 65% rise in thefts at locations on campus without signs.
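Those two percentages describe displacement rather than outright prevention. With some made-up baseline counts (the real rack-by-rack figures are in the paper), you can see how a 62% drop at signed racks and a 65% rise elsewhere can leave the campus-wide total almost unchanged:

```python
# Hypothetical baseline thefts per year at each set of racks
# (illustrative numbers only, not from the study).
signed_before, unsigned_before = 40, 39

signed_after = signed_before * (1 - 0.62)      # 62% drop at signed racks
unsigned_after = unsigned_before * (1 + 0.65)  # 65% rise at unsigned racks

total_before = signed_before + unsigned_before
total_after = signed_after + unsigned_after

# The campus-wide total barely moves: the thefts have been relocated.
print(round(total_before, 1), round(total_after, 1))
```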

How plausible is it?

A bunch of studies have previously shown that subtle clues which suggest surveillance can alter moral behaviour. The classic example is the amount church-goers might contribute to the collection dish.

This research fits within the broad category of findings which show our decisions can be influenced by aspects of our environment, even those which shouldn’t logically affect them.

The signs are being trialled by Transport for London, and are a good example of the behavioural “nudges” promoted by the Cabinet Office’s (newly privatised) Behavioural Insight Unit. Policy makers love these kinds of interventions because they are cheap. They aren’t necessarily the most effective way to change behaviour, but they have a neatness and “light touch” which means we’re going to keep hearing about this kind of policy.

Tom’s take

The problem with this study is that the control condition was not having any sign above bike racks – so we don’t know what it was about the anti-theft sign that had an effect. It could have been the eyes, or it could have been the message “we are watching you”. Previous research, cited in the study, suggests both elements have an effect.

The effect is obviously very strong for location, but it isn’t very strong in time. Thieves moved their thefts to nearby locations without signs – suggesting that any feelings of being watched didn’t linger. We should be careful about assuming that anything was working via the unconscious or irrational part of the mind.

If I were a bike thief and someone was kind enough to warn me that some bikes were being watched, and (by implication) others weren’t, I would rationally choose to do my thieving from an unwatched location.

Another plausible interpretation is that bike owners who were more conscious about security left their bikes at the signed locations. Such owners might have better locks and other security measures. Careless bike owners would ignore the signs, and so be more likely to park at unsigned locations and subsequently have their bikes nicked.

Read more

Nettle, D., Nott, K., & Bateson, M. (2012) “Cycle Thieves, We Are Watching You”: Impact of a Simple Signage Intervention against Bicycle Theft. PloS one, 7(12), e51738.

Tom Stafford does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

The Conversation

This article was originally published at The Conversation.
Read the original article.

The ‘unnamed feeling’ named ASMR

Here’s my BBC Future column from last week. It’s about the so-called Autonomous Sensory Meridian Response, which didn’t have a name until 2010 and which I’d never heard of until 2012. Now I’m finding out that it is surprisingly common. The original is here.

It’s a tightening at the back of the throat, or a tingling around your scalp, a chill that comes over you when you pay close attention to something, such as a person whispering instructions. It’s called the autonomous sensory meridian response, and until 2010 it didn’t exist.

I first heard about the autonomous sensory meridian response (ASMR) from British journalist Rhodri Marsden. He had become mesmerised by intentionally boring videos he found on YouTube, things like people explaining how to fold towels, running hair dryers or role-playing interactions with dentists. Millions of people were watching the videos, reportedly for the pleasurable sensations they generated.

Rhodri asked my opinion as a psychologist. Could this be a real thing? “Sure,” I said. If people say they feel it, it has to be real – in some form or another. The question is what kind of real it is. Are all these people experiencing the same thing? Is it learnt, or something we are born with? How common is it? Those are the kind of questions we’d ask as psychologists. But perhaps the most interesting thing about ASMR is what happened to it before psychologists put their minds to it.

Presumably the feeling has existed for all of human history. Each person discovered the experience, treasured it or ignored it, and kept the feeling to themselves. That there wasn’t a name for it until 2010 suggests that most people who had this feeling hadn’t talked about it. It’s amazing that it got this far without getting a name. In scientific terms, it didn’t exist.

But then, of course, along came the 21st Century and, like they say, even if you’re one in a million there’s thousands of you on the internet. Now there’s websites, discussion forums, even a Wikipedia page. And a name. In fact, many names – “Attention Induced Euphoria”, “braingasm”, or “the unnamed feeling” are all competing labels that haven’t caught on in the same way as ASMR.

This points to something curious about the way we create knowledge, illustrated by a wonderful story about the scientific history of meteorites. Rocks falling from the sky were considered myths in Europe for centuries, even though stories of their fiery trails across the sky, and actual rocks, were widely, if irregularly, reported. The problem was that the kind of people who saw meteorites and subsequently collected them tended to be the kind of people who worked outdoors – that is, farmers and other country folk. You can imagine the scholarly minds of the Renaissance didn’t give their testimonies much weight. Then in 1794 a meteorite shower fell on the town of Siena in Italy. Not only was Siena a town, it was a town with a university. The testimony of the townsfolk, including well-to-do church ministers and tourists, was impossible to deny, and the reports were written up in scholarly publications. Siena played a crucial part in the process of myth becoming fact.

Where early science required authorities and written evidence to turn myth into fact, ASMR shows that something more democratic can achieve the same result. Discussion among ordinary people on the internet provided validation that the unnamed feeling was a shared one. Suddenly many individuals who might have thought of themselves as unusual were able to recognise that they were a single group, with a common experience.

There is a blind spot in psychology for individual differences. ASMR has some similarities with synaesthesia (the merging of the senses where colours can have tastes, for example, or sounds produce visual effects). Both are extremes of normal sensation, which exist for some individuals but not others. For many years synaesthesia was a scientific backwater, a condition viewed as unproductive to research, perhaps just the product of people’s imagination rather than a real sensory phenomenon. This changed when techniques were developed that precisely measured the effects of synaesthesia, demonstrating that it was far more than people’s imagination. Now it has its own research community, with conferences and papers in scientific journals.

Perhaps ASMR will go the same way. Some people are certainly pushing for research into it. As far as I know there are no systematic scientific studies on ASMR. Since I was quoted in that newspaper article, I’ve been contacted regularly by people interested in the condition and wanting to know about research into it. When people hear that their unnamed feeling has a name they are drawn to find out more, they want to know the reality of the feeling, and to connect with others who have it. Something common to all of us wants to validate our inner experience by having it recognised by other people, and in particular by the authority of science. I can’t help – almost all I know about ASMR is in this column you are reading now. For now all we have is a name, but that’s progress.

Race perception isn’t automatic

Last week’s column for BBC Future describes a neat social psychology experiment from an unlikely source. Three evolutionary psychologists reasoned that claims that we automatically categorise people by ethnicity must be wrong. Here’s how they set out to prove it. The original column is here.

For years, psychologists thought we instantly label each other by ethnicity. But one intriguing study proposes this is far from inevitable, with obvious implications for tackling racism.

When we meet someone we tend to label them in certain ways. “Tall guy” you might think, or “Ugly kid”. Lots of work in social psychology suggests that some categorisations spring faster to mind than others. So fast, in fact, that they can be automatic. Sex is an example: we tend to notice if someone is a man or a woman, and remember that fact, without any deliberate effort. Age is another example. You can see this in the way people talk about others. If you said you went to a party and met someone, most people wouldn’t let you continue with your story until you said if it was a man or a woman, and there’s a good chance they’d want to know how old they were too.

Unfortunately, a swathe of evidence from the 1980s and 1990s also seemed to suggest that race is an automatic categorisation, in that people effortlessly and rapidly identified and remembered which ethnic group an individual appeared to belong to. “Unfortunate”, because if perceiving race is automatic then it lays a foundation for racism, and appears to put a limit on efforts to educate people to be “colourblind”, or put aside prejudices in other ways.

Over a decade of research failed to uncover experimental conditions that could prevent people instinctively categorising by race, until a trio of evolutionary psychologists came along with a very different take on the subject. Now, it seems only fair to say that evolutionary psychologists have a mixed reputation among psychologists. As a flavour of psychology it has been associated with political opinions that tend towards the conservative. Often, scientific racists claim to base their views on some jumbled version of evolutionary psychology (scientific racism is racism dressed up as science, not racism based on science, in case you wondered). So it was a delightful surprise when researchers from one of the world centres for evolutionary psychology intervened in the debate on social categorisation, by conducting an experiment they claimed showed that labelling people by race was far less automatic and inevitable than all previous research seemed to show.

Powerful force

The research used something called a “memory confusion protocol”. This works by asking experiment participants to remember a series of pictures of individuals, who vary along various dimensions – for example, some have black hair and some blond, some are men, some women, etc. When participants’ memories are tested, the errors they make reveal something about how they judged the pictures of individuals – what sticks in their mind most and least. If a participant more often confuses a black-haired man with a blond-haired man, it suggests that the category of hair colour is less important than the category of gender (and similarly, if people rarely confuse a man for a woman, that also shows that gender is the stronger category).
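The logic of the protocol can be illustrated with a small sketch. This is not the researchers’ actual analysis code – the data and the two dimensions (gender and hair colour) are hypothetical, chosen to mirror the example above – but it shows how tallying which confusion errors cross a category boundary gives an index of that category’s strength: the rarer the boundary-crossing errors, the stronger the category.

```python
# Sketch of the memory-confusion logic with made-up data. Each error
# records the attributes of the person shown and the person the
# participant mixed them up with.
errors = [
    ({"gender": "m", "hair": "black"}, {"gender": "m", "hair": "blond"}),
    ({"gender": "m", "hair": "blond"}, {"gender": "m", "hair": "black"}),
    ({"gender": "f", "hair": "black"}, {"gender": "f", "hair": "blond"}),
    ({"gender": "f", "hair": "blond"}, {"gender": "m", "hair": "blond"}),
]

def cross_category_rate(errors, dim):
    """Fraction of confusion errors that cross the given category boundary.

    A low rate means participants rarely confused people across that
    boundary, i.e. the category was strongly encoded.
    """
    crossed = sum(1 for shown, confused_with in errors
                  if shown[dim] != confused_with[dim])
    return crossed / len(errors)

for dim in ("gender", "hair"):
    print(f"{dim}: {cross_category_rate(errors, dim):.0%} of errors cross the boundary")
```

With this toy data only one error in four crosses the gender boundary, while three in four cross the hair-colour boundary – the pattern that would lead a researcher to conclude gender is the stronger category.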

Using this protocol, the researchers tested the strength of categorisation by race, something all previous efforts had shown was automatic. The twist they added was to throw in another powerful psychological force – group membership. People had to remember individuals who wore either yellow or grey basketball shirts, and whose pictures were presented alongside statements indicating which team they were in. Without the shirts, the pattern of errors was clear: participants automatically categorised the individuals by their race (in this case: African American or Euro American). But with the coloured shirts, this automatic categorisation didn’t happen: people’s errors revealed that team membership had become the dominant category, not the race of the players.

It’s important to understand that the memory test was both a surprise – participants didn’t know it was coming up – and an unobtrusive measure of racial categorising. Participants couldn’t guess that the researchers were going to make inferences about how they categorised people in the pictures – so if they didn’t want to appear to perceive people on the basis of race, it wouldn’t be clear how they should change their behaviour to do this. Because of this we can assume we have a fairly direct measure of their real categorisation, unbiased by any desire to monitor how they appear.

So despite what dozens of experiments had appeared to show, this experiment created a situation where categorisation by race faded into the background. The explanation, according to the researchers, is that race is only important when it might indicate coalitional information – that is, whose team you are on. In situations where race isn’t correlated with coalition, it ceases to be important. This, they claim, makes sense from an evolutionary perspective. For most of our ancestors’ history, age and gender would have been important predictors of another person’s behaviour, but race wouldn’t – since most people lived in areas with no differences as large as the ones we associate with “race” today (a concept, incidentally, which has little currency among human biologists).

Since the experiment was published, the response from social psychologists has been muted. But supporting evidence is beginning to be reported, suggesting that the finding will hold. It’s an unfortunate fact of human psychology that we are quick to lump people into groups, even on the slimmest evidence. And once we’ve identified a group, it also seems automatic to jump to conclusions about what they are like. But this experiment suggests that although perceiving groups on the basis of race might be easy, it is far from inevitable.