Research Digest posts, #1: A self-fulfilling fallacy?

This week I will be blogging over at the BPS Research Digest. The Digest was written for over ten years by psychology-writer extraordinaire Christian Jarrett, and I’m one of a series of guest editors during the transition period to a new permanent editor.

My first piece is now up, and here is the opening:

Lady Luck is fickle, but many of us believe we can read her mood. A new study of one year’s worth of bets made via an online betting site shows that gamblers’ attempts to predict when their luck will turn have some unexpected consequences.

Read the rest over at the Digest. I’ll post about the other stories I’ve written as they go up.

Why all babies love peekaboo

Peekaboo is a game played all over the world, crossing language and cultural barriers. Why is it so universal? Perhaps because it’s such a powerful learning tool.

One of us hides our eyes and then slowly reveals them. This causes peals of laughter from a baby, which causes us to laugh in turn. Then we do it again. And again.

Peekaboo never gets old. Not only does my own infant daughter seem happy to do it for hours, but when I was young I played it with my mum (“you chuckled a lot!” she confirms by text message) and so on back through the generations. We are all born with unique personalities, in unique situations and with unique genes. So why is it that babies across the world are constantly rediscovering peekaboo for themselves?

Babies don’t read books, and they don’t know that many people, so the surprising durability and cultural universality of peekaboo is perhaps a clue that it taps into something fundamental in their minds. No mere habit or fashion, the game can help show us the foundations on which adult human thought is built.

An early theory of why babies enjoy peekaboo is that they are surprised when things come back after being out of sight. This may not sound like a good basis for laughs to you or me, with our adult brains, but to appreciate the joke you have to realise that for a baby, nothing is given. They are born into a buzzing confusion, and gradually have to learn to make sense of what is happening around them. You know that when you hear my voice, I’m usually not far behind, or that when a ball rolls behind a sofa it still exists, but think for a moment how you came by this certainty.

The Swiss developmental psychologist Jean Piaget called this principle ‘object permanence’ and suggested that babies spent the first two years of their lives working it out. And of course those two years are prime peekaboo time. Looked at this way, the game isn’t just a joke, but helps babies test and re-test a fundamental principle of existence: that things stick around even when you can’t see them.

Maybe evolution fixed it so that babies enjoy peekaboo for its own sake, since it proved useful in cognitive development, but I doubt it. Something deeper than mere education is going on.

Surprise element

Peekaboo uses the fundamental structure of all good jokes – surprise, balanced with expectation. Researchers Gerrod Parrott and Henry Gleitman showed this in tests involving a group of six-, seven- and eight-month-olds which sound like more fun than a psychology experiment should be. Most of the time the peekaboo game proceeded normally; on occasion, however, the adult hid and reappeared as a different adult, or hid and reappeared in a different location. Videos of the infants were rated by independent observers for how much the babies smiled and laughed.

On these “trick trials” the babies smiled and laughed less, even though the outcome was more surprising. What’s more, the difference between their enjoyment of normal peekaboo and trick-peekaboo increased with age (with the eight-month-olds enjoying the trick trials least). The researchers’ interpretation is that the game relies on being able to predict the outcome. As the babies get older their prediction gets stronger, so the discrepancy with what actually happens gets larger – they find it less and less funny.

The final secret to the enduring popularity of peekaboo is that it isn’t actually a single game. As the baby gets older their carer lets the game adapt to the baby’s new abilities, allowing both adult and infant to enjoy a similar game done in different ways. The earliest version of peekaboo is simple looming, where the carer announces they are coming with their voice before bringing their face into close focus for the baby. As the baby gets older they can enjoy the adult hiding and reappearing, but after a year or so they can graduate to take control by hiding and reappearing themselves.

In this way peekaboo can keep giving, allowing a perfect balance of what a developing baby knows about the world, what they are able to control and what they are still surprised by. Thankfully we adults enjoy their laughter so much that the repetition does nothing to stop us enjoying endless rounds of the game ourselves.

This is my BBC Future column from last week. The original is here.

Does the unconscious know when you’re being lied to?

The headlines
BBC: Truth or lie – trust your instinct, says research

British Psychological Society: Our subconscious mind may detect liars

Daily Mail: Why you SHOULD go with your gut: Instinct is better at detecting lies than our conscious mind

The Story
Researchers at the University of California, Berkeley, have shown that we have the ability to unconsciously detect lies, even when we’re not able to explicitly say who is lying and who is telling the truth.

What they actually did
The team, led by Leanne ten Brinke of the Haas School of Business, created a set of videos using a “mock high-stakes crime scenario”. This involved asking 12 volunteers to be filmed while being interrogated about whether they had taken US$100 from the testing room. Half the volunteers had been asked to take the $100, and had been told they could keep it if they persuaded the experimenter that they hadn’t. In this way the researchers generated videos of both sincere denials and people who were trying hard to deceive.

They then showed these videos to experimental participants who had to judge if the people in the videos were lying or telling the truth. As well as this measure of conscious lie detection, the participants also completed a task designed to measure their automatic feelings towards the people in the videos.

In experiment one this was a so-called Implicit Association Test, which works by comparing the ease with which participants associated the faces of the people in the videos with the words TRUTH or LIE. Experiment two used a priming test, in which seeing the faces of the people in the videos changed the speed at which participants then made judgements about words related to truth-telling and deception.
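
To make the logic of these reaction-time measures concrete, here is a minimal sketch in Python. The numbers and the scoring function are entirely made up for illustration – this is not the authors’ analysis code – but it captures the basic idea: a face is scored by how much faster it is paired with LIE words than with TRUTH words.

```python
# Illustrative only: a toy IAT-style score from made-up reaction times (ms).
# Not the authors' analysis; the function name and data are hypothetical.
import statistics

def iat_style_score(rts_face_with_truth, rts_face_with_lie):
    """Mean reaction-time difference between pairing a face with TRUTH words
    and pairing it with LIE words. A positive score means the face was easier
    (faster) to pair with LIE - the implicit signal the study relies on."""
    return statistics.mean(rts_face_with_truth) - statistics.mean(rts_face_with_lie)

# This imaginary observer pairs the face with LIE about 80 ms faster than with TRUTH.
print(iat_style_score([640, 655, 670], [560, 575, 590]))  # -> 80.0
```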

The results of the study showed that people were no better than chance in their explicit judgements of who was telling the truth and who was lying, but the measurements of their other behaviours showed significant differences. Specifically, for people who were actually lying, observers were slower to associate their faces with the word TRUTH, or quicker to associate them with the word LIE. The second experiment showed that after seeing someone who was actually telling the truth people made faster judgements about words related to truth-telling and slower judgements about words related to deception (and vice versa after a video of someone who was actually lying).

How plausible is this?
The result that people aren’t good at detecting lies is very well established. Even professionals, such as police officers, perform poorly when formally tested on their ability to discriminate lying from truth telling.

It’s also very plausible that the way in which you measure someone’s judgement can reveal different things. For example, people are in general notoriously bad at reasoning about risk when they are asked to give estimates verbally, but measurements of behaviour show that we are able to make very accurate estimates of risk in the right circumstances.

It also fits with other results in psychological research which show that overthinking certain judgements can reduce their accuracy.

Tom’s take
The researchers are trying to have it both ways. The surprise of the result rests on the fact that people don’t score well when asked to make a simple truth vs lie judgement, but their behavioural measures suggest people would be able to make this judgement if asked differently. Claiming the unconscious mind knows what the conscious mind doesn’t is going too far – it could be that the simple truth vs lie judgement isn’t sensitive enough, or is subject to some bias (participants being afraid of getting it wrong, for example).

Alternatively, it could be that the researchers’ measures of the unconscious are only sensitive to one aspect of the unconscious – and it happens to be an aspect that can distinguish lies from an honest report. How much can we infer about the unconscious mind as a whole from these behavioural measures?

When reports of this study say “trust your instincts” they ignore the fact that the participants in this study did have the opportunity to trust their instincts – they made a judgement of whether individuals were lying or not, presumably following the combination of all the instincts they had, including those that produced the unconscious measures the researchers tested. Despite this, they couldn’t guess correctly if someone was lying or not.

If the unconscious is anything it will be made up of all the automatic processes that run under the surface of our conscious minds. For any particular judgement – in this case detecting truth telling – some process may be accurate at above chance levels, but that doesn’t mean the unconscious mind as a whole knows who is lying or not.

It doesn’t even mean there is such a thing as the unconscious mind, just that there are aspects to what we think that aren’t reported by people if you ask them directly. We can’t say that people “knew” who was lying, when the evidence shows that they didn’t or couldn’t use this information to make correct judgements.

Read more
The original paper: “Some evidence for unconscious lie detection”

The data and stimuli for this experiment are freely available – a wonderful example of “open science.”

A short piece I wrote about how articulating your feelings can get in the way of realising them.


This article was originally published on The Conversation.
Read the original article.

What’s the evidence for the power of reason to change minds?

Last month I proposed an article for Contributoria, titled What’s the evidence on using rational argument to change people’s minds?. Unfortunately, I had such fun reading about the topic that I missed the end-of-month deadline and now need to get backers for my proposal again.

So, here’s something from my proposal, please consider backing it so I can put my research to good use:

Is it true that “you can’t tell anybody anything”? From pub arguments to ideology-driven party political disputes it can sometimes seem like people have their minds all made up, that there’s no point trying to persuade anybody of anything. Popular psychology books reinforce the idea that we’re emotional, irrational creatures (Dan Ariely’s “Predictably Irrational”, David McRaney’s “You Are Not So Smart”). This piece will be 3000 words on the evidence from psychological science about persuasion by rational argument.

All you need to do to back proposals, currently, is sign up for the site. You can see all current proposals here. Written articles are Creative Commons licensed.

Back the proposal: What’s the evidence on using rational argument to change people’s minds?

Full disclosure: I’ll be paid by Contributoria if the proposal is backed

Update: Backed! Thanks all! Watch this space for the finished article. I promise I’ll make the deadline this time.

What’s the evidence on using rational argument to change people’s minds?

Contributoria is an experiment in community funded, collaborative journalism. What that means is that you can propose an article you’d like to write, and back proposals by others that you’d like to see written. There’s an article I’d like to write: What’s the evidence on using rational argument to change people’s minds?. Here’s something from the proposal:

Is it true that “you can’t tell anybody anything”? From pub arguments to ideology-driven party political disputes it can sometimes seem like people have their minds all made up, that there’s no point trying to persuade anybody of anything. Popular psychology books reinforce the idea that we’re emotional, irrational creatures (Dan Ariely’s “Predictably Irrational”, David McRaney’s “You Are Not So Smart”). This piece will be 2000 words on the evidence from psychological science about persuasion by rational argument.

If the proposal is backed it will give me a chance to look at the evidence on whether political extremism is supported by an illusion of explanatory depth (and how that can be corrected), and on how we should treat all those social psychology priming experiments which suggest that our opinions can be pushed about by irrelevant factors such as the weight of a clipboard we’re holding.

All you need to do to back proposals, currently, is sign up for the site. You can see all current proposals here. Written articles are Creative Commons licensed.

Back the proposal: What’s the evidence on using rational argument to change people’s minds?

Full disclosure: I’ll be paid by Contributoria if the proposal is backed

Update: Backed! That was quick! Many thanks, mindhacks.com readers! I’d better get reading and writing now…

noob 2 l33t: now with graphs

Mike Dewar and I have just had a paper published in the journal Psychological Science. In it we present an analysis of what affects how fast people learn, using data from over 850,000 people who played an online game called Axon (designed by our friends at Preloaded). This is from the abstract:

In the present study, we analyzed data from a very large sample (N = 854,064) of players of an online game involving rapid perception, decision making, and motor responding. Use of game data allowed us to connect, for the first time, rich details of training history with measures of performance from participants engaged for a sustained amount of time in effortful practice. We showed that lawful relations exist between practice amount and subsequent performance, and between practice spacing and subsequent performance. Our methodology allowed an in situ confirmation of results long established in the experimental literature on skill acquisition. Additionally, we showed that greater initial variation in performance is linked to higher subsequent performance, a result we link to the exploration/exploitation trade-off from the computational framework of reinforcement learning.

The paper is behind a paywall for the next year, unfortunately, but you can find a pre-print, as well as all the raw data and analysis code (written in Python) in the github repo. I wrote something on my academic blog about the methods and why we wanted to make this an example of open science.
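
If you want a feel for what a “lawful relation” between practice amount and subsequent performance looks like, here is a minimal sketch of fitting the classic power law of practice. The data are invented and this is not the analysis code from the repo (which is far more involved); it is just the shape of the idea.

```python
# Toy illustration of the power law of practice: performance = a * practice**b.
# Invented data; see the github repo for the real analysis.
import numpy as np
from scipy.optimize import curve_fit

def power_law(practice, a, b):
    return a * practice ** b

plays = np.array([1, 2, 4, 8, 16, 32], dtype=float)                   # games played so far
scores = np.array([1200, 1450, 1700, 2000, 2300, 2700], dtype=float)  # made-up game scores

(a, b), _ = curve_fit(power_law, plays, scores, p0=(1000.0, 0.2))
print(f"score ≈ {a:.0f} * plays^{b:.2f}")  # improvement slows as practice accumulates
```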

Links: The paper: Tracing the Trajectory of Skill Learning With a Very Large Sample of Online Game Players
And the data & code.

Thanks to @phooky for suggesting an alternative title for the paper, which I’ve used to title this post.

Why Christmas rituals make tasty food

All of us carry out rituals in our daily lives, whether it is shaking hands or clinking glasses before we drink. At this time of year, the performance of customs and traditions is widespread – from sharing crackers, to pulling the wishbone on the turkey and lighting the Christmas pudding.

These rituals might seem like light-hearted traditions, but I’m going to try and persuade you that they are echoes of our evolutionary history, something which can tell us about how humans came to relate to each other before we had language. And the story starts by exploring how rituals can make our food much tastier.

In recent years, studies have suggested that performing small rituals can influence people’s enjoyment of what they eat. In one experiment, Kathleen Vohs from the University of Minnesota and colleagues explored how ritual affected people’s experience of eating a chocolate bar. Half of the people in the study were instructed to relax for a moment and then eat the chocolate bar as they normally would. The other half were given a simple ritual to perform, which involved breaking the chocolate bar in half while it was still inside its wrapper, and then unwrapping each half and eating it in turn.

Something about carefully following these instructions before eating the chocolate bar had a dramatic effect. People who had focused on the ritual said they enjoyed eating the chocolate more, rating the experience 15% higher than the control group. They also spent longer eating the chocolate, savouring the flavour for 50% longer than the control group. Perhaps most persuasively, they also said they would pay almost twice as much for such a chocolate.

This experiment shows that a small act can significantly increase the value we get from a simple food experience. Vohs and colleagues went on to test the next obvious question – how exactly do rituals work this magic? Repeating the experiment, they asked participants to describe and rate the act of eating the chocolate bar. Was it fun? Boring? Interesting? This seemed to be a critical variable – those participants who were made to perform the ritual rated the experience as more fun, less boring and more interesting. Statistical analysis showed that this was the reason they enjoyed the chocolate more, and were more willing to pay extra.

So, rituals appear to make people pay attention to what they are doing, allowing them to concentrate their minds on the positives of a simple pleasure. But could there be more to rituals? Given that they appear in many realms of life that have nothing to do with food – from religious services to presidential inaugurations – could their performance have deeper roots in our evolutionary history? Attempting to answer the question takes us beyond the research I’ve been discussing so far and into the complex and controversial debate about the evolution of human nature.

In his book, The Symbolic Species, Terrence Deacon claims that ritual played a special role in human evolution, in particular, at the transition point where we began to acquire the building blocks of language. Deacon’s argument is that the very first “symbols” we used to communicate, the things that became the roots of human language, can’t have been anything like the words we use so easily and thoughtlessly today. He argues that these first symbols would have been made up of extended, effortful and complex sequences of behaviours performed in a group – in other words, rituals. These symbols were needed because of the way early humans arranged their family groups and, in particular, shared the products of hunting. Early humans needed a way to tell each other who had what responsibilities and which privileges; who was part of the family, and who could share the food, for instance. These ideas are particularly hard to refer to by pointing. Rituals, says Deacon, were the evolutionary answer to the conundrum of connecting human groups and checking they had a shared understanding of how the group worked.

If you buy this evolutionary story – and plenty don’t – it gives you a way to understand why exactly our minds might have a weakness for ritual. A small ritual makes food more enjoyable, but why does it have that effect? Deacon’s answer is that our love of rituals evolved with our need to share food. Early humans who enjoyed rituals had more offspring. I speculate that an easy shortcut for evolution to take, in making us enjoy rituals, was to wire our minds so that rituals make the food itself more enjoyable.

So, for those sitting down with family this holiday, don’t skip the traditional rituals – sing the songs, pull the crackers, clink the glasses and listen to Uncle Vinnie repeat his funny anecdotes for the hundredth time. The rituals will help you enjoy the food more, and carry with them an echo of our long history as a species, and all the feasts the tribe shared before there even was Christmas.

This is my latest column for BBC Future. You can see the original here. Merry Christmas y’all!

How sleep makes your mind more creative

It’s a tried and tested technique used by writers and poets, but can psychology explain why the first moments after waking can be among our most imaginative?

It is 6.06am and I’m typing this in my pyjamas. I awoke at 6.04am, walked from the bedroom to the study, switched on my computer and got to work immediately. This is unusual behaviour for me. However, it’s a tried and tested technique for enhancing creativity, long used by writers, poets and others, including the inventor Benjamin Franklin. And psychology research appears to back this up, providing an explanation for why we might be at our most creative when our minds are still emerging from the realm of sleep.

The best evidence we have of our mental state when we’re asleep is that strange phenomenon called dreaming. Much remains unknown about dreams, but one thing that is certain is that they are weird. Also listening to other people’s dreams can be deadly boring. They go on and on about how they were on a train, but it wasn’t a train, it was a dinner party, and their brother was there, as well as a girl they haven’t spoken to since they were nine, and… yawn. To the dreamer this all seems very important and somehow connected. To the rest of us it sounds like nonsense, and tedious nonsense at that.

Yet these bizarre monologues do highlight an interesting aspect of the dream world: the creation of connections between things that didn’t seem connected before. When you think about it, this isn’t too unlike a description of what creative people do in their work – connecting ideas and concepts that nobody thought to connect before in a way that appears to make sense.

No wonder some people value the immediate, post-sleep, dreamlike mental state – known as sleep inertia or the hypnopompic state – so highly. It allows them to infuse their waking, directed thoughts with a dusting of dreamworld magic. Later in the day, waking consciousness assumes complete control, which is a good thing as it allows us to go about our day evaluating situations, making plans, pursuing goals and dealing rationally with the world. Life would be challenging indeed if we were constantly hallucinating, believing the impossible or losing sense of what we were doing like we do when we’re dreaming. But perhaps the rational grip of daytime consciousness can at times be too strong, especially if your work could benefit from the feckless, distractible, inconsistent, manic, but sometimes inspired nature of its rebellious sleepy twin.

Scientific methods – by necessity methodical and precise – might not seem the best of tools for investigating sleep consciousness. Yet in 2007 Matthew Walker, now of the University of California at Berkeley, and colleagues carried out a study that helps illustrate the power of sleep to foster unusual connections, or “remote associates” as psychologists call them.

Under the inference

Subjects were presented with pairs drawn from six abstract patterns: A, B, C, D, E and F. Through trial and error they were taught the basics of a hierarchy, which dictated they should select A over B, B over C, C over D, D over E, and E over F. The researchers called these the “premise pairs”. While participants learnt these during their training period, they were not explicitly taught that because A was better than B, and B better than C, they should infer A to be better than C, for example. These hidden, implied relationships – described by Walker as “inference pairs” – were designed to mimic the remote associates that drive creativity.

Participants who were tested 20 minutes after training got 90% of premise pairs but only around 50% of inference pairs right – the same fraction you or I would get if we went into the task without any training and just guessed.

Those tested 12 hours after training again got 90% for the premise pairs, but 75% of inference pairs, showing the extra time had allowed the nature of the connections and hidden order to become clearer in their minds.

But the real success of the experiment was a contrast in the performances of one group trained in the morning and then re-tested 12 hours later in the evening, and another group trained in the evening and brought back for testing the following morning after having slept. Both did equally well in tests of the premise pairs. The researchers defined inferences that required understanding of two premise relationships as easy, and those that required three or more as hard. So, for example, A being better than C was labelled as easy because it required participants to remember that A was better than B and B was better than C. However, understanding that A was better than D meant recalling A was better than B, B better than C, and C better than D, and so was defined as hard.
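
To make the structure of the task concrete, here is a small sketch – an illustrative reconstruction of the task logic, not the study’s actual materials – showing how the inference pairs, and their easy/hard labels, follow from the trained premise pairs:

```python
# Derive the hidden hierarchy's inference pairs from the trained premise pairs,
# labelling each by how many premise links it spans (2 = "easy", 3+ = "hard").
# Illustrative reconstruction only, not the study's actual materials.
order = ["A", "B", "C", "D", "E", "F"]       # A is best, F is worst
premise_pairs = list(zip(order, order[1:]))  # trained directly: A>B, B>C, C>D, D>E, E>F

inference_pairs = []
for i in range(len(order)):
    for j in range(i + 2, len(order)):       # skip adjacent pairs (those are premises)
        links = j - i                        # number of premise relations to chain together
        label = "easy" if links == 2 else "hard"
        inference_pairs.append((order[i], order[j], label))

print(premise_pairs)
print(inference_pairs)  # e.g. ('A', 'C', 'easy') needs 2 links; ('A', 'D', 'hard') needs 3
```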

When it came to the harder inferences, people who had a night’s sleep between training and testing got a startling 93% correct, whereas those who’d been busy all day only got 70%.

The experiment illustrates that combining what we know to generate new insights requires time, something that many might have guessed. Perhaps more revealingly it also shows the power of sleep in building remote associations. Making the links between pieces of information that our daytime rational minds see as separate seems to be easiest when we’re offline, drifting through the dreamworld.

It is this function of sleep that might also explain why those first moments upon waking can be among our most creative. Dreams may seem weird, but just because they don’t make sense to your rational waking consciousness doesn’t make them purposeless. I was at my keyboard two minutes after waking up in an effort to harness some dreamworld creativity and help me write this column – memories of dreams involving trying to rob a bank with my old chemistry teacher, and playing tennis with a racket made of spaghetti, still tinging the edges of my consciousness.

This is my BBC Future column from last week. The original is here. I had the idea for the column while drinking coffee with Helen Mort. Caffeine consumption being, of course, another favourite way to encourage creativity!

Are men better wired to read maps or is it a tired cliché?

By Tom Stafford

The headlines

The Guardian: Male and female brains wired differently, scans reveal

The Atlantic: Male and female brains really are built differently

The Independent: The hardwired difference between male and female brains could explain why men are ‘better at map reading’

The Story

An analysis of 949 brain scans shows significant sex differences in the connections between different brain areas.

What they actually did

Researchers from Philadelphia took data from 949 brain scans and divided them into three age groups and by gender. They then analysed the connections between 95 separate divisions of each brain using a technique called Diffusion Tensor Imaging.

With this data they constructed “connectome” maps, which chart the strength of the connections between those brain regions as a network.

Statistical testing of this showed significant differences between these networks according to sex – the average men’s network was more connected within each side of the brain, and the average women’s network was better connected between the two hemispheres. These differences emerged most strongly after the age of 13 (so weren’t as striking for the youngest group they tested).

How plausible is this?

Everybody knows that men and women have some biological differences – different sizes of brains and different hormones. It wouldn’t be too surprising if there were some neurological differences too. The thing is, we also know that we treat men and women differently from the moment they’re born, in almost all areas of life. Brains respond to the demands we make of them, and men and women have different demands placed on them.

Although a study of brain scans has an air of biological purity, it doesn’t escape from the reality that the people having their brains scanned are the product of social and cultural forces as well as biological ones.

The research itself is a technical tour-de-force which really needs a specialist to properly critique. I am not that specialist. But a few things seem odd about it: they report finding significant differences between the sexes, but don’t show the statistics that allow the reader to evaluate the size of any sex difference against other factors such as age or individual variability. This matters because you can have a statistically significant difference which isn’t practically meaningful. Relative size of effect might be very important.

For example, a significant sex difference could be tiny compared to the differences between people of different ages, or compared to the normal differences between individuals. The question of age differences is also relevant because we know the brain continues to develop after the oldest age tested in the study (22 years).
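
To see how a difference can be statistically significant and still small in this sense, here is a toy illustration with entirely invented numbers (nothing to do with the scan data):

```python
# Invented summary statistics, purely to illustrate significance vs effect size.
from scipy import stats

n = 475                        # people per group (roughly half of 949)
mean_a, mean_b = 100.0, 102.0  # a 2-point difference on some made-up connectivity index
sd = 15.0                      # spread between individuals within each group

t, p = stats.ttest_ind_from_stats(mean_a, sd, n, mean_b, sd, n)
cohens_d = (mean_b - mean_a) / sd
print(f"p = {p:.3f}, Cohen's d = {cohens_d:.2f}")
# p comes out around 0.04 - "significant" - yet d of about 0.13 means the average
# group difference is tiny compared with the variation between individuals.
```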

Any sex difference could plausibly be due to differences in the time-course of development between men and women. But, in general, it isn’t the technical details that I am equipped to critique. It seems fair to accept what the researchers have found, so let’s turn instead to how it is being interpreted.

Tom’s take

One of the authors of this research, as reported in The Guardian, said “the greatest surprise was how much the findings supported old stereotypes”. That, for me, should be a warning sign. Time and time again we find, as we see here, that highly technical and advanced neuroscience is used to support tired old generalisations.

Here, the research assumes the difference it seeks to prove. The data is analysed for sex differences, with other categories receiving less or no attention (age, education, training and so on). From this biased lens on the data, a story about fundamental differences is told. Part of our psychological make-up seems to be a desire to assign essences to things – and differences between genders are a prime example of something people want to be true.

Even if we assume this research is reliable it doesn’t tell us about actual psychological differences between men and women. The brain scan doesn’t tell us about behaviour (and, indeed, most of us manage to behave in very similar ways despite large differences in brain structure and connectivity). Bizarrely, the authors also seem to want to use their analysis to support a myth about left brain vs right brain thinking. The “rational” left brain vs the “intuitive” right brain is a distinction that even Michael Gazzaniga, one of the founding fathers of “split brain” studies, doesn’t believe any more.

Perhaps more importantly, analysis of how men and women are doesn’t tell you how men and women could be if brought up differently.

When the headlines talk about “hardwiring” and “proof that men and women are different” we can see the role this research is playing in cementing an assumption that people have already made. In fact, the data is silent on how men and women’s brains would be connected if society put different expectations on them.

Given the surprising ways in which brains do adapt to different experiences, it is completely plausible that even these significant “biological” differences could be due to cultural factors.

And even reliable differences between men and women can be reversed by psychological manipulations, which suggests that any underlying biological differences aren’t as fundamental as researchers like to claim.

As Shakespeare has Ophelia say in Hamlet: “Lord, we know what we are, but know not what we may be.”

Read more

The original paper: Sex differences in the structural connectome of the human brain

Sophie Scott of UCL has some technical queries about the research – one possibility is that movements made during the scanning could have been different between the sexes and generated the apparent differences in the resulting connectome networks.

Another large study, cited by this current paper, found no differences according to sex.

Cordelia Fine’s book, Delusions of Gender: How Our Minds, Society, and Neurosexism Create Difference, provides essential context for looking at this kind of research.

UPDATE: Cordelia Fine provides her own critique of the paper

Tom Stafford does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.


This article was originally published at The Conversation.
Read the original article.

Why the stupid think they’re smart

Psychologists have shown humans are poor judges of their own abilities, from sense of humour to grammar. Those who perform worst are the worst judges of all.

You’re pretty smart right? Clever, and funny too. Of course you are, just like me. But wouldn’t it be terrible if we were mistaken? Psychologists have shown that we are more likely to be blind to our own failings than perhaps we realise. This could explain why some incompetent people are so annoying, and also inject a healthy dose of humility into our own sense of self-regard.

In 1999, Justin Kruger and David Dunning, from Cornell University, New York, tested whether people who lack the skills or abilities for something are also more likely to lack awareness of their lack of ability. At the start of their research paper they cite a Pittsburgh bank robber called McArthur Wheeler as an example, who was arrested in 1995 shortly after robbing two banks in broad daylight without wearing a mask or any other kind of disguise. When police showed him the security camera footage, he protested “But I wore the juice”. The hapless criminal believed that if you rubbed your face with lemon juice you would be invisible to security cameras.

Kruger and Dunning were interested in testing another kind of laughing matter. They asked professional comedians to rate 30 jokes for funniness. Then, 65 undergraduates were asked to rate the jokes too, and were then ranked according to how well their judgements matched those of the professionals. They were also asked how well they thought they had done compared to the average person.

As you might expect, most people thought their ability to tell what was funny was above average. The results were, however, most interesting when split according to how well participants performed. Those slightly above average in their ability to rate jokes were highly accurate in their self-assessment, while those who actually did the best tended to think they were only slightly above average. Participants who were least able to judge what was funny (at least according to the professional comics) were also least able to accurately assess their own ability.

This finding was not a quirk of trying to measure subjective sense of humour. The researchers repeated the experiment, only this time with tests of logical reasoning and grammar. These disciplines have defined answers, and in each case they found the same pattern: those people who performed the worst were also the worst in estimating their own aptitude. In all three studies, those whose performance put them in the lowest quarter massively overestimated their own abilities by rating themselves as above average.

It didn’t even help the poor performers to be given a benchmark. In a later study, the most incompetent participants still failed to realise they were bottom of the pack even when given feedback on the performance of others.

Kruger and Dunning’s interpretation is that accurately assessing skill level relies on some of the same core abilities as actually performing that skill, so the least competent suffer a double deficit. Not only are they incompetent, but they lack the mental tools to judge their own incompetence.

In a key final test, Kruger and Dunning trained a group of poor performers in logical reasoning tasks. This improved participants’ self-assessments, suggesting that ability levels really did influence self-awareness.

Other research has shown that this “unskilled and unaware of it” effect holds in real-life situations, not just in abstract laboratory tests. For example, hunters who know the least about firearms also have the most inaccurate view of their firearm knowledge, and doctors with the worst patient-interviewing skills are the least likely to recognise their inadequacies.

What has become known as the Dunning-Kruger effect is an example of what psychologists call metacognition – thinking about thinking. It’s also something that should give us all pause for thought. The effect might just explain the apparently baffling self belief of some of your friends and colleagues. But before you start getting too smug, just remember one thing. As unlikely as you might think it is, you too could be walking around blissfully ignorant of your ignorance.

This is my BBC Future column from last week. The original is here.

Do violent video games make teens ‘eat and cheat’ more?

By Tom Stafford, University of Sheffield

The Headlines

Business Standard: Violent video games make teens eat more, cheat more

Scienceblog.com: Teens ‘Eat more, cheat more’ after playing violent video games

The Times of India: Violent video games make teens cheat more

The story

Playing the violent video game Grand Theft Auto made teenagers more aggressive, more dishonest and lowered their self control.

What they actually did

172 Italian high school students (aged 13-19), about half male and half female, took part in an experiment in which they first played a video game for 35 minutes. Half played a non-violent pinball or golf game, and half played one of the ultra-violent Grand Theft Auto games.

During the game they had the opportunity to eat M&M’s freely from a bowl (the amount they scoffed provided a measure of self-control), and after the game they had the opportunity to take a quiz to earn raffle tickets (and the opportunity to cheat on the quiz, which provided a measure of dishonesty). They also played a game during which they could deliver unpleasant noises to a fellow player as punishments (which was used as a measure of aggression).

Analysis of the results showed that those who played the violent video game had lower scores when it came to the self-control measure, cheated more and were more aggressive. What’s more, these effects were most pronounced for those who had high scores on a scale of “moral disengagement” – which measures how loose your moral thinking is. In other words, if you don’t think too hard about right and wrong, you score highly.

How plausible is this?

This is a well designed study, which uses random allocation to the two groups to try to properly assess causation (does the violent video game cause immoral behaviour?).

The choice of control condition was reasonable (the other video games were tested and found to be enjoyed just as much by the participants), and the measures are all reasonable proxies for the things we are interested in. Obviously you can’t tell if weakened self-control for eating chocolate will mean weakened self-control for more important behaviour, but it’s a nice specific measure which is practical in an experiment and which just might connect to the wider concept.

The number of participants is also large enough that we can give the researchers credit for putting in the effort. Getting about 85 people in each group should provide reasonable statistical power, which makes it more likely that any effects they report are reliable.
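
To put that in rough numbers, here is a back-of-envelope power calculation of my own (not anything from the paper), using statsmodels:

```python
# Rough power check: with ~85 participants per group, what is the smallest
# standardised effect the design can detect with 80% power? (My own sketch,
# not the authors' analysis.)
from statsmodels.stats.power import TTestIndPower

detectable_d = TTestIndPower().solve_power(nobs1=85, alpha=0.05, power=0.8, ratio=1.0)
print(f"smallest detectable effect: d ≈ {detectable_d:.2f}")
# Roughly d of 0.43, a medium-sized effect - smaller real effects could easily be missed.
```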

As an experimental psychologist, there’s lots for me to like about this study. The only obvious potential problem I can see is that of demand effects – subtle cues that can make participants aware of what the experimenter expects to find or how they should behave. The participants were told they were in a study which looked at the effects of video games, so it isn’t impossible that some element of their behaviour was playing up to what they reasonably guessed the researchers were looking for, and it doesn’t look like the researchers checked whether this might be the case.

Tom’s take

You can’t leap to conclusions from a single study, of course – even a well designed one. We should bear in mind the history of moral panics around new technology and media. Today we’re concerned with violent video games, 50 years ago it was comic books and jazz. At least jazz is no longer corrupting young people.

Is our worry about violent video games just another page in the history of adults worrying about what young people are up to? That’s certainly a factor, but unlike jazz, it does seem psychologically plausible that a game where you enjoy reckless killing and larceny might encourage players to be self-indulgent and nasty.

Reviews suggest violent media may be a risk factor for violent behaviour, just like cigarette smoke is a risk factor for cancer. Most people who play video games won’t commit violent acts, just like most people who passive smoke won’t get cancer.

The problem is that other research reviews into the impact of violent entertainment on our behaviour suggest the evidence for a negative effect is weak and contradictory.

Video games are a specific example of the general question of whether and how media affect our behaviour. Obviously, we are more than complete zombies, helpless to resist every suggestion or example, but we’re also less than completely independent creatures, immune to the influence of other people and all forms of entertainment. Where the balance lies between these extremes is controversial.

For now, I’m going to keep an open mind, but as a personal choice I’m probably not going to get the kids GTA for Christmas.

Read more

The original paper: Interactive Effect of Moral Disengagement and Violent Video Games on Self-Control, Cheating, and Aggression

@PeteEtchells provides a good summary of the scientific (lack of) consensus: What is the link between violent video games and aggression?

Commentary by one researcher on the problems in the field of video game research: The Challenges of Accurate Reporting on Video Game Research

And a contrary research report: A decade long study of over 11,000 children finds no negative impact of video games

Tom Stafford does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.


This article was originally published at The Conversation.
Read the original article, or other columns in the series

How muggers size up your walk

The way people move can influence the likelihood of an attack by a stranger. The good news, though, is that altering this can reduce the chances of being targeted.

How you move gives a lot away. Maybe too much, if the wrong person is watching. We think, for instance, that the way people walk can influence the likelihood of an attack by a stranger. But we also think that their walking style can be altered to reduce the chances of being targeted.

A small number of criminals commit most of the crimes, and the crimes they commit are spread unevenly over the population: some unfortunate individuals seem to be picked out repeatedly by those intent on violent assault. Back in the 1980s, two psychologists from New York, Betty Grayson and Morris Stein, set out to find out what criminals look for in potential victims. They filmed short clips of members of the public walking along New York’s streets, and then took those clips to a large East Coast prison. They showed the tapes to 53 violent inmates with convictions for crimes against strangers, ranging from assault to murder, and asked them how easy each person would be to attack.

The prisoners made very different judgements about these notional victims. Some were consistently rated as easier to attack, as an “easy rip-off”. There were some expected differences, in that women were rated as easier to attack than men, on average, and older people as easier targets than the young. But even among those you’d expect to be least easy to assault, the subgroup of young men, there were some individuals who over half the prisoners rated at the top end of the “ease of assault” scale (a 1, 2 or 3, on the 10 point scale).

The researchers then asked professional dancers to analyse the clips using a system called Laban movement analysis – a system used by dancers, actors and others to describe and record human movement in detail. They rated the movements of people identified as victims as subtly less coordinated than those of non-victims.

Although Professors Grayson and Stein identified movement as the critical variable in criminals’ predatory decisions, their study had the obvious flaw that their films contained lots of other potentially relevant information: the clothes the people wore, for example, or the way they held their heads. Two decades later, a research group led by Lucy Johnston of the University of Canterbury, in New Zealand, performed a more robust test of the idea.

The group used a technique called the point light walker. This is a video recording of a person made by attaching lights or reflective markers to their joints while they wear a black body suit. When played back you can see pure movement shown in the way their joints move, without being able to see any of their features or even the limbs that connect their joints.

Research with point light walkers has shown that we can read characteristics from joint motion, such as gender or mood. This makes sense, if you think for a moment of times you’ve recognised a person from a distance, long before you were able to make out their face. Using this technique, the researchers showed that even when all other information was removed, some individuals still get picked out as more likely to be victims of assault than others, meaning these judgements must be based on how they move.

Walk this way

But the most impressive part of Johnston’s investigations came next, when she asked whether it was possible to change the way we walk so as to appear less vulnerable. A first group of volunteers were filmed walking before and after doing a short self defence course. Using the point-light technique, their walking styles were rated by volunteers (not prisoners) for vulnerability. Perhaps surprisingly, the self-defence training didn’t affect the walkers’ ratings.

In a second experiment, recruits were given training in how to walk, specifically focusing on the aspects which the researchers knew affected how vulnerable they appeared: factors affecting the synchrony and energy of their movement. This led to a significant drop in all the recruits’ vulnerability ratings, which was still in place when they were re-tested a month later.

There is a school of thought that the brain only exists to control movement. So perhaps we shouldn’t be surprised that how we move can give a lot away. It’s also not surprising that other people are able to read our movements, whether it is in judging whether we will win a music competition, or whether we are bluffing at poker. You see how someone moves before you can see their expression, hear what they are saying or smell them. Movements are the first signs of others’ thoughts, so we’ve evolved to be good (and quick) at reading them.

The point light walker research is a great example of a research journey that goes from a statistical observation, through street-level investigations and the use of complex lab techniques, and then applies the hard-won knowledge for good: showing how the vulnerable can take steps to reduce their appearance of vulnerability.

My BBC Future column from Tuesday. The original is here. Thanks to Lucy Johnston for answering some of my queries. Sadly, and surprisingly to me, she’s no longer pursuing this line of research.

Does studying economics make you more selfish?

When economics students learn about what makes fellow humans tick it affects the way they treat others. Not necessarily in a good way, as Tom Stafford explains.

Studying human behaviour can be like a dog trying to catch its own tail. As we learn more about ourselves, our new beliefs change how we behave. Research on economics students showed this in action: textbooks describing facts and theories about human behaviour can affect the people studying them.

Economic models are often based on an imaginary character called the rational actor, who, with no messy and complex inner world, relentlessly pursues a set of desires ranked according to the costs and benefits. Rational actors help create simple models of economies and societies. According to rational choice theory, some of the predictions governing these hypothetical worlds are common sense: people should prefer more to less, firms should only do things that make a profit and, if the price is right, you should be prepared to give up anything you own.

Another tool used to help us understand our motivations and actions is game theory, which examines how you make choices when their outcomes are affected by the choices of others. To determine which of a number of options to go for, you need a theory about what the other person will do (and your theory needs to encompass the other person’s theory about what you will do, and so on). Rational actor theory says other players in the game all want the best outcome for themselves, and that they will assume the same about you.

The most famous game in game theory is the “prisoner’s dilemma”, in which you are one of a pair of criminals arrested and held in separate cells. The police make you this offer: you can inform on your partner, in which case you either get off scot free (if your partner keeps quiet), or you both get a few years in prison (if he informs on you too). Alternatively you can keep quiet, in which case you either get a few years (if your partner also keeps quiet), or you get a long sentence (if he informs on you, leading to him getting off scot free). Your partner, of course, faces exactly the same choice.

If you’re a rational actor, it’s an easy decision. You should inform on your partner in crime because if he keeps quiet, you go free, and if he informs on you, both of you go to prison, but the sentence will be either the same length or shorter than if you keep quiet.
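
A tiny sketch of the payoff structure makes the logic explicit. The sentence lengths below are illustrative textbook numbers, not taken from the column or from any particular study; they are chosen so that informing is never worse for you:

```python
# Prisoner's dilemma payoffs as years in prison (lower is better).
# Illustrative numbers only.
YEARS = {
    # (your choice, partner's choice): your sentence
    ("quiet",  "quiet"):  2,
    ("quiet",  "inform"): 10,
    ("inform", "quiet"):  0,
    ("inform", "inform"): 6,
}

for partner in ("quiet", "inform"):
    quiet_cost = YEARS[("quiet", partner)]
    inform_cost = YEARS[("inform", partner)]
    best = "inform" if inform_cost <= quiet_cost else "quiet"
    print(f"if your partner stays {partner}: quiet -> {quiet_cost} years, "
          f"inform -> {inform_cost} years, so the rational actor chooses to {best}")
# Whatever the partner does, informing never earns you a longer sentence -
# which is exactly why defection is the "rational" choice.
```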

Weirdly, and thankfully, this isn’t what happens if you ask real people to play the prisoner’s dilemma. Around the world, in most societies, most people maintain the criminals’ pact of silence. The exceptions who opt to act solely in their own interests are known in economics as “free riders” – individuals who take benefits without paying costs.

Self(ish)-selecting group

The prisoner’s dilemma is a theoretical tool, but there are plenty of parallel choices – and free riders – in the real world. People who are always late for appointments with others don’t have to hurry or wait for others. Some use roads and hospitals without paying their taxes. There are lots of interesting reasons why most of us turn up on time and don’t avoid paying taxes, even though these might be the selfish “rational” choices according to most economic models.

Crucially, rational actor theory appears more useful for predicting the actions of certain groups of people. One group who have been found to free ride more than others in repeated studies is people who have studied economics. In a study published in 1993, Robert Frank and colleagues from Cornell University, in Ithaca, New York State, tested this idea with a version of the prisoner’s dilemma game. Economics students “informed on” other players 60% of the time, while those studying other subjects did so 39% of the time. Men have previously been found to be more self-interested in such tests, and more men study economics than women. However even after controlling for this sex difference, Frank found economics students were 17% more likely to take the selfish route when playing the prisoner’s dilemma.

In good news for educators everywhere, the team found that the longer students had been at university, the higher their rates of cooperation. In other words, higher education (or simple growing up), seemed to make people more likely to put their faith in human co-operation. The economists again proved to be the exception. For them extra years of study did nothing to undermine their selfish rationality.

Frank’s group then went on to carry out surveys on whether students would return money they had found or report being undercharged, both at the start and end of their courses. Economics students were more likely to see themselves and others as more self-interested following their studies than a control group studying astronomy. This was especially true among those studying under a tutor who taught game theory and focused on notions of survival imperatives militating against co-operation.

Subsequent work has questioned these findings, suggesting that selfish people are just more likely to study economics, and that Frank’s surveys and games tell us little about real-world moral behaviour. It is true that what individuals do in the highly artificial situation of being presented with the prisoner’s dilemma doesn’t necessarily tell us how they will behave in more complex real-world situations.

In related work, Eric Schwitzgebel has shown that students and teachers of ethical philosophy don’t seem to behave more ethically when their behaviour is assessed using a range of real-world variables. Perhaps, says Schwitzgebel, we shouldn’t be surprised that economics students who have been taught about the prisoner’s dilemma, act in line with what they’ve been taught when tested in a classroom. Again, this is a long way from showing any influence on real world behaviour, some argue.

The lessons of what people do in tests and games are limited because of the additional complexities involved in real-world moral choices with real and important consequences. Yet I hesitate to dismiss the results of these experiments. We shouldn’t leap to conclusions based on the few simple experiments that have been done, but if we tell students that it makes sense to see the world through the eyes of the selfish rational actor, my suspicion is that they are more likely to do so.

Multiple factors influence our behaviour, of which formal education is just one. Economics and economic opinions are also prominent throughout the news media, for instance. But what the experiments above demonstrate, in one small way at least, is that what we are taught about human behaviour can alter it.

This is my column from BBC Future last week. You can see the original here. Thanks to Eric for some references and comments on this topic.

The effect of diminished belief in free will

Studies have shown that people who believe things happen randomly, rather than through their own choices, often behave much worse than those who believe the opposite.

Are you reading this because you chose to? Or are you doing so as a result of forces beyond your control?

After thousands of years of philosophy, theology, argument and meditation on the riddle of free will, I’m not about to solve it for you in this column (sorry). But what I can do is tell you about some thought-provoking experiments by psychologists, which suggest that, regardless of whether we have free will or not, whether we believe we do can have a profound impact on how we behave.

The issue is simple: we all make choices, but could those choices be made otherwise? From a religious perspective it might seem as if a divine being knows all, including knowing in advance what you will choose (so your choices could not be otherwise). Or we can take a physics-based perspective. Everything in the universe has physical causes, and as you are part of the universe, your choices must be caused (so your choices could not be otherwise). In either case, our experience of choosing collides with our faith in a world which makes sense because things have causes.

Consider for a moment how you would research whether a belief in free will affects our behaviour. There’s no point comparing the behaviour of people with different fixed philosophical perspectives. You might find that determinists, who believe free will is an illusion and that we are all cogs in a godless universe, behave worse than those who believe we are free to make choices. But you wouldn’t know whether this was simply because people who like to cheat and lie become determinists (the “Yes, I lied, but I couldn’t help it” excuse).

What we really need is a way of changing people’s beliefs about free will, so that we can track the effects of doing so on their behaviour. Fortunately, in recent years researchers have developed a standard method of doing this. It involves asking subjects to read sections from Francis Crick’s book The Astonishing Hypothesis. Crick was one of the co-discoverers of DNA’s double-helix structure, for which he was awarded the Nobel prize. Later in his career he left molecular biology and devoted himself to neuroscience. The hypothesis in question is his belief that our mental life is entirely generated by the physical stuff of the brain. One passage states that neuroscience has killed the idea of free will, which, Crick writes, most rational people, including most scientists, now accept is an illusion.

Psychologists have used this section of the book, or sentences taken from it or inspired by it, to induce feelings of determinism in experimental subjects. A typical study asks people to read and think about a series of sentences such as “Science has demonstrated that free will is an illusion”, or “Like everything else in the universe, all human actions follow from prior events and ultimately can be understood in terms of the movement of molecules”.

The effects on study participants are generally compared with those of other people asked to read sentences that assert the existence of free will, such as “I have feelings of regret when I make bad decisions because I know that ultimately I am responsible for my actions”, or texts on topics unrelated to free will.

And the results are striking. One study reported that participants who had their belief in free will diminished were more likely to cheat in a maths test. In another, US psychologists reported that people who read Crick’s thoughts on free will said they were less likely to help others.

Bad taste

A follow-up to this study used an ingenious method to test this via aggression to strangers. Participants were told a cover story about helping the experimenter prepare food for a taste test to be taken by a stranger. They were given the results of a supposed food preference questionnaire which indicated that the stranger liked most foods but hated hot food. Participants were also given a jar of hot sauce. The critical measure was how much of the sauce they put into the taste-test food. Putting in less sauce, when they knew that the taster didn’t like hot food, meant they scored more highly for what psychologists call “prosociality”, or what everyone else calls being nice.

You’ve guessed it: Participants who had been reading about how they didn’t have any free will chose to give more hot sauce to the poor fictional taster – twice as much, in fact, as those who read sentences supporting the idea of freedom of choice and responsibility.

In a recent study carried out at the University of Padova, Italy, researchers recorded the brain activity of participants who had been told to press a button whenever they wanted. This showed that people whose belief in free will had taken a battering thanks to reading Crick’s views showed a weaker signal in areas of the brain involved in preparing to move. In another study by the same team, volunteers carried out a series of on-screen tasks designed to test their reaction times, self control and judgement. Those told free will didn’t exist were slower, and more likely to go for easier and more automatic courses of action.

This is a young research area. We still need to check that individual results hold up, but taken all together these studies show that our belief in free will isn’t just a philosophical abstraction. We are less likely to behave ethically and kindly if our belief in free will is diminished.

This puts an extra burden of responsibility on philosophers, scientists, pundits and journalists who use evidence from psychology or neuroscience experiments to argue that free will is an illusion. We need to be careful about what stories we tell, given what we know about the likely consequences.

Fortunately, the evidence shows that most people have a sense of their individual freedom and responsibility that is resistant to being overturned by neuroscience. Those sentences from Crick’s book claim that most scientists believe free will to be an illusion. My guess is that most scientists would want to define what exactly is meant by free will, and to examine the various versions of free will on offer, before they agree whether it is an illusion or not.

If the last few thousand years have taught us anything, it is that the debate about free will may rumble on and on. But whether the outcome is inevitable or not, these results show that how we think about the way we think could have a profound effect on us, and on others.

This was published on BBC Future last week. See the original, ‘Does non-belief in free will make us better or worse?’ (it is identical apart from the title, and there’s a nice picture on that site). If neuroscience and the free will debate float your boat, you can check out this video of the Sheffield Salon on the topic “‘My Brain Made Me Do It’ – have neuroscience and evolutionary psychology put free will on the slab?”. I’m the one on the left.

It is mind control but not as we know it

The Headlines

The Independent: First ever human brain-to-brain interface successfully tested

BBC News: Are we close to making human ‘mind control’ a reality?

Visual News: Mind Control is Now a Reality: UW Researcher Controls Friend Via an Internet Connection

The story

Using the internet, one researcher remotely controls the finger of another, using it to play a simple video game.

What they actually did

University of Washington researcher Rajesh Rao watched a very simple video game, which involved firing a cannon at incoming rockets (and avoiding firing at incoming supply planes). Electrical signals from his scalp were recorded using a technology called EEG and processed by a computer. The resulting signal was sent over the internet, and across campus, to a lab where another researcher, Andrea Stocco, watched the same video game with his finger over the “fire” button.

Unlike Rao, Stocco wore a magnetic coil over his head. This is designed to evoke electrical activity, not record it. When Rao imagined pressing the fire button, the coil activated the area of Stocco’s brain that makes his finger twitch, thus firing the cannon and completing a startling demonstration of “brain to brain” mind control over the internet.

You can read more details in the University of Washington press release or on the “brain2brain” website where this work is published.

How plausible is this?

EEG recording is a very well established technology. It takes advantage of the fact that the cells of our brain operate by passing around electrochemical signals, which can be read from the surface of the scalp with simple electrodes. Unfortunately, the intricate details of brain activity tend to get muffled by the skull and scalp, and each electrode only records from one point in space, so the technology’s strength is more in telling us that brain activity has changed than in saying how, or exactly where, it has changed.

The magnetic coil which made the receiver’s finger twitch is also well established, and known in the business as Transcranial Magnetic Stimulation (TMS). An alternating magnetic field is used to alter brain activity underneath the coil. I’ve written about it here before.

The effect is relatively crude. You can’t make someone play the violin, for example, but activating the motor cortex in the right region can generate a finger twitch. So, in summary, the story is very plausible. The researchers are well respected in this area and open about the limitations of their research. Although the experiment wasn’t published in a peer-reviewed journal, we have every reason to believe what we’re being told here.

Tom’s take

This is a wonderful piece of “proof of concept” research, which is completely plausible given existing technology, yet hints at the possibilities that might soon become available.

The real magic is in the signal processing. The dizzying complexities of brain activity are compressed into an EEG signal which is still highly complex, and pretty opaque as to what it means – hardly mind reading.

The research team managed to find a reliable change in the EEG signal which reflected when Rao was thinking about pressing the fire button. That signal – just a simple “go”, as far as I can tell – was sent over the internet, where it triggered the TMS, which is either on or off.
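To get a feel for how compressed that signal is, here is a minimal sketch, in Python, of that kind of one-bit pipeline. It is not the researchers’ actual code: the frequency band, the threshold and the simulated data are all my own illustrative assumptions, and a real system would calibrate them for each person.

import numpy as np

FS = 256            # assumed EEG sampling rate, in Hz
MU_BAND = (8, 13)   # assumed frequency band monitored over motor cortex
THRESHOLD = 0.5     # assumed cut-off; would be calibrated per person in practice

def band_power(epoch, fs, band):
    """Average spectral power of one EEG epoch within a frequency band."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(epoch)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

def detect_go(epoch):
    """Collapse a whole stretch of EEG into a single bit: fire, or don't.
    Imagining a movement suppresses power in this band, so low power = 'go'."""
    return band_power(epoch, FS, MU_BAND) < THRESHOLD

def receiver(go_bit):
    """Receiver side: all the bit can do is trigger one fixed, pre-programmed
    TMS pulse. It carries no information beyond 'now'."""
    if go_bit:
        print("TMS pulse -> motor cortex -> finger twitch -> cannon fires")

# Demo with one second of simulated EEG; in the real set-up this single bit
# is what travels over the internet between the two labs.
receiver(detect_go(np.random.randn(FS)))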

In information terms, this is close to as simple as it gets. Even producing a signal which said what to fire at, as well as when to fire, would be a step change in complexity and wasn’t attempted by the group. TMS is a pretty crude device. Even if the signal the device received was more complex, it wouldn’t be able to make you perform complex, fluid movements, such as those required to track a moving object, tie your shoelaces or pluck a guitar. But this is a real example of brain to brain communication.

As the field develops the thing to watch is not whether this kind of communication can be done (we would have predicted it could be), but exactly how much information is contained in the communication.
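One rough way to keep score is to count the bits. The option counts below are purely illustrative, not figures from the study.

from math import log2

# Bits per decision when choosing among N equally likely options.
for options in (2, 8, 26):
    print(f"{options:>2} options -> {log2(options):.1f} bits per decision")

A go/no-go signal like the one above tops out at one bit per decision; telling the receiver which of eight targets to fire at would need three bits, and spelling out letters of the alphabet roughly 4.7 bits per character.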

A similar moral holds for reports that researchers can read thoughts from brain scans. This is true, but misleading. Many people imagine that such thought-reading gives researchers a read out in full technicolour mentalese, something like “I would like peas for dinner”. The reality is that such experiments allow the researchers to take a guess at what you are thinking based on them having already specified a very limited set of things which you can think about (for example peas or chips, and no other options).
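A toy decoder, with entirely made-up data, shows why. This isn’t any published method, just a nearest-pattern guesser, but it illustrates the constraint: a thought that isn’t on the experimenter’s menu can never be read out.

import numpy as np

rng = np.random.default_rng(0)
LABELS = ["peas", "chips"]    # the experimenter fixes the menu in advance

# Fake "brain scans": 20 training examples per label, 50 voxels each.
training = {label: rng.normal(loc=i, size=(20, 50))
            for i, label in enumerate(LABELS)}
prototypes = {label: scans.mean(axis=0) for label, scans in training.items()}

def decode(scan):
    """Return whichever trained label's average pattern is nearest.
    Thoughts outside the menu simply cannot be reported."""
    return min(prototypes, key=lambda lab: np.linalg.norm(scan - prototypes[lab]))

new_scan = rng.normal(loc=1, size=50)   # the subject was thinking about chips
print(decode(new_scan))                 # output can only ever be 'peas' or 'chips'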

Real progress on this front will come as we identify with more and more precision the brain areas that underlie complex behaviours. Armed with this knowledge, brain interface researchers will be able to use simple signals to generate complex responses by targeting specific circuits.

Read more

The original research report: Direct Brain-to-Brain Communication in Humans: A Pilot Study

Previously at The Conversation, another column on TMS: Does brain stimulation make you better at maths?

Thinking about brain interfaces is helped by a bit of information theory. To read a bit more about that field I recommend James Gleick’s book The Information: A History, a Theory, a Flood

This article was originally published at The Conversation. Read the original article.

Drug addiction: The complex truth

We’re told studies have proven that drugs like heroin and cocaine instantly hook a user. But it isn’t that simple – little-known experiments from over 30 years ago tell a very different tale.

Drugs are scary. The words “heroin” and “cocaine” make people flinch. It’s not just the associations with crime and harmful health effects, but also the notion that these substances can undermine the identities of those who take them. One try, we’re told, is enough to get us hooked. This, it would seem, is confirmed by animal experiments.

Many studies have shown rats and monkeys will neglect food and drink in favour of pressing levers to obtain morphine (the opiate from which heroin is derived). With the right experimental set-up, some rats will self-administer drugs until they die. At first glance it looks like a simple case of the laboratory animals losing control of their actions to the drugs they need. It’s easy to see in this a frightening scientific fable about the power of these drugs to rob us of our free will.

But there is more to the real scientific story, even if it isn’t widely talked about. The results of a set of little-known experiments carried out more than 30 years ago paint a very different picture, and illustrate how easy it is for neuroscience to be twisted to pander to popular anxieties. The vital missing evidence is a series of studies carried out in the late 1970s in what has become known as “Rat Park”. Canadian psychologist Bruce Alexander, at Simon Fraser University in British Columbia, suspected that the preference of rats for morphine over water in previous experiments might be affected by their housing conditions.

To test his hypothesis he built an enclosure measuring 95 square feet (8.8 square metres) for a colony of rats of both sexes. Not only was this around 200 times the area of a standard rodent cage, but Rat Park had decorated walls, running wheels and nesting areas. Inhabitants had access to a plentiful supply of food, and, perhaps most importantly, the rats lived in it together.

Rats are smart, social creatures. Living in a small cage on their own is a form of sensory deprivation. Rat Park was what neuroscientists would call an enriched environment, or – if you prefer to look at it this way – a non-deprived one. In Alexander’s tests, rats reared in cages drank as much as 20 times more morphine than those brought up in Rat Park. 

Inhabitants of Rat Park could be induced to drink more of the morphine if it was mixed with sugar, but a control experiment suggested that this was because they liked the sugar, rather than because the sugar allowed them to ignore the bitter taste of the morphine long enough to get addicted. When naloxone, which blocks the effects of morphine, was added to the morphine-sugar mix, the rats’ consumption didn’t drop. In fact, their consumption increased, suggesting they were actively trying to avoid the effects of morphine, but would put up with it in order to get sugar.

‘Woefully incomplete’

The results are catastrophic for the simplistic idea that one use of a drug inevitably hooks the user by rewiring their brain. When Alexander’s rats were given something better to do than sit in a bare cage they turned their noses up at morphine because they preferred playing with their friends and exploring their surroundings to getting high.

Further support for his emphasis on living conditions came from another set of tests his team carried out, in which rats brought up in ordinary cages were forced to consume morphine for 57 days in a row. If anything could create the conditions for chemically rewiring their brains, this should be it. But once these rats were moved to Rat Park they chose water over morphine when given the choice, although they did exhibit some minor withdrawal symptoms.

You can read more about Rat Park in the original scientific report. A good summary is in this comic by Stuart McMillen. The results aren’t widely cited in the scientific literature, and the studies were discontinued after a few years because they couldn’t attract funding. There have been criticisms of the study’s design, and the few attempts made to replicate the results have produced mixed findings.

Nonetheless the research does demonstrate that the standard “exposure model” of addiction is woefully incomplete. It takes far more than the simple experience of a drug – even drugs as powerful as cocaine and heroin – to make you an addict. The alternatives you have to drug use, which are shaped by your social and physical environment, play an important role alongside the brute pleasure delivered by the chemical assault on your reward circuits.

For a psychologist like me it suggests that even addictions can be thought of using the same theories we use to think about other choices; there isn’t a special exception for drug-related ones. Rat Park also suggests that when stories about the effects of drugs on the brain are promoted to the neglect of any discussion of the personal and social contexts of addiction, science is servicing our collective anxieties rather than informing us.

This is my BBC Future article from Tuesday. The original is here. The Foddy article I link to in the last paragraph is great: read that. As is Stuart’s comic.