The smart unconscious

We feel that we are in control when our brains figure out puzzles or read words, says Tom Stafford, but a new experiment shows just how much work is going on underneath the surface of our conscious minds.

It is a common misconception that we know our own minds. As I move around the world, walking and talking, I experience myself thinking thoughts. “What shall I have for lunch?”, I ask myself. Or I think, “I wonder why she did that?” and try and figure it out. It is natural to assume that this experience of myself is a complete report of my mind. It is natural, but wrong.

There’s an under-mind, all psychologists agree – an unconscious which does a lot of the heavy lifting in the process of thinking. If I ask myself what is the capital of France the answer just comes to mind – Paris! If I decide to wiggle my fingers, they move back and forth in a complex pattern that I didn’t consciously prepare, but which was delivered for my use by the unconscious.

The big debate in psychology is exactly what is done by the unconscious, and what requires conscious thought. Or to use the title of a notable paper on the topic, ‘Is the unconscious smart or dumb?’ One popular view is that the unconscious can prepare simple stimulus-response actions, deliver basic facts, recognise objects and carry out practised movements. Complex cognition involving planning, logical reasoning and combining ideas, on the other hand, requires conscious thought.

A recent experiment by a team from Israel scores points against this position. Ran Hassin and colleagues used a neat visual trick called Continuous Flash Suppression to put information into participants’ minds without them becoming consciously aware of it. It might sound painful, but it’s actually quite simple. The technique takes advantage of the fact that we have two eyes and our brain usually attempts to fuse the two resulting images into a single coherent view of the world. Continuous Flash Suppression uses light-bending glasses to show people different images in each eye. One eye gets a rapid succession of brightly coloured squares which are so distracting that when genuine information is presented to the other eye, the person is not immediately consciously aware of it. In fact, it can take several seconds for something that is in theory perfectly visible to reach awareness (unless you close one eye to cut out the flashing squares, then you can see the ‘suppressed’ image immediately).

Hassin’s key experiment involved presenting arithmetic questions unconsciously. The questions would be things like “9 – 3 – 4 = ” and they would be followed by the presentation, fully visible, of a target number that the participants were asked to read aloud as quickly as possible. The target number could either be the right answer to the arithmetic question (so, in this case, “2”) or a wrong answer (for instance, “1”). The amazing result is that participants were significantly quicker to read the target number if it was the right answer rather than a wrong one. This shows that the equation had been processed and solved by their minds – even though they had no conscious awareness of it – meaning they were primed to read the right answer quicker than the wrong one.

The result suggests that the unconscious mind has more sophisticated capacities than many have thought. Unlike other tests of non-conscious processing, this wasn’t an automatic response to a stimulus – it required a precise answer following the rules of arithmetic, which you might have assumed would only come with deliberation. The report calls the technique used “a game changer in the study of the unconscious”, arguing that “unconscious processes can perform every fundamental, basic-level function that conscious processes can perform”.

These are strong claims, and the authors acknowledge that there is much work to do as we start to explore the power and reach of our unconscious minds. Like icebergs, most of the operation of our minds remains out of sight. Experiments like this give a glimpse below the surface.

This is my BBC Future column from last week. The original is here

Anti-vax: wrong but not irrational


Since the uptick in outbreaks of measles in the US, those arguing for the right not to vaccinate their children have come under increasing scrutiny. There is no journal of “anti-vax psychology” reporting research on those who advocate what seems like a controversial, “anti-science” and dangerous position, but if there were, we could take a good guess at what the research reported therein would say.

Look at other groups who hold beliefs at odds with conventional scientific thought. Climate sceptics, for example. You might think that climate sceptics would be likely to be more ignorant of science than those who accept the consensus that humans are causing a global increase in temperatures. But you’d be wrong. The individuals with the highest degree of scientific literacy are not those most concerned about climate change; they are the group most divided over the issue. The most scientifically literate are also some of the strongest climate sceptics.

A driver of this is a process psychologists have called “biased assimilation” – we all regard new information in the light of what we already believe. In line with this, one study showed that climate sceptics rated newspaper editorials supporting the reality of climate change as less persuasive and less reliable than non-sceptics did. Some studies have even shown that people can react to information which is meant to persuade them out of their beliefs by becoming more hardline – the exact opposite of the persuasive intent.

For topics such as climate change or vaccine safety, this can mean that a little scientific education gives you more ways of disagreeing with new information that doesn’t fit your existing beliefs. So we shouldn’t expect anti-vaxxers to be easily converted by throwing scientific facts about vaccination at them. They are likely to have their own interpretation of the facts.

High trust, low expertise

Some of my own research has looked at who the public trusted to inform them about the risks from pollution. Our finding was that how expert a particular group of people was perceived to be – government, scientists or journalists, say – was a poor predictor of how much they were trusted on the issue. Instead, what was critical was how much they were perceived to have the public’s interests at heart. Groups of people who were perceived to want to act in line with our respondents’ best interests – such as friends and family – were highly trusted, even if their expertise on the issue of pollution was judged as poor.

By implication, we might expect anti-vaxxers to have friends who are also anti-vaxxers (and so reinforce their mistaken beliefs) and to correspondingly have a low belief that pro-vaccine messengers such as scientists, government agencies and journalists have their best interests at heart. The corollary is that no amount of information from these sources – and no matter how persuasive to you and me – will convert anti-vaxxers who have different beliefs about how trustworthy the medical establishment is.

Interestingly, research done by Brendan Nyhan has shown many anti-vaxxers are willing to drop mistaken beliefs about vaccines, but as they do so they also harden in their intentions not to get their kids vaccinated. This shows that the scientific beliefs of people who oppose vaccinations are only part of the issue – facts alone, even if believed, aren’t enough to change people’s views.

Reinforced memories

We know from research on persuasion that mistaken beliefs aren’t easily debunked. Not only is the biased assimilation effect at work here but also the fragility of memory – attempts at debunking myths can serve to reinforce the memory of the myth while the debunking gets forgotten.

The vaccination issue provides a sobering example of this. A single discredited study from 1998 claimed a link between autism and the MMR jab, fuelling the recent distrust of vaccines. No matter how many times we repeat that “the MMR vaccine doesn’t cause autism”, the link between the two is reinforced in people’s perceptions. To avoid reinforcing a myth, you need to provide a plausible alternative – the obvious one here is to replace the negative message “MMR vaccine doesn’t cause autism”, with a positive one. Perhaps “the MMR vaccine protects your child from dangerous diseases”.

Rational selfishness

There are other psychological factors at play in the decisions taken by individual parents not to vaccinate their children. One is the rational selfishness of avoiding risk, or even the discomfort of a momentary jab, by gambling that the herd immunity of everyone else will be enough to protect your child.

Another is our tendency to underplay rare events in our calculations about risk – ironically, the very success of vaccination programmes makes the diseases they protect us against rare, meaning that most of us don’t have direct experience of the negative consequences of not vaccinating. Finally, we know that people feel differently about errors of action compared to errors of inaction, even if the consequences are the same.

Many who seek to persuade anti-vaxxers view the issue as a simple one of scientific education. Anti-vaxxers have mistaken the basic facts, the argument goes, so they need to be corrected. This is likely to be ineffective. Anti-vaxxers may be wrong, but don’t call them irrational.

Rather than lacking scientific facts, they lack a trust in the establishments which produce and disseminate science. If you meet an anti-vaxxer, you might have more luck persuading them by trying to explain how you think science works and why you’ve put your trust in what you’ve been told, rather than dismissing their beliefs as irrational.

The Conversation

This article was originally published on The Conversation.
Read the original article.

What gambling monkeys teach us about human rationality

We often make stupid choices when gambling, says Tom Stafford, but if you look at how monkeys act in the same situation, maybe there’s good reason.

When we gamble, something odd and seemingly irrational happens.

It’s called the ‘hot hand’ fallacy – a belief that your luck comes in streaks – and it can lose you a lot of money. Win on roulette and your chances of winning again aren’t more or less – they stay exactly the same. But something in human psychology resists this fact, and people often place money on the premise that streaks of luck will continue – the so-called ‘hot hand’.

The opposite superstition is to bet that a streak has to end, in the false belief that independent events of chance must somehow even out. This is known as the gambler’s fallacy, and achieved notoriety at the Casino de Monte-Carlo on 18 August 1913. The ball fell on black 26 times in a row, and as the streak lengthened gamblers lost millions betting on red, believing that the chances changed with the length of the run of blacks.
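To put numbers on that famous night: here’s a quick back-of-the-envelope calculation in Python (my own illustration – the figures assume a standard European wheel with 18 black pockets out of 37). Seen in advance, a run of 26 blacks is astronomically unlikely; seen mid-streak, the next spin’s odds are completely unchanged.

```python
from fractions import Fraction

# European roulette wheel: 18 black pockets out of 37 (18 red, 1 zero)
p_black = Fraction(18, 37)

# Chance, before any spin, of seeing 26 blacks in a row
p_streak = p_black ** 26
print(float(p_streak))  # about 7.3e-09, roughly 1 in 137 million

# But given 26 blacks have already happened, the wheel has no memory:
# the chance of black on the next spin is exactly what it always was
p_next_black = p_black
print(float(p_next_black))  # about 0.486
```

The gamblers betting on red were, in effect, confusing the first probability with the second.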

Why do people act this way time and time again? We can discover intriguing insights, it seems, by recruiting monkeys and getting them to gamble too. If these animals make dumb choices like us, perhaps it could tell us more about ourselves.

First though, let’s look at what makes some games particularly likely to trigger these effects. Many results in games are based on a skill element, so it makes reasonable sense to bet, for instance, that a top striker like Lionel Messi is more likely to score a goal than a low-scoring defender.

Yet plenty of games contain randomness. For truly random events like roulette or the lottery, there is no force which makes clumps more or less likely to continue. Consider coin tosses: if you have tossed 10 heads in a row your chance of throwing another heads is still 50:50 (although, of course, at the point before you’ve thrown any, the overall odds of throwing 10 in a row are still minuscule).
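A quick simulation makes the independence point concrete (a toy sketch of my own, not part of any study): even immediately after a run of heads, the next toss of a fair coin still comes up heads about half the time.

```python
import random

random.seed(42)
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Collect the outcome of every toss that immediately follows three heads
after_streak = [flips[i] for i in range(3, len(flips))
                if flips[i - 3] and flips[i - 2] and flips[i - 1]]

rate = sum(after_streak) / len(after_streak)
print(rate)  # very close to 0.5 – the streak predicts nothing

# The a priori chance of any particular run of 10 heads is tiny, though
print(0.5 ** 10)  # 0.0009765625, i.e. 1 in 1024
```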

The hot hand and gambler’s fallacies both show that we tend to have an unreasonable faith in the non-randomness of the universe, as if we can’t quite believe that those coins (or roulette wheels, or playing cards) really are due to the same chances on each flip, spin or deal.

It’s a result that sometimes makes us sneer at the irrationality of human psychology. But that conclusion may need revising.

Cross-species gambling

An experiment reported by Tommy Blanchard of the University of Rochester in New York State, and colleagues, shows that monkeys playing a gambling game are swayed by the same hot hand bias as humans. Their experiments involved three monkeys controlling a computer display with their eye-movements – indicating their choices by shifting their gaze left or right. In the experiment they were given two options, only one of which delivered a reward. When the correct option was random – the same 50:50 chance as a coin flip – the monkeys still had a tendency to select the previously winning option, as if luck should continue, clumping together in streaks.

The reason the result is so interesting is that monkeys aren’t taught probability theory at school. They never learn theories of randomness, or pick up complex ideas about chance events. The monkeys’ choices must be based on some more primitive instincts about how the world works – they can’t be displaying irrational beliefs about probability, because they cannot have false beliefs, in the way humans can, about how luck works. Yet they show the same bias.

What’s going on, the researchers argue, is that it’s usually beneficial to behave in this manner. In most of life, chains of success or failure are linked for good reason – some days you really do have your eye on your tennis serve, or everything goes wrong with your car on the same day because the mechanics of the parts are connected. In these cases, the events reflect an underlying reality, and one you can take advantage of to predict what happens next. An example that works well for the monkeys is food. Finding high-value morsels like ripe fruit is a chance event, but also one where each instance isn’t independent. If you find one fruit on a tree the chances are that you’ll find more.
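The logic of this argument can be sketched in a toy simulation (my own illustration, not the researchers’ model). A “win-stay, lose-shift” player – the hot hand strategy in its simplest form – gains nothing when the rewarded option is truly random, but does very well when rewards clump together:

```python
import random

random.seed(0)

def rewarded_sides(n, p_stay):
    """Which of two options pays off on each trial.
    p_stay is the chance the winning side repeats from the last trial:
    0.5 means truly independent trials; higher means 'clumpy' rewards."""
    side = random.choice([0, 1])
    sides = []
    for _ in range(n):
        if random.random() >= p_stay:
            side = 1 - side
        sides.append(side)
    return sides

def win_stay_score(sides):
    """Fraction of wins for a win-stay, lose-shift player."""
    choice = random.choice([0, 1])
    wins = 0
    for correct in sides:
        if choice == correct:
            wins += 1            # won: stay with the same option
        else:
            choice = 1 - choice  # lost: shift to the other option
    return wins / len(sides)

n = 200_000
iid_score = win_stay_score(rewarded_sides(n, 0.5))     # coin-flip world
clumpy_score = win_stay_score(rewarded_sides(n, 0.9))  # fruit-tree world
print(iid_score)     # about 0.5: the strategy buys nothing
print(clumpy_score)  # well above 0.5: the strategy pays off handsomely
```

In a world where finding one fruit really does predict finding another, the hot hand instinct is not a bug but a feature.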

The wider lesson for students of human nature is that we shouldn’t be quick to call behaviours irrational. Sure, belief in the hot hand might make you bet wrong on a series of coin flips, or worse, lose a pot of money. But it may be that across evolutionary time, thinking that luck comes in clumps turned out to be useful more often than it was harmful.

This is my BBC Future article from last week. The original is here

Is public opinion rational?

There is no shortage of misconceptions. The British public believes that for every £100 spent on benefits, £24 is claimed fraudulently (the actual figure is £0.70). We think that 31% of the population are immigrants (actually it’s 13%). One recent headline summed it up: “British public wrong about nearly everything”, and I’d bet good money that it isn’t just the British who are exceptionally misinformed.

This looks like a problem for democracy, which supposes a rational and informed public opinion. But perhaps it isn’t, at least according to a body of political science research neatly summarised by Will Jennings in his chapter of a new book “Sex, lies & the ballot box: 50 things you need to know about British elections“. The book is a collection of accessible essays by British political scientists, and has a far wider scope than the book subtitle implies: there are important morals here for anyone interested in collective human behaviour, not just those interested in elections.

Will’s chapter discusses the “public opinion as thermostat” theory. This, briefly, is that the public can be misinformed about absolute statistics, but we can still change our strength of feeling in an appropriate way. So, for example, we may be misled about the absolute unemployment rate, but can still discern whether unemployment is getting better or worse. There’s evidence to support this view, and the chapter includes this striking graph (reproduced with permission), showing the percentage of people saying “unemployment” is the most important issue facing the country against the actual unemployment rate. As you can see, public opinion tracks reality with remarkable accuracy:

Unemployment rate (source: ONS) and share of voters rating unemployment as the most important issue facing the country (source: Ipsos MORI), from Will Jennings’ chapter in “Sex, lies & the ballot box” (p.35)

The topic of how a biased and misinformed public can make rational collective decisions is a fascinating one, which has received attention from disciplines ranging from psychology to political science. I’m looking forward to reading the rest of the book to get more evidence-based insights into how our psychological biases play out when decision making is at the collective level of elections.

Full disclosure: Will is a friend of mine and sent me a free copy of the book.

Link: “Sex, lies & the ballot box” (Edited by Philip Cowley & Robert Ford).

Link: Guardian data blog Five things we can learn from Sex, Lies and the Ballot Box

Implicit racism in academia

Subtle racism is prevalent in US and UK universities, according to a new paper commissioned by the Leadership Foundation for Higher Education and released last week, reports The Times Higher Education.

Black professors surveyed for the paper said they were treated differently than white colleagues, in the form of receiving less eye contact or fewer requests for their opinion, and that they felt excluded in meetings and experienced undermining of their work. “I have to downplay my achievements sometimes to be accepted,” said one academic, explaining that colleagues didn’t expect a black woman to be clever and articulate. Senior managers often dismissed racist incidents as conflicts of personalities, or believed them to be exaggerated, the paper found.

And all this in institutions where almost all staff would say they are not just “not racist” but where many would say they were actively committed to fighting prejudice.

This seems like a clear case of the operation of implicit biases – where there is a contradiction between people’s egalitarian beliefs and their racist actions. Implicit biases are an industry in psychology, where tools such as the implicit association test (IAT) are used to measure them. The IAT is a fairly typical cognitive psychology-type study: individuals sit in front of a computer and the speed of their reactions to stimuli is measured (the stimuli are things like faces of people with different ethnicities, which is how we derive a measure of implicit prejudice).
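To give a flavour of what the IAT spits out: below is a simplified sketch of its scoring logic, with made-up reaction times and a stripped-down version of the standard D measure (the real scoring algorithm also handles error trials and block-level variability):

```python
from statistics import mean, stdev

# Hypothetical reaction times in milliseconds for one participant:
# 'congruent' blocks pair categories in line with a stereotype,
# 'incongruent' blocks pair them against it.
congruent = [620, 650, 700, 590, 640, 610, 660, 630]
incongruent = [780, 820, 760, 850, 790, 810, 740, 830]

# Simplified D score: mean latency difference scaled by the
# pooled standard deviation of all trials
pooled_sd = stdev(congruent + incongruent)
d_score = (mean(incongruent) - mean(congruent)) / pooled_sd
print(round(d_score, 2))  # positive: slower on incongruent pairings
```

The point of scaling by variability, rather than just reporting the raw millisecond difference, is to make scores comparable across people who respond at different overall speeds.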

The LFHE paper is a nice opportunity to connect this lab measure with the reality of implicit bias ‘in the wild’. In particular, along with some colleagues, I have been interested in exactly what an implicit bias is, psychologically.

Commonly, implicit biases are described as if they are unconscious or somehow outside of the awareness of those holding them. Unfortunately, this hasn’t been shown to be the case (in fact the opposite may be true – there’s some evidence that people can predict their IAT scores fairly accurately). Worse, the very idea of being unaware of a bias is badly specified. Does ‘unaware’ mean you aren’t aware of your racist feelings? Of your racist behaviour? Or that the feelings, in this case, have produced the behaviour?

The racist behaviours reported in the paper – avoiding eye-contact, assuming that discrimination is due to personalities and not race, etc – could all work at any or all of these levels of awareness. Although the behaviours are subtle, and contradict people’s expressed, anti-racist, opinions, the white academics could still be completely aware. They could know that black academics make them feel awkward or argumentative, and know that this is due to their race. Or they could be completely unaware. They could know that they don’t trust the opinions of certain academics, for example, but not realise that race is a factor in why they feel this way.

Just because the behaviour is subtle, or the psychological phenomenon is called ‘implicit’, doesn’t mean we can be certain about what people really know about it. The real value in the notion of implicit bias is that it reminds us that prejudice can exist in how we behave, not just in what we say and believe.

Full disclosure: I am funded by the Leverhulme Trust to work on a project looking at the philosophy and psychology of implicit bias. This post is cross-posted on the project blog. Run your own IAT with our open-source code: Open-IAT!

Why bad news dominates the headlines

Why are newspapers and TV broadcasts filled with disaster, corruption and incompetence? It may be because we’re drawn to depressing stories without realising, says psychologist Tom Stafford.

When you read the news, sometimes it can feel like the only things reported are terrible, depressing events. Why does the media concentrate on the bad things in life, rather than the good? And what might this depressing slant say about us, the audience?

It isn’t that these are the only things that happen. Perhaps journalists are drawn to reporting bad news because sudden disaster is more compelling than slow improvements. Or it could be that newsgatherers believe that cynical reports of corrupt politicians or unfortunate events make for simpler stories. But another strong possibility is that we, the readers or viewers, have trained journalists to focus on these things. Many people say that they would prefer good news: but is that actually true?

To explore this possibility, researchers Marc Trussler and Stuart Soroka set up an experiment, run at McGill University in Canada. They were dissatisfied with previous research on how people relate to the news – either the studies were uncontrolled (letting people browse news at home, for example, where you can’t even tell who is using the computer), or they were unrealistic (inviting them to select stories in the lab, where every participant knew their choices would be closely watched by the experimenter). So, the team decided to try a new strategy: deception.

 

Trick question

Trussler and Soroka invited participants from their university to come to the lab for “a study of eye tracking”. The volunteers were first asked to select some stories about politics to read from a news website so that a camera could make some baseline eye-tracking measures. It was important, they were told, that they actually read the articles, so the right measurements could be prepared, but it didn’t matter what they read.

After this ‘preparation’ phase, they watched a short video (the main purpose of the experiment as far as the subjects were concerned, but it was in fact just a filler task), and then they answered questions on the kind of political news they would like to read.

The results of the experiment, as well as the stories that were read most, were somewhat depressing. Participants often chose stories with a negative tone – corruption, set-backs, hypocrisy and so on – rather than neutral or positive stories. People who were more interested in current affairs and politics were particularly likely to choose the bad news.

And yet when asked, these people said they preferred good news. On average, they said that the media was too focussed on negative stories.

 

Danger reaction

The researchers present their experiment as solid evidence of a so-called “negativity bias”, psychologists’ term for our collective hunger to hear, and remember, bad news.

It isn’t just schadenfreude, the theory goes, but that we’ve evolved to react quickly to potential threats. Bad news could be a signal that we need to change what we’re doing to avoid danger.

As you’d expect from this theory, there’s some evidence that people respond quicker to negative words. In lab experiments, flash the word “cancer”, “bomb” or “war” up at someone and they can hit a button in response quicker than if that word is “baby”, “smile” or “fun” (despite these pleasant words being slightly more common). We are also able to recognise negative words faster than positive words, and even tell that a word is going to be unpleasant before we can tell exactly what the word is going to be.

So is our vigilance for threats the only way to explain our predilection for bad news? Perhaps not.

There’s another interpretation that Trussler and Soroka put on their evidence: we pay attention to bad news, because on the whole, we think the world is rosier than it actually is. When it comes to our own lives, most of us believe we’re better than average, and, like the cliché says, we expect things to be all right in the end. This pleasant view of the world makes bad news all the more surprising and salient. It is only against a light background that the dark spots are highlighted.

So our attraction to bad news may be more complex than just journalistic cynicism or a hunger springing from the darkness within.

And that, on another bad news day, gives me a little bit of hope for humanity.

Are women and men forever destined to think differently?

By Tom Stafford, University of Sheffield

The headlines

The Australian: Male and female brains still unequal

The International Institute for Applied Systems Analysis: Gender disparities in cognition will not diminish

The Economist: A variation in the cognitive abilities of the two sexes may be more about social development than gender stereotypes

The story

Everybody has an opinion on men, women and the difference (or not) between them. Now a new study has used a massive and long-running European survey to investigate how differences in cognitive ability are changing. This is super smart, because it offers us an escape from arguing about whether men and women are different in how they think, allowing us some insight into how any such differences might develop.

What they actually did

Researchers led by Daniela Weber at Austria’s International Institute for Applied Systems Analysis analysed data collected as part of the European Survey of Health, Ageing and Retirement. This study drew on data from approximately 31,000 adults, men and women, all aged over 50. As well as answering demographic questions, the survey participants took short quizzes which tested their memory, numeracy and verbal fluency (this last was a classic test which asks people to name as many animals as they can in 60 seconds). Alongside each test score, we have the year the participant was born, as well as measures of gender equality and economic development for the country where they grew up.

What they found

The results show that as a country develops economically, the differences in cognitive ability between men and women change. But the pattern isn’t straightforward. Differences in verbal fluency disappear (so that an advantage on this test for men born in the 1920s over women is not found for those born in the 1950s). Differences in numeracy diminish (so the male advantage is less) and differences in memory actually increase (so that a female advantage is accentuated).

Further analysis looked at how these differences in cognitive performance related to the amount of education men and women received. In all regions women tended to have fewer years of education, on average, than men. But, importantly, the size of this difference varied. This allowed the researchers to gauge how differences in education affected cognitive performance.

For all three abilities tested, there was a relationship between the size of the differences in the amount of education and the size of the difference in cognitive performance: fewer years of education for women was associated with worse scores for women, as you’d expect.

What varied for the three abilities was the researchers’ prediction for the situation where men and women spent an equal amount of time in education: for memory this scenario was associated with a distinct female advantage, for numeracy a male advantage, and for verbal fluency, no difference.

What this means

The thing that dogs studies on gender differences in cognition is the question of why these differences exist. People have such strong expectations that they often leap to the assumption that any observed difference must reflect something fundamental about men vs women. Here, consider the example of the Australian newspaper which headlined its take on this story as telling us something about “male and female brains”, the implication being that the inequality was a fundamental, biological difference. In fact, research often shows that gender differences in cognitive performance are small, and even then we don’t know why these differences exist.

The great thing about this study is that by looking at how gender differences evolve over time it promises insight into what drives those differences in the first place. The fact that the female memory advantage increases as women are allowed more access to education is, on the face of it, suggestive evidence that at least one cognitive difference between men and women may be unleashed by more equal societies, rather than removed by them.

Tom’s take

The most important thing to take from this research is – as the authors report – increasing gender equality disproportionately benefits women. This is because – no surprise! – gender inequality disproportionately disadvantages women. Even in the area of cognitive performance, this historical denial of opportunities, health and education to women means, at a population level, they have more potential to increase their scores on these tests.

Along with other research on things like IQ, this study found systematic improvements in cognitive performance across time for both men and women – as everyone’s opportunities and health improve, so does their cognitive function.

But the provocative suggestion of this study is that as societies develop we won’t necessarily see all gender differences go away. Some cognitive differences may actually increase when women are at less of a disadvantage.

You don’t leap to conclusions based on one study, but this is a neat contribution. One caveat is that even though indices such as “years in education” show diminished gender inequality in Europe, you’d be a fool to think that societies which educated men and women for an equal number of years treated them both equally and put equal expectations on them.

Even if you thought this was true for 2014, you wouldn’t think this was true for European societies of the 1950s (when the youngest of these study participants were growing up). There could be very strong societal influences on cognitive ability – such as expecting women to be good with words and bad with numbers – that simply aren’t captured by the data analysed here.

Personally, I find it interesting to observe how keen people are to seize on such evidence that “essential” gender differences definitely do exist (despite the known confounds of living in a sexist society). My preferred strategy would be to hold judgement and focus on remaking the definitely sexist society. For certain, we’ll only get the truth when we have an account of how cognitive abilities develop within both biological and social contexts. Studies like this point the way, and suggest that whatever the truth is, it should have some surprises for everyone.

Read more

The original research: The changing face of cognitive gender differences in Europe

My previous column on gender differences: Are men better wired to read maps or is it a tired cliché?

Cordelia Fine’s book, Delusions of gender: how our minds, society, and neuro-sexism create difference

The Conversation

This article was originally published on The Conversation.
Read the original article.

Do we really hate thinking so much we’d electrocute ourselves rather than do it?

By Tom Stafford, University of Sheffield

The headlines

The Guardian: Shocking but true: students prefer jolt of pain than being made to sit and think

Nature: We dislike being alone with our thoughts

Washington Post: Most men would rather shock themselves than be alone with their thoughts

 

The story

Quiet contemplation is so awful that when deprived of the distractions of noise, crowds or smart phones, a bunch of students would rather give themselves electric shocks than sit and think.

 

What they actually did

Psychologists from the universities of Virginia and Harvard in the US carried out a series of 11 studies in which participants – including students and non-students – were left in an unadorned room for six to 15 minutes and asked to “spend time entertaining themselves with their thoughts.” Both groups, and men and women equally, were unable to enjoy this task. Most said they found it difficult to concentrate and that their minds wandered.

In one of the studies, participants were given the option to give themselves an electric shock, for no given reason or reward. Many did, including the majority of male participants, despite the fact that the vast majority of participants had previously rated the shocks as unpleasant and said they would pay to avoid them.

 

How plausible is this?

This is a clever, provocative piece of research. The results are almost certainly reliable; the authors, some of whom are extremely distinguished, found the same basic effect across all 11 studies – namely, that being asked to sit and think wasn’t enjoyable. The data from the studies is also freely available, so there’s no chance of statistical jiggery-pokery. This is a real effect. The questions, then, are over what exactly the finding means.

 

Tom’s take

Contrary to what some reporters have implied, this result isn’t just about students – non-students also found being made to sit and think aversive, and there were no differences with age. And it isn’t just about men – women found the experience just as unpleasant. The key result is that being made to sit and think is unpleasant, so let’s look at this first before turning to the shocks.

The results fit with research on sensory deprivation from 50 years ago. Paradoxically, when there are no distractions people find it hard to concentrate. It seems that for most of us, most of the time, our minds need stimulation, interaction with the environment, or at least a task, to function enjoyably. Thinking is an active process which involves the world – a far cry from some ideals of “pure thought”.

What the result certainly doesn’t mean, despite the interpretation given by some people – including one author of the study – is that people don’t like thinking. Rather, it’s fair to say that people don’t like being forced to do nothing but think.

It’s possible that there is a White Bear Effect here – also known as the ironic process theory. Famously, if you’re told to think of anything except a white bear, you can’t help but think about a white bear. If you imagine the circumstances of these studies, participants were told they had to sit in their chairs and just think. No singing, no exploring, no exercises. Wouldn’t that make you spend your time (unpleasantly) ruminating on what you couldn’t do?

In this context, are the shocks really so surprising? The shocks were very mild. The participants rated them as unpleasant when they were instructed to shock themselves, but we all know that there’s a big difference between having something done to you (or being told to do something) and choosing to do it yourself.

Although many participants chose to shock themselves, I wouldn’t say they were avoiding thinking – rather, they were thinking about what it would be like to get another shock. One participant shocked himself 190 times. Perhaps he was exploring how he could learn to cope with the discomfort. Curiosity and exploration are hallmarks of thinking. It is only to the very limited, internally directed, stimulus-free kind of thinking that we can apply the conclusion that it isn’t particularly enjoyable.

 

Read more

The original paper: Just think: The challenges of the disengaged mind.

You can see the data over at the Open Science Framework.

Daniel Wegner’s brilliant book on the White Bear problem.

The Conversation

This article was originally published on The Conversation.
Read the original article.

Brains in their feat

Footballers’ skills seem light years from our own. But, Tom Stafford argues, the jaw-dropping talents on the World Cup pitch have more in common with everyday life than you might think.

The first week of the 2014 World Cup has already given us a clutch of classic moments: Robin Van Persie’s perfect header to open the Dutch onslaught against the Spanish; Australian Tim Cahill’s breathtaking volley to equalise against Holland; and Mexican keeper Guillermo Ochoa defying an increasingly desperate Brazilian attack.

We can’t help but be dazzled by the skills on display. Whether it is a header lobbed over an open-mouthed goalie, or a keeper’s last-second leap to save the goal, it can seem as if the footballers have access to talents that are not just beyond description, but beyond conscious comprehension. But the players sprinting, diving and straining on Brazil’s football pitches have a lot more in common with everyday intelligence than you might think.

We often talk about astonishing athletic feats as if they are something completely different from everyday thought. When we say a footballer acts on instinct, out of habit or due to his training, we distance what they do from what we hear echoing within our own heads.

The idea of “muscle memory” encourages this – allowing us to cordon off feats of motor skill as a special kind of psychological phenomenon, something stored, like magic potion, in our muscles. But the truth, of course, is that so called muscle memories are stored in our brains, just like every other kind of memory. What is more, these examples of great skill are not so different from ordinary thought.

If you speak to world-class athletes, such as World Cup footballers, about what they do, they reveal that a lot of conscious reasoning goes into those moments of sublime skill. Here’s England’s Wayne Rooney, in 2012, describing what it feels like as a cross comes into the penalty box: “You’re asking yourself six questions in a split second. Maybe you’ve got time to bring it down on the chest and shoot, or you have to head it first-time. If the defender is there, you’ve obviously got to try and hit it first-time. If he’s farther back, you’ve got space to take a touch. You get the decision made. Then it’s obviously about the execution.”

All this in half a second! Rooney is obviously thinking more, not less, during these most crucial moments.

This is not an isolated example. Dennis Bergkamp delighted Dutch fans by scoring a beautiful winning goal from a long pass in the 1998 World Cup quarter final against Argentina (and if you watch a clip on YouTube, make sure it’s the one with the ecstatic commentary by Jack van Gelder). In a subsequent interview Bergkamp describes in minute detail all the factors leading up to the goal, from the moment he made eye contact with the defender who was about to pass the ball, to his calculations about how to control the ball. He even lets slip that part of his brain is keeping track of the wind conditions. Just as with Rooney, this isn’t just a moment of unconscious instinct, but of instinct combined with a whirlwind of conscious reasoning. And it all comes together.

Studies of the way the brain embeds new skills, until the movements become automatic, may help make sense of this picture. We know that athletes like those performing in the World Cup train with many years of deliberate, mindful practice. As they go through their drills, dedicated brain networks develop, allowing the movements to be deployed with less effort and more control. As well as the brain networks involved becoming more refined, the areas of the brain most active in controlling a movement change with increased skill – as we practice, areas deeper within the brain reorganise to take on more of the work, leaving the cortex, including areas associated with planning and reasoning, free to take on new tasks.

But this doesn’t mean we think less when we’re highly skilled. On the contrary, this process, called automatisation, means that we think differently. Bergkamp doesn’t have to think about his foot when he wants to control a ball, so he’s free to think about the wind, or the defender, or when exactly he wants to control the ball. For highly practiced movements we have to think less about controlling every action, but what we do is still ultimately in the service of our overall targets (like scoring a goal in the case of football). In line with this, and contrary to the idea of skills as robotic reflexes, experiments show that more flexibility develops alongside increased automaticity.

Maybe we like to think footballers are stupid because we want to feel good about ourselves, and many footballers aren’t as articulate as some of the eggheads we traditionally associate with intelligence (and aren’t trained in being articulate), but all the evidence suggests that the feats we see in the World Cup take an immense amount of thought.

Intelligence involves using conscious deliberation at the right level to optimally control your actions. Driving a car is easier because you don’t have to think about the physics of the combustion engine, and it’s also easier because you no longer have to think about the movements required to change gear or turn on the indicators. But just because driving a car relies on automatic skills like these, doesn’t mean that you’re mindless when driving a car. The better drivers, just like the better footballers, are making more choices each time they show off their talents, not fewer.

So footballers’ immense skills aren’t that different from many everyday things we do, like walking, talking or driving a car. We’ve practiced these things so much we don’t have to think about how we’re doing them. We may not even pay much attention to what we’re doing, or have much of a memory for it (ever reached the end of a journey and realised you don’t recall a single thing about the trip?), but that doesn’t mean that we aren’t thinking, or couldn’t. In fact, because we have practiced these skills we can deploy them at the same time as other things (walking and chewing gum, talking while tying our shoe laces, etc). This doesn’t diminish their mystery, but it does align it with the central mystery of psychology – how we learn to do anything.

So while you may be unlikely to find yourself in the boots of Bergkamp and Rooney, preparing to drill one past a sprawling keeper, you can at least console yourself with the thought that you’re showing the skills of a World Cup legend every time you get behind the wheel of your car.

A bonus BBC Future column from last week. Here’s the original.

How often do men really think about sex?

Every seven seconds? Probably not. But rather than wonder about whether this is true, Tom Stafford asks how on earth you can actually prove it or not.

We’ve all been told that men think about you-know-what far too often – every seven seconds, by some accounts. Most of us have entertained this idea for long enough to be sceptical. However, rather than merely wonder about whether this is true, stop for a moment to consider how you could – or could not – prove it.

If we believe the stats, thinking about sex every seven seconds adds up to about 514 times an hour, or approximately 7,200 times during each waking day. Is that a lot? It sounds like a big number to me; I’d imagine it’s bigger than the number of thoughts I have about anything in a day. So, here’s an interesting question: how is it possible to count the number of my thoughts, or anyone else’s (sexual or otherwise), over the course of a day?
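Those headline figures are easy to sanity-check. A minimal sketch, where the 14-hour waking day is my assumption, not the column’s:

```python
# Back-of-envelope check of the "every seven seconds" claim.
# The 14-hour waking day is an assumption of mine, not the column's.
SECONDS_PER_THOUGHT = 7
WAKING_HOURS = 14

per_hour = 3600 // SECONDS_PER_THOUGHT  # one thought every 7 seconds
per_day = per_hour * WAKING_HOURS

print(per_hour)  # 514 times an hour
print(per_day)   # 7196 – roughly the "7,200 times a day" in the text
```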

The scientific attempt to measure thoughts is known to psychologists as “experience sampling“. It involves interrupting people as they go about their daily lives and asking them to record the thoughts they are having right at that moment, in that place.

Terri Fisher and her research team at Ohio State University did this using ‘clickers’. They gave these to 283 college students, divided into three groups, and asked them to press and record each time they thought about sex, or food, or sleep.

Using this method they found that the average man in their study had 19 thoughts about sex a day. This was more than the women in their study – who had about 10 thoughts a day. However, the men also had more thoughts about food and sleep, suggesting perhaps that men are more prone to indulgent impulses in general. Or they are more likely to decide to count any vague feeling as a thought. Or some combination of both.

The interesting thing about the study was the large variation in number of thoughts. Some people said they thought about sex only once per day, whereas the top respondent recorded 388 clicks, which is a sexual thought about every two minutes.

However, the big confounding factor with this study is “ironic processes”, more commonly known as the “white bear problem”. If you want to have cruel fun with a child, tell them to put their hand in the air and only put it down when they’ve stopped thinking about a white bear. Once you start thinking about something, trying to forget it just brings it back to mind.

These are exactly the circumstances the participants in Fisher’s study found themselves in. They were given a clicker by the researchers and asked to record when they thought about sex (or food or sleep). Imagine them walking away from the psychology department, holding the clicker in their hand, trying hard not to think about sex all the time, yet also trying hard to remember to press the clicker every time they did think about it. My bet is that the poor man who clicked 388 times was as much a victim of the experimental design as he was of his impulses.

Always on my mind

Another approach, used by Wilhelm Hoffman and colleagues, involved issuing German adult volunteers with smartphones, which were set to notify them seven times a day at random intervals for a week. They were asked to record what featured in their most recent thoughts when they received the random alert, the idea being that putting the responsibility for remembering onto a device left participants’ minds more free to wander.

The results aren’t directly comparable to the Fisher study, as the most anyone could record thinking about sex was seven times a day. But what is clear is that people thought about it far less often than the seven-second myth suggests. They recorded a sexual thought in the last half hour on approximately 4% of occasions, which works out as about once per day, compared with the 19 per day reported in the Fisher study.
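The “about once per day” figure can be reconstructed from that 4% sampling rate. A rough sketch, again assuming (my assumptions) a 14-hour waking day and non-overlapping half-hour windows:

```python
# Converting Hoffman's ~4% of half-hour windows into thoughts per day.
# Assumptions (mine): 14 waking hours, non-overlapping half-hour windows.
WAKING_HALF_HOURS = 14 * 2   # 28 half-hour windows per waking day
REPORT_RATE = 0.04           # sexual thought reported on ~4% of alerts

thoughts_per_day = WAKING_HALF_HOURS * REPORT_RATE
print(round(thoughts_per_day, 1))  # ~1.1 – "about once per day"
```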

The real shock from Hoffman’s study is the relative unimportance of sex in the participants’ thoughts. People said they thought more about food, sleep, personal hygiene, social contact, time off, and (until about 5pm) coffee. Watching TV, checking email and other forms of media use also won out over sex for the entire day. In fact, sex only became a predominant thought towards the end of the day (around midnight), and even then it was firmly in second place, behind sleep.

Hoffman’s method is also contaminated by a white bear effect, though, because participants knew at some point during the day they’d be asked to record what they had been thinking about. This could lead to overestimating some thoughts. Alternatively, people may have felt embarrassed about admitting to having sexual thoughts throughout the day, and therefore underreported them.

So, although we can confidently dismiss the story that the average male thinks about sex every seven seconds, we can’t know with much certainty what the true frequency actually is. Probably it varies wildly between people, and within the same person depending on their circumstances, and this is further confounded by the fact that any effort to measure the number of someone’s thoughts risks changing those thoughts.

There’s also the tricky issue that thoughts have no natural unit of measurement. Thoughts aren’t like distances we can measure in centimetres, metres and kilometres. So what constitutes a thought, anyway? How big does it need to be to count? Have you had none, one or many while reading this? Plenty of things to think about!

This is a BBC Future column from last week. The original is here.

The best way to win an argument

How do you change someone’s mind if you think you are right and they are wrong? Psychology reveals the last thing to do is the tactic we usually resort to.

You are, I’m afraid to say, mistaken. The position you are taking makes no logical sense. Just listen up and I’ll be more than happy to elaborate on the many, many reasons why I’m right and you are wrong. Are you feeling ready to be convinced?

Whether the subject is climate change, the Middle East or forthcoming holiday plans, this is the approach many of us adopt when we try to convince others to change their minds. It’s also an approach that, more often than not, leads to the person on the receiving end hardening their existing position. Fortunately research suggests there is a better way – one that involves more listening, and less trying to bludgeon your opponent into submission.

A little over a decade ago Leonid Rozenblit and Frank Keil from Yale University suggested that in many instances people believe they understand how something works when in fact their understanding is superficial at best. They called this phenomenon “the illusion of explanatory depth“. They began by asking their study participants to rate how well they understood how things like flushing toilets, car speedometers and sewing machines worked, before asking them to explain what they understood and then answer questions on it. The effect they revealed was that, on average, people in the experiment rated their understanding as much worse after it had been put to the test.

What happens, argued the researchers, is that we mistake our familiarity with these things for the belief that we have a detailed understanding of how they work. Usually, nobody tests us and if we have any questions about them we can just take a look. Psychologists call this idea that humans have a tendency to take mental short cuts when making decisions or assessments the “cognitive miser” theory.

Why would we bother expending the effort to really understand things when we can get by without doing so? The interesting thing is that we manage to hide from ourselves exactly how shallow our understanding is.

It’s a phenomenon that will be familiar to anyone who has ever had to teach something. Usually, it only takes the first moments when you start to rehearse what you’ll say to explain a topic, or worse, the first student question, for you to realise that you don’t truly understand it. All over the world, teachers say to each other “I didn’t really understand this until I had to teach it”. Or as researcher and inventor Mark Changizi quipped: “I find that no matter how badly I teach I still learn something”.

Explain yourself

Research published last year on this illusion of understanding shows how the effect might be used to convince others they are wrong. The research team, led by Philip Fernbach, of the University of Colorado, reasoned that the phenomenon might hold as much for political understanding as for things like how toilets work. Perhaps, they figured, people who have strong political opinions would be more open to other viewpoints, if asked to explain exactly how they thought the policy they were advocating would bring about the effects they claimed it would.

Recruiting a sample of Americans via the internet, they polled participants on a set of contentious US policy issues, such as imposing sanctions on Iran, healthcare and approaches to carbon emissions. One group was asked to give their opinion and then provide reasons for why they held that view. This group got the opportunity to put their side of the issue, in the same way anyone in an argument or debate has a chance to argue their case.

Those in the second group did something subtly different. Rather than provide reasons, they were asked to explain how the policy they were advocating would work. They were asked to trace, step by step, from start to finish, the causal path from the policy to the effects it was supposed to have.

The results were clear. People who provided reasons remained as convinced of their positions as they had been before the experiment. Those who were asked to provide explanations softened their views, and reported a correspondingly larger drop in how they rated their understanding of the issues. People who had previously been strongly for or against carbon emissions trading, for example, tended to become more moderate – ranking themselves as less certain in their support or opposition to the policy.

So this is something worth bearing in mind next time you’re trying to convince a friend that we should build more nuclear power stations, that the collapse of capitalism is inevitable, or that dinosaurs co-existed with humans 10,000 years ago. Just remember, however, there’s a chance you might need to be able to explain precisely why you think you are correct. Otherwise you might end up being the one who changes their mind.

This is my BBC Future column from last week. The original is here.

Using rational argument to change minds

I have a longer piece in the latest issue of Contributoria: What’s the evidence on using rational argument to change people’s minds? Here are a few snips from the opening:

Are we, the human species, unreasonable? Do rational arguments have any power to sway us, or is it all intuition, hidden motivations, and various other forms of prejudice?

…the picture of human rationality painted by our profession can seem pretty bleak. Every week I hear about a new piece of research which shows up some quirk of our minds, like the one showing that people given a heavy clipboard judge public issues as more important than people given a light clipboard. Or that more attractive people are judged as more trustworthy, and the arguments they give as more intelligent.

…I set out to get to the bottom of the evidence on how we respond to rational arguments. Does rationality lose out every time to irrational motivations? Or is there any hope for those of us who want to persuade because we have good arguments, not because we are handsome, or popular, or offer heavy clipboards?

You can read the full thing here, and while you’re over there check out the rest of the Contributoria site – all of the articles there are published under a CC license and commissioned by members. On which note, a massive thanks to everyone who backed my proposal and offered comments (see previous announcements). Special thanks to Josie and Dan for giving close readings to the piece before it was finished.

Edit: Contributoria didn’t last long, but I republished this essay and some others in an ebook “For argument’s sake: evidence that reason can change minds” (amazon, smashwords)

 

Research Digest posts, #1: A self-fulfilling fallacy?

This week I will be blogging over at the BPS Research Digest. The Digest was written for over ten years by psychology-writer extraordinaire Christian Jarrett, and I’m one of a series of guest editors during the transition period to a new permanent editor.

My first piece is now up, and here is the opening:

Lady Luck is fickle, but many of us believe we can read her mood. A new study of one year’s worth of bets made via an online betting site shows that gamblers’ attempts to predict when their luck will turn have some unexpected consequences.

Read the rest over at the Digest; I’ll post about the other stories I’ve written as they go up.

What’s the evidence for the power of reason to change minds?

Last month I proposed an article for Contributoria, titled What’s the evidence on using rational argument to change people’s minds?. Unfortunately, I had such fun reading about the topic that I missed the end-of-month deadline and now need to get backers for my proposal again.

So, here’s something from my proposal, please consider backing it so I can put my research to good use:

Is it true that “you can’t tell anybody anything”? From pub arguments to ideology-driven party political disputes it can sometimes seem like people have their minds all made up, that there’s no point trying to persuade anybody of anything. Popular psychology books reinforce the idea that we’re emotional, irrational creatures (Dan Ariely “Predictably irrational”, David McRaney “You Are Not So Smart”). This piece will be 3000 words on the evidence from psychological science about persuasion by rational argument.

All you need to do to back proposals, currently, is sign up for the site. You can see all current proposals here. Written articles are Creative Commons licensed.

Back the proposal: What’s the evidence on using rational argument to change people’s minds?

Full disclosure: I’ll be paid by Contributoria if the proposal is backed

Update: Backed! Thanks all! Watch this space for the finished article. I promise I’ll make the deadline this time

What’s the evidence on using rational argument to change people’s minds?

Contributoria is an experiment in community funded, collaborative journalism. What that means is that you can propose an article you’d like to write, and back proposals by others that you’d like to see written. There’s an article I’d like to write: What’s the evidence on using rational argument to change people’s minds?. Here’s something from the proposal:

Is it true that “you can’t tell anybody anything”? From pub arguments to ideology-driven party political disputes it can sometimes seem like people have their minds all made up, that there’s no point trying to persuade anybody of anything. Popular psychology books reinforce the idea that we’re emotional, irrational creatures (Dan Ariely “Predictably irrational”, David McRaney “You Are Not So Smart”). This piece will be 2000 words on the evidence from psychological science about persuasion by rational argument.

If the proposal is backed it will give me a chance to look at the evidence on things like the , on whether political extremism is supported by an illusion of explanatory depth (and how that can be corrected), and on how we treat all those social psychology priming experiments which suggest that our opinions on things can be pushed about by irrelevant factors such as the weight of a clipboard we’re holding.

All you need to do to back proposals, currently, is sign up for the site. You can see all current proposals here. Written articles are Creative Commons licensed.

Back the proposal: What’s the evidence on using rational argument to change people’s minds?

Full disclosure: I’ll be paid by Contributoria if the proposal is backed

Update: Backed! That was quick! Much thanks mindhacks.com readers! I’d better get reading and writing now…

Why Christmas rituals make tasty food

All of us carry out rituals in our daily lives, whether it is shaking hands or clinking glasses before we drink. At this time of year, the performance of customs and traditions is widespread – from sharing crackers, to pulling the wishbone on the turkey and lighting the Christmas pudding.

These rituals might seem like light-hearted traditions, but I’m going to try and persuade you that they are echoes of our evolutionary history, something which can tell us about how humans came to relate to each other before we had language. And the story starts by exploring how rituals can make our food much tastier.

In recent years, studies have suggested that performing small rituals can influence people’s enjoyment of what they eat. In one experiment, Kathleen Vohs from the University of Minnesota and colleagues explored how ritual affected people’s experience of eating a chocolate bar. Half of the people in the study were instructed to relax for a moment and then eat the chocolate bar as they normally would. The other half were given a simple ritual to perform, which involved breaking the chocolate bar in half while it was still inside its wrapper, and then unwrapping each half and eating it in turn.

Something about carefully following these instructions before eating the chocolate bar had a dramatic effect. People who had focused on the ritual said they enjoyed eating the chocolate more, rating the experience 15% higher than the control group. They also spent longer eating the chocolate, savouring the flavour for 50% longer than the control group. Perhaps most persuasively, they also said they would pay almost twice as much for such a chocolate.

This experiment shows that a small act can significantly increase the value we get from a simple food experience. Vohs and colleagues went on to test the next obvious question – how exactly do rituals work this magic? Repeating the experiment, they asked participants to describe and rate the act of eating the chocolate bar. Was it fun? Boring? Interesting? This seemed to be a critical variable – those participants who were made to perform the ritual rated the experience as more fun, less boring and more interesting. Statistical analysis showed that this was the reason they enjoyed the chocolate more, and were more willing to pay extra.
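The “statistical analysis” described here is what psychologists call a mediation analysis: testing whether the ritual’s effect on enjoyment flows through rated interest. The sketch below is a toy simulation with invented effect sizes, not Vohs’s actual data or model; it just illustrates the logic of the test – the direct effect of ritual shrinks towards zero once interest is controlled for:

```python
import numpy as np

# Toy mediation analysis (ritual -> interest -> enjoyment).
# Effect sizes are invented for illustration; this is NOT Vohs's model.
rng = np.random.default_rng(0)
n = 1000
ritual = rng.integers(0, 2, n).astype(float)      # 0 = control, 1 = ritual group
interest = 1.0 * ritual + rng.normal(0, 1, n)     # ritual makes eating more "interesting"
enjoyment = 0.8 * interest + rng.normal(0, 1, n)  # enjoyment driven by interest alone

def ols_slopes(predictors, y):
    """Least-squares coefficients of y on the predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

total = ols_slopes([ritual], enjoyment)[0]             # total effect of ritual
direct = ols_slopes([ritual, interest], enjoyment)[0]  # effect controlling for interest

print(f"total effect {total:.2f}, direct effect {direct:.2f}")
# The direct effect collapses toward zero: interest mediates the effect.
```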

So, rituals appear to make people pay attention to what they are doing, allowing them to concentrate their minds on the positives of a simple pleasure. But could there be more to rituals? Given that they appear in many realms of life that have nothing to do with food – from religious services to presidential inaugurations – could their performance have deeper roots in our evolutionary history? Attempting to answer the question takes us beyond the research I’ve been discussing so far and into the complex and controversial debate about the evolution of human nature.

In his book, The Symbolic Species, Terrance Deacon claims that ritual played a special role in human evolution, in particular, at the transition point where we began to acquire the building blocks of language. Deacon’s argument is that the very first “symbols” we used to communicate, the things that became the roots of human language, can’t have been anything like the words we use so easily and thoughtlessly today. He argues that these first symbols would have been made up of extended, effortful and complex sequences of behaviours performed in a group – in other words, rituals. These symbols were needed because of the way early humans arranged their family groups and, in particular, shared the products of hunting. Early humans needed a way to tell each other who had what responsibilities and which privileges; who was part of the family, and who could share the food, for instance. These ideas are particularly hard to refer to by pointing. Rituals, says Deacon, were the evolutionary answer to the conundrum of connecting human groups and checking they had a shared understanding of how the group worked.

If you buy this evolutionary story – and plenty don’t – it gives you a way to understand why exactly our minds might have a weakness for ritual. A small ritual makes food more enjoyable, but why does it have that effect? Deacon’s answer is that our love of rituals evolved with our need to share food. Early humans who enjoyed rituals had more offspring. I speculate that an easy shortcut for evolution to find, to make us enjoy rituals, is to wire our minds so that performing a ritual makes the food itself more enjoyable.

So, for those sitting down with family this holiday, don’t skip the traditional rituals – sing the songs, pull the crackers, clink the glasses and listen to Uncle Vinnie repeat his funny anecdotes for the hundredth time. The rituals will help you enjoy the food more, and carry with them an echo of our long history as a species, and all the feasts the tribe shared before there even was Christmas.

This is my latest column for BBC Future. You can see the original here. Merry Christmas y’all!