Radical embodied cognition: an interview with Andrew Wilson

The computational approach is the orthodoxy in psychological science. We try and understand the mind using the metaphors of information processing and the storage and retrieval of representations. These ideas are so common that it is easy to forget that there is any alternative. Andrew Wilson is on a mission to remind us that there is an alternative – a radical, non-representational, non-information processing take on what cognition is.

I sent him a few questions by email. After he answered these, and some follow-up questions, we both edited and agreed on the result, which you can read below.

 

Q1. Is it fair to say you are at odds with lots of psychology, theoretically? Can you outline why?

Psychology wants to understand what causes our behaviour. Cognitive psychology explanations are that behaviour is caused by internal states of the mind (or brain, if you like). These states are called mental representations, and they are models/simulations of the world that we use to figure out what to do and when to do it.

Cognitive psychology thinks we have representations because it assumes we have very poor sensory access to the world, e.g. vision supposedly begins with a patchy 2D image projected onto the back of the eye. We need these models to literally fill in the gaps by making an educated guess (‘inference’) about what caused those visual sensations.

My approach is called radical embodied cognitive psychology; ‘radical’ just means ‘no representations’. It is based on the work of James J Gibson. He was a perceptual psychologist who demonstrated that there is actually rich perceptual information about the world, and that we use this information. This is why perception and action are so amazingly successful most of the time, which is important because failures of perception have serious consequences for your health and wellbeing (e.g. falling on ice).

The most important consequence of this discovery is that when we have access to this information, we don’t need those internal models anymore. This then means that whatever the brain is doing, it’s not building models of the world in order to cause our behaviour. We are embedded in our environments and our behaviour is caused by the nature of that embedding (specifically, which information variables we are using for any given task).

So I ask very different questions than the typical psychologist: instead of ‘what mental model lets me solve this task?’ I ask ‘what information is there to support the observed behaviour and can I find evidence that we use it?’. When we get the right answer to the information question, we have great success in explaining and then predicting behaviour, which is actually the goal of psychology.

 

Q2. The idea that there are no mental representations is hard to get your head around. What about situations where behaviour seems to be based on things which aren’t there, like imagination, illusions or predictions?

First, saying that there are no mental representations is not saying that the brain is not up to something. This is a surprisingly common mistake, but I think it’s due to the fact that cognitive psychologists have come to equate ‘brain activity’ with ‘representing’, so denying the latter seems to them to mean denying the former (see Is Embodied Cognition a No-Brainer?).

Illusions simply reveal how important it is to perception that we can move and explore. They are all based on a trick and they almost always require an Evil Psychologist™ lurking in the background. Specifically, illusions artificially restrict access to information so that the world looks like it’s doing one thing when it is really doing another. They only work if you don’t let people do anything to reveal the trick. Most visual illusions are revealed as such by exploring them, e.g. by looking at them from a different perspective (e.g. the Ames Room).

Imagination and prediction are harder to talk about in this framework, but only because no one’s really tried. For what it’s worth, people are terrible at actively predicting things, and whatever imagination is it will be a side-effect of our ability to engage with the real world, not part of how we engage with the real world.

 

Q3. Is this radical approach really denying the reality of cognitive representations, or just using a different descriptive language in which they don’t figure? In other words, can you and the cognitivists both be right?

If the radical hypothesis is right, then a lot of cognitive theories will be wrong. Those theories all assume that information comes into the brain, is processed by representations and then output as behaviour. If we successfully replace representations with information, all those theories will be telling the wrong story. ‘Interacting with information’ is a completely different job description for the brain than ‘building models of the world’. This is another reason why it’s ‘radical’.

 

Q4. Even if I concede that you can think of the mind like this, can you convince me that I should? Why is it useful? What does this approach do for cognitive science that the conventional approach doesn’t or can’t?

There are two reasons, I think. The first is empirical; this approach works very, very well. Whenever a researcher works through a problem using this approach, they find robust answers that stand up to extended scrutiny in the lab. These solutions then make novel predictions that also perform well  – examples are topics like the outfielder problem and the A-not-B error [see below for references]. Cognitive psychology is filled with small, difficult to replicate effects; this is actually a hint that we aren’t asking the right questions. Radical embodied cognitive science tends to produce large, robust and interpretable effects which I take as a hint that our questions are closer to the mark.

The second is theoretical. The major problem with representations is that it’s not clear where they get their content from. Representations supposedly encode knowledge about the world that we use to make inferences to support perception, etc. But if we have such poor perceptual contact with the world that we need representations, how did we ever get access to the knowledge we needed to encode? This grounding problem is a disaster. Radical embodiment solves it by never creating it in the first place – we are in excellent perceptual contact with our environments, so there are no gaps for representations to fill, therefore no representations that need content.

 

Q5. Who should we be reading to get an idea of this approach?

‘Beyond the Brain’ by Louise Barrett. It’s accessible and full of great stuff.

‘Radical Embodied Cognitive Science’ by Tony Chemero. It’s clear and well written but it’s pitched at trained scientists more than the generally interested lay person.

‘Embodied Cognition’ by Lawrence Shapiro clearly lays out all the various flavours of ‘embodied cognition’. My work is the ‘replacement’ hypothesis.

‘The Ecological Approach to Visual Perception’ by James J Gibson is an absolute masterpiece and the culmination of all his empirical and theoretical work.

I run a blog at http://psychsciencenotes.blogspot.co.uk/ with Sabrina Golonka where we discuss all this a lot, and we tweet @PsychScientists. We’ve also published a few papers on this, the most relevant of which is ‘Embodied Cognition is Not What You Think It Is’.

 

Q6. And finally, can you point us to a few blog posts you’re proudest of which illustrate this way of looking at the world?

What Else Could It Be? (where Sabrina looks at the question, what if the brain is not a computer?)

Mirror neurons, or, What’s the matter with neuroscience? (how the traditional model can get you into trouble)

Prospective Control – The Outfielder problem (an example of the kind of research questions we ask)

Downsides of being a convincing liar

People who take shortcuts can trick themselves into believing they are smarter than they are, says Tom Stafford, and it comes back to bite them.

Honesty may be the best policy, but lying has its merits – even when we are deceiving ourselves. Numerous studies have shown that those who are practised in the art of self-deception might be more successful in the spheres of sport and business. They might even be happier than people who are always true to themselves. But is there ever a downside to believing our own lies?

An ingenious study by Zoe Chance of Yale University tested the idea by watching what happens when people cheat on tests.

Chance and colleagues ran experiments which involved asking students to answer IQ and general knowledge questions. Half the participants were given a copy of the test paper which had – apparently in error – been printed with the answers listed at the bottom. This meant they had to resist the temptation to check or improve their answers against the real answers as they went along.

Irresistible shortcut

As you’d expect, some of these participants couldn’t help but cheat. Collectively, the group that had access to the answers performed better on the tests than participants who didn’t – even though both groups of participants were selected at random from students at the same university, so were, on average, of similar ability. (We can’t know for sure who was cheating – probably some of the people who had answers would have got high scores even without them – but the average performance in the group was partly down to individual smarts, and partly down to having the answers at hand.)

The crucial question for Chance’s research was this: did people in the “cheater” group know that they’d been relying on the answers? Or did they attribute their success in the tests solely to their own intelligence?

The way the researchers tested this was to ask the students to predict how well they’d do on a follow-up test. They were allowed to quickly glance over the second test sheet so that they could see that it involved the same kind of questions – and, importantly, that no answers had been mistakenly printed at the bottom this time. The researchers reasoned that if the students who had cheated realised that cheating wasn’t an option the second time around, they should predict they wouldn’t do as well on this second test.

Not so. Self-deception won the day. The people who’d had access to the answers predicted, on average, that they’d get higher scores on the follow-up – equivalent to giving them something like a 10-point IQ boost. When tested, of course, they scored far lower.

The researchers ran another experiment to check that the effect was really due to the cheaters’ inflated belief in their own abilities. In this experiment, students were offered a cash reward for accurately predicting their scores on the second test. Sure enough, those who had been given the opportunity to cheat overestimated their ability and lost out – earning 20% less than the other students.

The implication is that people in Chance’s experiment – people very much like you and me – had tricked themselves into believing they were smarter than they were. There may be benefits from doing this – confidence, satisfaction, or more easily gaining the trust of others – but there are also certainly disadvantages. Whenever circumstances change and you need to accurately predict how well you’ll do, it can cost to believe you’re better than you are.

That self-deception has its costs has some interesting implications. Morally, most of us would say that self-deception is wrong. But aside from whether self-deception is undesirable, we should expect it to be present in all of us to some degree (because of the benefits), but to be limited as well (because of the costs).

Self-deception isn’t something that is always better in larger doses – there must be an amount of it for which the benefits outweigh the costs, most of the time. We’re probably all self-deceiving to some degree. The irony is that, because it is self-deception, we can’t know how often.

This is my BBC Future article from last week. The original is here

The scientist as problem solver

Start the week with one of the founding fathers of cognitive science: in ‘The scientist as problem solver‘, Herb Simon (1916-2001) gives a short retrospective of his scientific career.

To tell the story of the research he has done, he advances a thesis: “The Scientist is a problem solver. If the thesis is true, then we can dispense with a theory of scientific discovery – the processes of discovery are just applications of the processes of problem solving.” Quite aside from the usefulness of this perspective, the paper is a reminder of the intoxicating possibility of integration across the physical, biological and social sciences: Simon worked on economics, management theory, complex systems and artificial intelligence as well as what we’d now call cognitive psychology.

He uses his own work on designing problem solving algorithms to reflect on how he – and other scientists – can and should make scientific progress. Towards the end he expresses what would be regarded as heresy in many experimentally orientated psychology departments. He suggests that many of his most productive investigations lacked a contrast between experimental and control conditions. Did this mean they were worthless, he asks. No:

…You can test theoretical models without contrasting an experimental with a control condition. And apart from testing models, you can often make surprising observations that give you ideas for new or improved models…

Perhaps it is not our methodology that needs revising so much as the standard textbook methodology, which perversely warns us against running an experiment until precise hypotheses have been formulated and experimental and control conditions defined. How do such experiments ever create surprise – not just the all-too-common surprise of having our hypotheses refuted by facts, but the delight-provoking surprise of encountering a wholly unexpected phenomenon? Perhaps we need to add to the textbooks a chapter, or several chapters, describing how basic scientific discoveries can be made by observing the world intently, in the laboratory or outside it, with controls or without them, heavy with hypotheses or innocent of them.

REFERENCE
Simon, H. A. (1989). The scientist as problem solver. Complex information processing: The impact of Herbert A. Simon, 375-398.

The smart unconscious

We feel that we are in control when our brains figure out puzzles or read words, says Tom Stafford, but a new experiment shows just how much work is going on underneath the surface of our conscious minds.

It is a common misconception that we know our own minds. As I move around the world, walking and talking, I experience myself thinking thoughts. “What shall I have for lunch?”, I ask myself. Or I think, “I wonder why she did that?” and try and figure it out. It is natural to assume that this experience of myself is a complete report of my mind. It is natural, but wrong.

There’s an under-mind, all psychologists agree – an unconscious which does a lot of the heavy lifting in the process of thinking. If I ask myself what is the capital of France the answer just comes to mind – Paris! If I decide to wiggle my fingers, they move back and forth in a complex pattern that I didn’t consciously prepare, but which was delivered for my use by the unconscious.

The big debate in psychology is exactly what is done by the unconscious, and what requires conscious thought. Or to use the title of a notable paper on the topic, ‘Is the unconscious smart or dumb?‘ One popular view is that the unconscious can prepare simple stimulus-response actions, deliver basic facts, recognise objects and carry out practised movements. Complex cognition involving planning, logical reasoning and combining ideas, on the other hand, requires conscious thought.

A recent experiment by a team from Israel scores points against this position. Ran Hassin and colleagues used a neat visual trick called Continuous Flash Suppression to put information into participants’ minds without them becoming consciously aware of it. It might sound painful, but in reality it’s actually quite simple. The technique takes advantage of the fact that we have two eyes and our brain usually attempts to fuse the two resulting images into a single coherent view of the world. Continuous Flash Suppression uses light-bending glasses to show people different images in each eye. One eye gets a rapid succession of brightly coloured squares which are so distracting that when genuine information is presented to the other eye, the person is not immediately consciously aware of it. In fact, it can take several seconds for something that is in theory perfectly visible to reach awareness (unless you close one eye to cut out the flashing squares, then you can see the ‘suppressed’ image immediately).

Hassin’s key experiment involved presenting arithmetic questions unconsciously. The questions would be things like “9 – 3 – 4 = ” and they would be followed by the presentation, fully visible, of a target number that the participants were asked to read aloud as quickly as possible. The target number could either be the right answer to the arithmetic question (so, in this case, “2”) or a wrong answer (for instance, “1”). The amazing result is that participants were significantly quicker to read the target number if it was the right answer rather than a wrong one. This shows that the equation had been processed and solved by their minds – even though they had no conscious awareness of it – meaning they were primed to read the right answer quicker than the wrong one.

The result suggests that the unconscious mind has more sophisticated capacities than many have thought. Unlike other tests of non-conscious processing, this wasn’t an automatic response to a stimulus – it required a precise answer following the rules of arithmetic, which you might have assumed would only come with deliberation. The report calls the technique used “a game changer in the study of the unconscious”, arguing that “unconscious processes can perform every fundamental, basic-level function that conscious processes can perform”.

These are strong claims, and the authors acknowledge that there is much work to do as we start to explore the power and reach of our unconscious minds. Like icebergs, most of the operation of our minds remains out of sight. Experiments like this give a glimpse below the surface.

This is my BBC Future column from last week. The original is here

Anti-vax: wrong but not irrational


Since the uptick in outbreaks of measles in the US, those arguing for the right not to vaccinate their children have come under increasing scrutiny. There is no journal of “anti-vax psychology” reporting research on those who advocate what seems like a controversial, “anti-science” and dangerous position, but if there were we could take a good guess at what the research reported therein would say.

Look at other groups who hold beliefs at odds with conventional scientific thought. Climate sceptics, for example. You might think that climate sceptics would be more ignorant of science than those who accept the consensus that humans are causing a global increase in temperatures. But you’d be wrong. The individuals with the highest degree of scientific literacy are not those most concerned about climate change; they are the group which is most divided over the issue. The most scientifically literate are also some of the strongest climate sceptics.

A driver of this is a process psychologists have called “biased assimilation” – we all regard new information in the light of what we already believe. In line with this, one study showed that climate sceptics rated newspaper editorials supporting the reality of climate change as less persuasive and less reliable than non-sceptics. Some studies have even shown that people can react to information which is meant to persuade them out of their beliefs by becoming more hardline – the exact opposite of the persuasive intent.

For topics such as climate change or vaccine safety, this can mean that a little scientific education gives you more ways of disagreeing with new information that doesn’t fit your existing beliefs. So we shouldn’t expect anti-vaxxers to be easily converted by throwing scientific facts about vaccination at them. They are likely to have their own interpretation of the facts.

High trust, low expertise

Some of my own research has looked at who the public trusted to inform them about the risks from pollution. Our finding was that how expert a particular group of people was perceived to be – government, scientists or journalists, say – was a poor predictor of how much they were trusted on the issue. Instead, what was critical was how much they were perceived to have the public’s interests at heart. Groups of people who were perceived to want to act in line with our respondents’ best interests – such as friends and family – were highly trusted, even if their expertise on the issue of pollution was judged as poor.

By implication, we might expect anti-vaxxers to have friends who are also anti-vaxxers (and so reinforce their mistaken beliefs) and to correspondingly have a low belief that pro-vaccine messengers such as scientists, government agencies and journalists have their best interests at heart. The corollary is that no amount of information from these sources – and no matter how persuasive to you and me – will convert anti-vaxxers who have different beliefs about how trustworthy the medical establishment is.

Interestingly, research done by Brendan Nyhan has shown many anti-vaxxers are willing to drop mistaken beliefs about vaccines, but as they do so they also harden in their intentions not to get their kids vaccinated. This shows that the scientific beliefs of people who oppose vaccinations are only part of the issue – facts alone, even if believed, aren’t enough to change people’s views.

Reinforced memories

We know from research on persuasion that mistaken beliefs aren’t easily debunked. Not only is the biased assimilation effect at work here but also the fragility of memory – attempts at debunking myths can serve to reinforce the memory of the myth while the debunking gets forgotten.

The vaccination issue provides a sobering example of this. A single discredited study from 1998 claimed a link between autism and the MMR jab, fuelling the recent distrust of vaccines. No matter how many times we repeat that “the MMR vaccine doesn’t cause autism”, the link between the two is reinforced in people’s perceptions. To avoid reinforcing a myth, you need to provide a plausible alternative – the obvious one here is to replace the negative message “MMR vaccine doesn’t cause autism”, with a positive one. Perhaps “the MMR vaccine protects your child from dangerous diseases”.

Rational selfishness

There are other psychological factors at play in the decisions taken by individual parents not to vaccinate their children. One is the rational selfishness of avoiding risk, or even the discomfort of a momentary jab, by gambling that the herd immunity of everyone else will be enough to protect your child.

Another is our tendency to underplay rare events in our calculation about risks – ironically the very success of vaccination programmes makes the diseases they protect us against rare, meaning that most of us don’t have direct experience of the negative consequences of not vaccinating. Finally, we know that people feel differently about errors of action compared to errors of inaction, even if the consequences are the same.

Many who seek to persuade anti-vaxxers view the issue as a simple one of scientific education. Anti-vaxxers have mistaken the basic facts, the argument goes, so they need to be corrected. This is likely to be ineffective. Anti-vaxxers may be wrong, but don’t call them irrational.

Rather than lacking scientific facts, they lack a trust in the establishments which produce and disseminate science. If you meet an anti-vaxxer, you might have more luck persuading them by trying to explain how you think science works and why you’ve put your trust in what you’ve been told, rather than dismissing their beliefs as irrational.


This article was originally published on The Conversation.
Read the original article.

You can’t play 20 questions with nature and win

“You can’t play 20 questions with nature and win” is the title of Allen Newell‘s 1973 paper, a classic in cognitive science. In the paper he confesses that although he sees many excellent psychology experiments, all making undeniable scientific contributions, he can’t imagine them cohering into progress for the field as a whole. He describes the state of psychology as focussed on individual phenomena – mental rotation, chunking in memory, subitizing, etc – studied in a way to resolve binary questions – issues such as nature vs nurture, conscious vs unconscious, serial vs parallel processing.

There is, I submit, a view of the scientific endeavor that is implicit (and sometimes explicit) in the picture I have presented above. Science advances by playing twenty questions with nature. The proper tactic is to frame a general question, hopefully binary, that can be attacked experimentally. Having settled that bits-worth, one can proceed to the next. The policy appears optimal – one never risks much, there is feedback from nature at every step, and progress is inevitable. Unfortunately, the questions never seem to be really answered, the strategy does not seem to work.

As I considered the issues raised (single code versus multiple code, continuous versus discrete representation, etc.) I found myself conjuring up this model of the current scientific process in psychology- of phenomena to be explored and their explanation by essentially oppositional concepts. And I couldn’t convince myself that it would add up, even in thirty more years of trying, even if one had another 300 papers of similar, excellent ilk.

His diagnosis of one reason that phenomena can generate endless excellent papers without endless progress is that people can do the same task in different ways. Lots of experiments dissect how people are doing the task without sufficiently constraining the things Newell says are essential to predict behaviour (the person’s goals and the structure of the task environment), and thus provide no insight into the ultimate target of investigation: the invariant structure of the mind’s processing mechanisms. As a minimum, we must know the method participants are using, never averaging over different methods, he concludes. But this may not be enough:

That the same human subject can adopt many (radically different) methods for the same basic task, depending on goal, background knowledge, and minor details of payoff structure and task texture — all this — implies that the “normal” means of science may not suffice.

As a prognosis for how to make real progress in understanding the mind he proposes three possible courses of action:

  1. Develop complete processing models – i.e. simulations which are competent to perform the task and include a specification of the way in which different subfunctions (called ‘methods’ by Newell) are deployed.
  2. Analyse a complex task, completely, ‘to force studies into intimate relation with each other’, the idea being that giving a full account of a single task, any task, will force contradictions between theories of different aspects of the task into the open.
  3. ‘One program for many tasks’ – construct a general purpose system which can perform all mental tasks, in other words an artificial intelligence.

It was this last strategy which preoccupied a lot of Newell’s subsequent attention. He developed a general problem solving architecture he called SOAR, which he presented as a unified theory of cognition, and which he worked on until his death in 1992.

The paper is over forty years old, but still full of useful thoughts for anyone interested in the sciences of the mind.

Reference and link:
Newell, A. (1973). You can’t play 20 questions with nature and win: Projective comments on the papers of this symposium. In W. G. Chase (Ed.), Visual Information Processing: Proceedings of the Eighth Annual Carnegie Symposium on Cognition, Carnegie-Mellon University, Pittsburgh, Pennsylvania, May 19, 1972. Academic Press.

See a nice picture of Newell from the Computer History Museum

What gambling monkeys teach us about human rationality

We often make stupid choices when gambling, says Tom Stafford, but if you look at how monkeys act in the same situation, maybe there’s good reason.

When we gamble, something odd and seemingly irrational happens.

It’s called the ‘hot hand’ fallacy – a belief that your luck comes in streaks – and it can lose you a lot of money. Win on roulette and your chances of winning again aren’t more or less – they stay exactly the same. But something in human psychology resists this fact, and people often place money on the premise that streaks of luck will continue – the so called ‘hot hand’.

The opposite superstition is to bet that a streak has to end, in the false belief that independent events of chance must somehow even out. This is known as the gambler’s fallacy, and achieved notoriety at the Casino de Monte-Carlo on 18 August 1913. The ball fell on black 26 times in a row, and as the streak lengthened gamblers lost millions betting on red, believing that the chances changed with the length of the run of blacks.

Why do people act this way time and time again? We can discover intriguing insights, it seems, by recruiting monkeys and getting them to gamble too. If these animals make dumb choices like us, perhaps it could tell us more about ourselves.

First though, let’s look at what makes some games particularly likely to trigger these effects. Many results in games are based on a skill element, so it makes reasonable sense to bet, for instance, that a top striker like Lionel Messi is more likely to score a goal than a low-scoring defender.

Yet plenty of games contain randomness. For truly random events like roulette or the lottery, there is no force which makes clumps more or less likely to continue. Consider coin tosses: if you have tossed 10 heads in a row your chance of throwing another heads is still 50:50 (although, of course, at the point before you’ve thrown any, the overall odds of throwing 10 in a row are still minuscule).
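If the distinction seems slippery, here is a minimal simulation sketch of it (my illustration, not from the article; the trial count and run length are arbitrary choices): a run of 10 heads is rare before you start, but once it has happened the next toss is still 50:50.

```python
import random

random.seed(1)  # make the illustration reproducible

def tosses_after_streaks(n_tosses=1_000_000, run_length=10):
    """Fair coin: look at the toss that immediately follows a run of heads."""
    runs_seen = 0     # how many times a run of `run_length` heads just ended
    heads_after = 0   # how often the very next toss was also heads
    streak = 0
    for _ in range(n_tosses):
        heads = random.random() < 0.5
        if streak == run_length:
            runs_seen += 1
            heads_after += heads
        streak = streak + 1 if heads else 0
    return runs_seen, heads_after

runs, heads_after = tosses_after_streaks()
print(f"chance of 10 heads in a row, up front: {0.5 ** 10:.5f}")          # ~0.001
print(f"chance of heads right after a 10-head run: {heads_after / runs:.3f}")  # ~0.5
```

With enough tosses the second number settles at 0.5 whatever seed you use – which is exactly the fact that both the hot hand and the gambler’s fallacy deny.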

The hot hand and gambler’s fallacies both show that we tend to have an unreasonable faith in the non-randomness of the universe, as if we can’t quite believe that those coins (or roulette wheels, or playing cards) really are due to the same chances on each flip, spin or deal.

It’s a result that sometimes makes us sneer at the irrationality of human psychology. But that conclusion may need revising.

Cross-species gambling

An experiment reported by Tommy Blanchard of the University of Rochester in New York State, and colleagues, shows that monkeys playing a gambling game are swayed by the same hot hand bias as humans. Their experiments involved three monkeys controlling a computer display with their eye-movements – indicating their choices by shifting their gaze left or right. In the experiment they were given two options, only one of which delivered a reward. When the correct option was random – the same 50:50 chance as a coin flip – the monkeys still had a tendency to select the previously winning option, as if luck should continue, clumping together in streaks.

The reason the result is so interesting is that monkeys aren’t taught probability theory at school. They never learn theories of randomness, or pick up complex ideas about chance events. The monkeys’ choices must be based on some more primitive instincts about how the world works – they can’t be displaying irrational beliefs about probability, because they cannot have false beliefs, in the way humans can, about how luck works. Yet they show the same bias.

What’s going on, the researchers argue, is that it’s usually beneficial to behave in this manner. In most of life, chains of success or failure are linked for good reason – some days you really do have your eye on your tennis serve, or everything goes wrong with your car on the same day because the mechanics of the parts are connected. In these cases, the events reflect an underlying reality, and one you can take advantage of to predict what happens next. An example that works well for the monkeys is food. Finding high-value morsels like ripe food is a chance event, but also one where each instance isn’t independent. If you find one fruit on a tree the chances are that you’ll find more.

The wider lesson for students of human nature is that we shouldn’t be quick to call behaviours irrational. Sure, belief in the hot hand might make you bet wrong on a series of coin flips, or worse, lose a pot of money. But it may be that across the timespan of evolution, thinking that luck comes in clumps turned out to be useful more often than it was harmful.

This is my BBC Future article from last week. The original is here

Is public opinion rational?

There is no shortage of misconceptions. The British public believes that for every £100 spent on benefits, £24 is claimed fraudulently (the actual figure is £0.70). We think that 31% of the population are immigrants (actually it’s 13%). One recent headline summed it up: “British public wrong about nearly everything” – and I’d bet good money that it isn’t just the British who are exceptionally misinformed.

This looks like a problem for democracy, which supposes a rational and informed public opinion. But perhaps it isn’t, at least according to a body of political science research neatly summarised by Will Jennings in his chapter of a new book “Sex, lies & the ballot box: 50 things you need to know about British elections“. The book is a collection of accessible essays by British political scientists, and has a far wider scope than the book subtitle implies: there are important morals here for anyone interested in collective human behaviour, not just those interested in elections.

Will’s chapter discusses the “public opinion as thermostat” theory. This, briefly, is that the public can be misinformed about absolute statistics, but we can still change our strength of feeling in an appropriate way. So, for example, we may be misled about the absolute unemployment rate, but can still discern whether unemployment is getting better or worse. There’s evidence to support this view, and the chapter includes this striking graph (reproduced with permission), showing the percentage of people saying “unemployment” is the most important issue facing the country against the actual unemployment rate. As you can see, public opinion tracks reality with remarkable accuracy:

Unemployment rate (source: ONS) and share of voters rating unemployment as the most important issue facing the country (source: Ipsos MORI), from Will Jennings’s chapter in “Sex, lies & the ballot box” (p.35)

The topic of how a biased and misinformed public can make rational collective decisions is a fascinating one, which has received attention from disciplines ranging from psychology to political science. I’m looking forward to reading the rest of the book to get more evidence based insights into how our psychological biases play out when decision making is at the collective level of elections.

Full disclosure: Will is a friend of mine and sent me a free copy of the book.

Link: “Sex, lies & the ballot box” (edited by Philip Cowley & Robert Ford).

Link: Guardian data blog Five things we can learn from Sex, Lies and the Ballot Box

Why you can live a normal life with half a brain

A few extreme cases show that people can be missing large chunks of their brains with no significant ill-effect – why? Tom Stafford explains what it tells us about the true nature of our grey matter.

How much of our brain do we actually need? A number of stories have appeared in the news in recent months about people with chunks of their brains missing or damaged. These cases tell a story about the mind that goes deeper than their initial shock factor. It isn’t just that we don’t understand how the brain works, but that we may be thinking about it in the entirely wrong way.

Earlier this year, a case was reported of a woman who is missing her cerebellum, a distinct structure found at the back of the brain. By some estimates the human cerebellum contains half the brain cells you have. This isn’t just brain damage – the whole structure is absent. Yet this woman lives a normal life; she graduated from school, got married and had a kid following an uneventful pregnancy and birth. A pretty standard biography for a 24-year-old.

The woman wasn’t completely unaffected – she had suffered from uncertain, clumsy, movements her whole life. But the surprise is how she moves at all, missing a part of the brain that is so fundamental it evolved with the first vertebrates. The sharks that swam when dinosaurs walked the Earth had cerebellums.

This case points to a sad fact about brain science. We don’t often shout about it, but there are large gaps in even our basic understanding of the brain. We can’t agree on the function of even some of the most important brain regions, such as the cerebellum. Rare cases such as this show up that ignorance. Every so often someone walks into a hospital and their brain scan reveals the startling differences we can have inside our heads. Startling differences which may have only small observable effects on our behaviour.

Part of the problem may be our way of thinking. It is natural to see the brain as a piece of naturally selected technology, and in human technology there is often a one-to-one mapping between structure and function. If I have a toaster, the heat is provided by the heating element, the time is controlled by the timer and the popping up is driven by a spring. The case of the missing cerebellum reveals there is no such simple scheme for the brain. Although we love to talk about the brain region for vision, for hunger or for love, there are no such brain regions, because the brain isn’t technology where any function is governed by just one part.

Take another recent case, that of a man who was found to have a tapeworm in his brain. Over four years it burrowed “from one side to the other“, causing a variety of problems such as seizures, memory problems and weird smell sensations. Sounds to me like he got off lightly for having a living thing move through his brain. If the brain worked like most designed technology this wouldn’t be possible. If a worm burrowed from one side of your phone to the other, the gadget would die. Indeed, when an early electromechanical computer malfunctioned in the 1940s, an investigation revealed the problem: a moth trapped in a relay – the first actual case of a computer bug being found.

Part of the explanation for the brain’s apparent resilience is its ‘plasticity’ – an ability to adapt its structure based on experience. But another clue comes from a concept advocated by Nobel Prize-winning neuroscientist Gerald Edelman. He noticed that biological functions are often supported by multiple structures – single physical features are coded for by multiple genes, for example, so that knocking out any single gene can’t prevent that feature from developing apparently normally. He called the ability of multiple different structures to support a single function ‘degeneracy’.

And so it is with the brain. The important functions our brain carries out are not farmed out to single distinct brain regions, but instead supported by multiple regions, often in similar but slightly different ways. If one structure breaks down, the others can pick up the slack.

This helps explain why cognitive neuroscientists have such problems working out what different brain regions do. If you try and understand brain areas using a simple one-function-per-region and one-region-per-function rule you’ll never be able to design the experiments needed to unpick the degenerate tangle of structure and function.

The cerebellum is most famous for controlling precise movements, but other areas of the brain such as the basal ganglia and the motor cortex are also intimately involved in moving our bodies. Asking what unique thing each area does may be the wrong question, when they are all contributing to the same thing. Memory is another example of an essential biological function which seems to be supported by multiple brain systems. If you bump into someone you’ve met once before, you might remember that they have a reputation for being nice, remember a specific incident of them being nice, or just retrieve a vague positive feeling about them – all forms of memory which tell you to trust this person, and all supported by different brain areas doing the same job in a slightly different way.

Edelman and his colleague, Joseph Gally, called degeneracy a “ubiquitous biological property … a feature of complexity”, claiming it was an inevitable outcome of natural selection. It explains both why unusual brain conditions are not as catastrophic as they might be, and also why scientists find the brain so confounding to try and understand.

My BBC Future column from before Christmas. The original is here. Thanks to everyone on twitter who chipped in on the plural of cerebellum

A simple trick to improve your memory

Want to enhance your memory for facts? Tom Stafford explains a counterintuitive method for retaining information.

If I asked you to sit down and remember a list of phone numbers or a series of facts, how would you go about it? There’s a fair chance that you’d be doing it wrong.

One of the interesting things about the mind is that even though we all have one, we don’t have perfect insight into how to get the best from it. This is in part because of flaws in our ability to think about our own thinking, which is called metacognition. Studying this self-reflective thought process reveals that the human species has mental blind spots.

One area where these blind spots are particularly large is learning. We’re actually surprisingly bad at having insight into how we learn best.

Researchers Jeffrey Karpicke and Henry Roediger III set out to look at one aspect: how testing can consolidate our memory of facts. In their experiment they asked college students to learn pairs of Swahili and English words. So, for example, they had to learn that if they were given the Swahili word ‘mashua’ the correct response was ‘boat’. They could have used the sort of facts you might get on a high-school quiz (e.g. “Who wrote the first computer programs?”/”Ada Lovelace”), but the use of Swahili meant that there was little chance their participants could use any background knowledge to help them learn. After the pairs had all been learnt, there would be a final test a week later.

Now if many of us were revising this list we might study the list, test ourselves and then repeat this cycle, dropping items we got right. This makes studying (and testing) quicker and allows us to focus our effort on the things we haven’t yet learnt. It’s a plan that seems to make perfect sense, but it’s a plan that is disastrous if we really want to learn properly.

Karpicke and Roediger asked students to prepare for a test in various ways, and compared their success – for example, one group kept testing themselves on all items without dropping what they were getting right, while another group stopped testing themselves on their correct answers.

On the final exam differences between the groups were dramatic. While dropping items from study didn’t have much of an effect, the people who dropped items from testing performed relatively poorly: they could only remember about 35% of the word pairs, compared to 80% for people who kept testing items after they had learnt them.

It seems the effective way to learn is to practice retrieving items from memory, not trying to cement them in there by further study. Moreover, dropping items entirely from your revision, which is the advice given by many study guides, is wrong. You can stop studying them if you’ve learnt them, but you should keep testing what you’ve learnt if you want to remember them at the time of the final exam.
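As a rough illustration of what that advice implies in practice, here is a minimal sketch of a revision schedule (the word pairs and the round in which each is ‘learnt’ are invented for the example): items leave the study pile once you know them, but they never leave the test pile.

```python
def revision_schedule(pairs, learnt_in_round, rounds=4):
    """pairs: prompts to learn (e.g. Swahili -> English word pairs).
    learnt_in_round: the round in which each pair is first recalled correctly
    (invented numbers here - in reality you find out by testing yourself)."""
    for r in range(1, rounds + 1):
        study = [p for p in pairs if learnt_in_round[p] >= r]  # drop once learnt
        test = list(pairs)                                     # never dropped
        print(f"round {r}: study {study}, test {test}")

pairs = ["mashua -> boat", "rafiki -> friend"]
revision_schedule(pairs, learnt_in_round={"mashua -> boat": 1, "rafiki -> friend": 2})
```

Karpicke and Roediger’s result is that keeping items in the test list is what does the work; dropping them from the study list costs you almost nothing.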

Finally, the researchers had the neat idea of asking their participants how well they would remember what they had learnt. All groups guessed at about 50%. This was a large overestimate for those who dropped items from test (and an underestimate from those who kept testing learnt items).

So it seems that we have a metacognitive blind spot about which revision strategies will work best, making this a situation where we need to be guided by the evidence, not by our instinct. But the evidence has a moral for teachers as well: there’s more to testing than finding out what students know – tests can also help us remember.

Read more: Why cramming for tests often fails

This is my BBC Future column from last week. The original is here

The wrong sort of discussion

The Times Higher Education has an article on post-publication peer review, and whether it will survive legal challenges:

The legal action launched by a US scientist who claims that anonymous comments questioning his science cost him a lucrative job offer has raised further questions about the potential for post-publication peer review to replace pre-publication review.

The article chimes with comments made by several prominent psychologists who have been at the centre of controversies and have decried the way their work has been discussed outside of the normal channels of the academic journals.

Earlier this year the head of a clinical trial of Tamiflu wrote to the British Medical Journal to protest that a BMJ journalist had solicited independent critique of the stats used in his work – “going beyond the reasonable response to a press release”.

John Bargh (Yale University), in his now infamous ‘nothing in their heads’ blogpost, accused the open access journal PLoS of lacking “the usual high scientific journal standards of peer-review scrutiny”, and accused Ed Yong – laughably – of “superficial online science journalism”. He concluded:

“I am not so much worried about the impact on science of essentially self-published failures to replicate as much as I’m worried about your ability to trust supposedly reputable online media sources for accurate information on psychological science.”

Simone Schnall (University of Cambridge) is a social psychologist whose work has also been at the centre of the discussion about replication (backstory, independent replication of her work recently reported). She has recently written that ‘no critical discussion is possible’ on social media, where ‘judgments are made quickly nowadays in social psychology and definitively’.

See also this comment from a scientist, made when a controversial paper – which suggested that many correlations in fMRI studies of social psychological constructs were impossibly high – was widely discussed before publication: “I was shocked, this is not the way that scientific discourse should take place.”

The common theme is a lack of faith in the uncontrolled scientific discussion that now happens in public, before and after publication in the journal-sanctioned official record. Coupled, perhaps, with a lack of faith in other people to understand – let alone run – psychological research. Scientific discussion has always been uncontrolled, of course; the differences now are in how open the discussion is, and who takes part. Pre social media, ‘insider’ discussions of specialist topics took place inside psychology departments, and at conference dinners and other social gatherings of researchers. My optimistic take is that social media allows access to people who would not normally have it due to constraints of geography, finance or privilege. Social media means that if you’re in the wrong institution, aren’t funded, or have someone to look after at home and so can’t fly to the conference, you can still experience and contribute to specialist discussions – that’s a massive and positive change, and one we should protect as we work out how scientific discussion should take place in the 21st century.

Link: Simone Schnall’s comments in full: blog, video

Previously: Stafford, T., & Bell, V. (2012). Brain network: social media and the cognitive scientist. Trends in Cognitive Sciences, 16(10), 489–490. doi:10.1016/j.tics.2012.08.001

Previously: What Jason Mitchell’s ‘On the emptiness of failed replications’ gets right, which includes some less optimistic notes on the current digital disruption of scholarly ways of working

Distraction effects

I’ve been puzzling over this tweet from Jeff Rouder:

[embedded tweet from Jeff Rouder]

Surely, I thought, psychology is built out of effects. What could be wrong with focussing on testing which ones are reliable?

But I think I’ve got it now. The thing about effects is that they show you – an experimental psychologist – can construct a situation where some factor you are interested in is important, relative to all the other factors (which you have managed to hold constant).

To see why this might be a problem, consider this paper by Tsay (2013): “Sight over sound in the judgment of music performance”. This was a study which asked people to select the winners of a classical music competition from 6 second clips of them performing. Some participants got the audio, so they could only hear the performance; others got the video, so they could only see the performance; and some got both audio and video. Only those participants who watched the video, without sound, could select the actual competition winners at above chance level. This demonstrates a significant bias effect of sight in judgements of music performance.

To understand the limited importance of this effect, contrast with the overclaims made by the paper: “people actually depend primarily on visual information when making judgments about music performance” (in the abstract) and “[Musicians] relegate the sound of music to the role of noise” (the concluding line). Contrary to these claims the study doesn’t show that looks dominate sound in how we assess music. It isn’t the case that our musical taste is mostly determined by how musicians look.

The Tsay studies took the 3 finalists from classical music competitions – the best of the best of expert musicians – and used brief clips of their performances as stimuli. By my reckoning, this scenario removes almost all differences in quality of the musical performance. Evidence in support of this is that Tsay didn’t find any difference in performance between non-expert participants and professional musicians, which strongly suggests that she designed a task in which it is impossible to bring any musical knowledge to bear – musical knowledge isn’t an important factor.

This is why it isn’t reasonable to conclude that people are making judgments about musical performance in general. The clips don’t let you judge relative musical quality, but – for these almost equally matched performances – they do let you reflect the same biases as the judges, biases which include an influence of appearance as well as sound. The bias matters, not least because it obviously affects who won, but proving it exists is completely separate from the matter of whether overall judgements of music are affected more by sight or sound.

Further, there’s every reason to think that the study of the bias effect gives the opposite conclusion to a study of overall importance. In these experiments sight dominates sound, because differences due to sound have been minimised. In most situations where we decide our music preferences, sound is obviously massively more important.
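To make the statistical point concrete, here is a minimal sketch (my illustration, not an analysis from Tsay’s paper; the weights and spreads are invented): even if judgements weight sound far more heavily than looks, squeezing out the variation in sound quality – as a final between near-equally matched performers does – leaves looks as the factor that best predicts the judgement.

```python
import numpy as np

rng = np.random.default_rng(0)

def factor_correlations(sound_sd, looks_sd=1.0, w_sound=0.8, w_looks=0.2, n=100_000):
    """Simulated judgement = 0.8*sound + 0.2*looks + noise.
    Returns the correlation of each factor with the judgement."""
    sound = rng.normal(0.0, sound_sd, n)   # how much performances differ in sound
    looks = rng.normal(0.0, looks_sd, n)   # how much performers differ in appearance
    judgement = w_sound * sound + w_looks * looks + rng.normal(0.0, 0.1, n)
    r_sound = np.corrcoef(sound, judgement)[0, 1]
    r_looks = np.corrcoef(looks, judgement)[0, 1]
    return round(r_sound, 2), round(r_looks, 2)

# Everyday listening: performances vary a lot in sound quality.
print(factor_correlations(sound_sd=1.0))    # sound correlates most with judgements
# Competition final: sound quality nearly matched, looks still vary.
print(factor_correlations(sound_sd=0.05))   # now looks dominate the judgement
```

The influence of looks is real in both runs; what changes is how much of the remaining variation it accounts for once the other factor is held nearly constant.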

Many psychological effects are an impressive tribute to the skill of experimenters in designing situations where most factors are held equal, allowing us to highlight the role of subtle psychological factors. But we shouldn’t let this blind us to the fact that the existence of an effect due to a psychological factor isn’t the same as showing how important this factor is relative to all others, nor is it the same as showing that our effect will hold when all these other factors start varying.

Link: Are classical music competitions judged on looks? – critique of Tsay (2013) written for The Conversation

Link: A good twitter thread on the related issue of effect size – and yah-boo to anyone who says you can’t have a substantive discussion on social media

UPDATE: The paper does give evidence that the sound stimuli used do influence people’s judgements systematically – it was incorrect of me to say that differences due to sound have been removed. I have corrected the post to reflect what I believe the study shows: that differences due to sound have been minimised, so that differences in looks are emphasised.

Explore our back pages

At our birthday party on Thursday I told people how I’d crunched the stats for the 10 years of mindhacks.com posts. Nearly 5000 posts, and over 2 million words – an incredible achievement (for which 96% of the credit should go to Vaughan).

In 2010 we had an overhaul (thanks JD for this, and Matt for his continued support of the tech side of the site). I had a look at the stats, which only date back till then, and pulled out our all time most popular posts. Here they are:

[image: our ten most popular posts]

Something about the enthusiasm of last Thursday inspired me to put links to the top ten posts on a wiki. Since it is a wiki anyone can jump in and edit, so if there are any bits of the mindhacks.com back catalogue that you think are worth leaving a placeholder to, feel free to add them. Vaughan and I will add links to a few of our favourite posts, so check back and see how it is coming along.

Link: Mind Hacks wiki

Evidence based debunking

Fed up with futile internet arguments, a bunch of psychologists investigated how best to correct false ideas. Tom Stafford discovers how to debunk properly.

We all resist changing our beliefs about the world, but what happens when some of those beliefs are based on misinformation? Is there a right way to correct someone when they believe something that’s wrong?

Stephan Lewandowsky and John Cook set out to review the science on this topic, and even carried out a few experiments of their own. This effort led to their “Debunking Handbook“, which gives practical, evidence-based techniques for correcting misinformation about, say, climate change or evolution. Yet the findings apply to any situation where you find the facts are falling on deaf ears.

The first thing their review turned up is the importance of “backfire effects” – when telling people that they are wrong only strengthens their belief. In one experiment, for example, researchers gave people newspaper corrections that contradicted their views and politics, on topics ranging from tax reform to the existence of weapons of mass destruction. The corrections were not only ignored – they entrenched people’s pre-existing positions.

Backfire effects pick up strength when you have no particular reason to trust the person you are talking to. This perhaps explains why climate sceptics with more scientific education tend to be the most sceptical that humans are causing global warming.

The irony is that understanding backfire effects requires that we debunk a false understanding of our own. Too often, argue Lewandowsky and Cook, communicators assume a ‘deficit model’ in their interactions with the misinformed. This is the idea that we have the right information, and all we need to do to make people believe is to somehow “fill in” the deficit in other people’s understanding. Just telling people the evidence for the truth will be enough to replace their false beliefs. Beliefs don’t work like that.

Psychological factors affect how we process information – such as what we already believe, who we trust and how we remember. Debunkers need to work with this, rather than against it, if they want the best chance of being believed.

The most important thing is to provide an alternative explanation. An experiment by Hollyn Johnson and Colleen Seifert shows how to persuade people better. These two psychologists recruited participants to listen to news reports about a fictional warehouse fire, and then answer some comprehension questions.

Some of the participants were told that the fire was started by a short circuit in a closet near some cylinders containing potentially explosive gas. Yet when this information was corrected – by saying the closet was empty – they still clung to the belief.

A follow-up experiment showed how to correct such misinformation effectively. It was similar to the first, except that some participants were given a plausible alternative explanation: evidence had been found that arson caused the fire. Only those given this plausible alternative were able to let go of the misinformation about the gas cylinders.

Lewandowsky and Cook argue that experiments like these show the dangers of arguing against a misinformed position. If you try and debunk a myth, you may end up reinforcing that belief, strengthening the misinformation in people’s minds without making the correct information take hold.

What you must do, they argue, is start with the plausible alternative (the one that you, obviously, believe is correct). If you must mention a myth, mention it second, and only after clearly warning people that you’re about to discuss something that isn’t true.

This debunking advice is also worth bearing in mind if you find yourself clinging to your own beliefs in the face of contradictory facts. You can’t be right all of the time, after all.

Read more about the best way to win an argument.

If you have an everyday psychological phenomenon you’d like to see written about in these columns please get in touch @tomstafford or ideas@idiolect.org.uk. Thanks to Ullrich Ecker for advice on this topic.

This is my BBC Future column from last week, original here

Why our faith in cramming is mistaken

You may think you know your own mind, but when it comes to memory, research suggests that you don’t. If we’re trying to learn something, many of us study in ways that prevent the memories sticking. Fortunately, the same research also reveals how we can supercharge our learning.

We’ve all had to face a tough exam at least once in our lives. Whether it’s a school paper, university final or even a test at work, there’s one piece of advice we’re almost always given: make a study plan. With a plan, we can space out our preparation for the test rather than relying on one or two intense study sessions the night before to see us through.

It’s good advice. Summed up in three words: cramming doesn’t work. Unfortunately, many of us ignore this rule. At least one survey has found that 99% of students admit to cramming.

You might think that’s down to nothing more than simple disorganisation: I’ll admit it is far easier to leave things to the last minute than to start preparing for a test weeks or months ahead. But studies of memory suggest there’s something else going on. In 2009, for example, Nate Kornell at the University of California, Los Angeles, found that spacing out learning was more effective than cramming for 90% of the participants who took part in one of his experiments – and yet 72% of the participants thought that cramming had been more beneficial. What is happening in the brain that leads us to trick ourselves this way?

Studies of memory suggest that we have a worrying tendency to rely on our familiarity with study items to guide our judgements of whether we know them. The problem is that familiarity is bad at predicting whether we can recall something.

Familiar, not remembered

After six hours of looking at study material (and three cups of coffee and five chocolate bars) it’s easy to think we have it committed to memory. Every page, every important fact, evokes a comforting feeling of familiarity. The cramming has left a lingering glow of activity in our sensory and memory systems, a glow that allows our brain to swiftly tag our study notes as “something that I’ve seen before”. But being able to recognise something isn’t the same as being able to recall it.

Different parts of the brain support different kinds of memory. Recognition is strongly affected by the ease with which information passes through the sensory areas of our brain, such as the visual cortex if you are looking at notes. Recall is supported by a network of different areas of the brain, including the frontal cortex and the temporal lobe, which coordinate to recreate a memory from the clues you give it. Just because your visual cortex is fluently processing your notes after five consecutive hours of you looking at them, doesn’t mean the rest of your brain is going to be able to reconstruct the memory of them when you really need it to.

This ability to make judgements about our own minds is called metacognition. Studying it has identified other misconceptions too. For instance, many of us think that actively thinking about trying to learn something will help us remember it. Studies suggest this is not the case. Far more important is reorganising the information so that it has a structure more likely to be retained in your memory. In other words, rewrite the content of what you want to learn in a way that makes most sense to you.

Knowing about common metacognitive errors means you can help yourself by assuming that you will make them. You can then try and counteract them. So, the advice to space out our study only makes sense if we assume that people aren’t already spacing out their study sessions enough (a safe assumption, given the research findings). We need to be reminded of the benefits of spaced learning because it runs counter to our instinct to rely on a comforting feeling of familiarity when deciding how to study.

Put simply, we can sometimes have a surprising amount to gain from going against our normally reliable metacognitive instinct. How much should you space out your practice? Answer: a little bit more than you really want to.

This is my BBC Future article from last week. The original is here

Problems with Bargh’s definition of unconscious

I have a new paper out in Frontiers in Psychology: The perspectival shift: how experiments on unconscious processing don’t justify the claims made for them. There has been ongoing consternation about the reliability of some psychology research, particularly studies which make claims about unconscious (social) priming. However, even if we assume that the empirical results are reliable, the question remains whether the claims made for the power of the unconscious make any sense. I argue that they often don’t.

Here’s something from the intro:

In this commentary I draw attention to certain limitations on the inferences which can be drawn about participants’ awareness from the experimental methods which are routine in social priming research. Specifically, I argue that (1) a widely employed definition of unconscious processing, promoted by John Bargh, is incoherent, and (2) many experiments involve a perspectival sleight of hand, taking factors identified from comparison of average group performance and inappropriately ascribing them to the reasoning of individual participants.

The problem, I claim, is that many studies on ‘unconscious processing’ follow John Bargh in defining unconscious as meaning “not reported at the time”. This means that experimenters over-diagnose unconscious influence, when the possibility remains that participants were completely conscious of the influence of the stimuli but did not report it because they had forgotten, were worried about sounding silly, or because the importance of the stimuli was genuinely trivial compared to other factors.

It is this last point which makes up the ‘perspectival shift’ of the title. Experiments on social priming usually work by comparing some measure (e.g. walking speed or reaction time) across two groups. My argument is that the factors which make up the total behaviour for each individual will be many and various. The single factor which the experimenter is interested in may have a non-zero effect, yet can still justifiably escape report by the majority of participants. To make this point concrete: if I ask you to judge how likeable someone is on the 1 to 7 scale, your judgement will be influenced by many factors, such as if they are like you, if you are in a good mood, the content of your interaction with the person, if they really are likeable and so on. Can we really expect participants to report an effect due to something that only the experimenter sees variation in, such as whether they are holding a hot drink or a cold drink at the time of judgement? We might as well expect them to report the effect due to them growing up in Europe rather than Asia, or being born in 1988 not 1938 (both surely non-zero effects in my hypothetical experiment).
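
To make the scale of the problem vivid, here is a toy simulation of my own (not from the paper, and with made-up numbers): a priming manipulation adds a real but small amount to each primed participant’s walking time, yet that effect is dwarfed by the ordinary variation between individuals.

```python
# Toy simulation (my own illustration, numbers invented): a prime with a
# real, non-zero effect on the group mean that is tiny for any individual.
import random

random.seed(1)

N = 200                 # participants per group (hypothetical)
PRIME_EFFECT = 0.2      # seconds the prime adds to walking time
OTHER_FACTORS_SD = 3.0  # spread from everything else (mood, fitness, ...)

control = [random.gauss(30.0, OTHER_FACTORS_SD) for _ in range(N)]
primed = [random.gauss(30.0 + PRIME_EFFECT, OTHER_FACTORS_SD) for _ in range(N)]

mean_diff = sum(primed) / N - sum(control) / N
print(f"Difference in group means: {mean_diff:.2f} s")        # hovers around 0.2 s
print(f"Typical individual variation: {OTHER_FACTORS_SD} s")  # roughly 15x larger
```

With enough participants the group comparison can pick up the small effect, but for any one person it is buried in variation they have every reason to attribute to other things, which is exactly why failing to report it need not mean the influence was unconscious.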

More on this argument, and what I think it means, in the paper:

Stafford, T. (2014) The perspectival shift: how experiments on unconscious processing don’t justify the claims made for them. Frontiers in Psychology, 5, 1067. doi:10.3389/fpsyg.2014.01067

I originally started writing this commentary as a response to this paper by Julie Huang and John Bargh, which I believe is severely careless with the language it uses to discuss unconscious processing (and so a good example of the conceptual trouble you can get into if you start believing the hype around social priming).

Full disclosure: I am funded by the Leverhulme Trust to work on a project looking at the philosophy and psychology of implicit bias. This post is cross-posted on the project blog.