Radical embodied cognition: an interview with Andrew Wilson

The computational approach is the orthodoxy in psychological science. We try to understand the mind using the metaphors of information processing and the storage and retrieval of representations. These ideas are so common that it is easy to forget that there is any alternative. Andrew Wilson is on a mission to remind us that there is an alternative – a radical, non-representational, non-information-processing take on what cognition is.

I sent him a few questions by email. After he answered these, and some follow-up questions, we both edited and agreed on the result, which you can read below.


Q1. Is it fair to say you are at odds with lots of psychology, theoretically? Can you outline why?

Psychology wants to understand what causes our behaviour. Cognitive psychology's explanation is that behaviour is caused by internal states of the mind (or brain, if you like). These states are called mental representations, and they are models/simulations of the world that we use to figure out what to do and when to do it.

Cognitive psychology thinks we have representations because it assumes we have very poor sensory access to the world, e.g. vision supposedly begins with a patchy 2D image projected onto the back of the eye. We need these models to literally fill in the gaps by making an educated guess (‘inference’) about what caused those visual sensations.

My approach is called radical embodied cognitive psychology; ‘radical’ just means ‘no representations’. It is based on the work of James J Gibson. He was a perceptual psychologist who demonstrated that there is actually rich perceptual information about the world, and that we use this information. This is why perception and action are so amazingly successful most of the time, which is important because failures of perception have serious consequences for your health and wellbeing (e.g. falling on ice).

The most important consequence of this discovery is that when we have access to this information, we don’t need those internal models anymore. This then means that whatever the brain is doing, it’s not building models of the world in order to cause our behaviour. We are embedded in our environments and our behaviour is caused by the nature of that embedding (specifically, which information variables we are using for any given task).

So I ask very different questions from the typical psychologist’s: instead of ‘what mental model lets me solve this task?’, I ask ‘what information is there to support the observed behaviour, and can I find evidence that we use it?’ When we get the right answer to the information question, we have great success in explaining and then predicting behaviour, which is actually the goal of psychology.


Q2. The idea that there are no mental representations is hard to get your head around. What about situations where behaviour seems to be based on things which aren’t there, like imagination, illusions or predictions?

First, saying that there are no mental representations is not saying that the brain is not up to something. This is a surprisingly common mistake, but I think it’s due to the fact that cognitive psychologists have come to equate ‘brain activity’ with ‘representing’, so denying the latter sounds like denying the former (see Is Embodied Cognition a No-Brainer?).

Illusions simply reveal how important it is to perception that we can move and explore. They are all based on a trick and they almost always require an Evil Psychologist™ lurking in the background. Specifically, illusions artificially restrict access to information so that the world looks like it’s doing one thing when it is really doing another. They only work if you don’t let people do anything to reveal the trick. Most visual illusions are revealed as such by exploring them, e.g. by looking at them from a different perspective (as with the Ames Room).

Imagination and prediction are harder to talk about in this framework, but only because no one’s really tried. For what it’s worth, people are terrible at actively predicting things, and whatever imagination is, it will be a side-effect of our ability to engage with the real world, not part of how we engage with the real world.


Q3. Is this radical approach really denying the reality of cognitive representations, or just using a different descriptive language in which they don’t figure? In other words, can you and the cognitivists both be right?

If the radical hypothesis is right, then a lot of cognitive theories will be wrong. Those theories all assume that information comes into the brain, is processed by representations and then output as behaviour. If we successfully replace representations with information, all those theories will be telling the wrong story. ‘Interacting with information’ is a completely different job description for the brain than ‘building models of the world’. This is another reason why it’s ‘radical’.


Q4. Even if I concede that you can think of the mind like this, can you convince me that I should? Why is it useful? What does this approach do for cognitive science that the conventional approach isn’t or can’t?

There are two reasons, I think. The first is empirical: this approach works very, very well. Whenever a researcher works through a problem using this approach, they find robust answers that stand up to extended scrutiny in the lab. These solutions then make novel predictions that also perform well – examples are topics like the outfielder problem and the A-not-B error [see below for references]. Cognitive psychology is filled with small, difficult-to-replicate effects; this is actually a hint that we aren’t asking the right questions. Radical embodied cognitive science tends to produce large, robust and interpretable effects, which I take as a hint that our questions are closer to the mark.

The second is theoretical. The major problem with representations is that it’s not clear where they get their content from. Representations supposedly encode knowledge about the world that we use to make inferences to support perception, etc. But if we have such poor perceptual contact with the world that we need representations, how did we ever get access to the knowledge we needed to encode? This grounding problem is a disaster. Radical embodiment solves it by never creating it in the first place – we are in excellent perceptual contact with our environments, so there are no gaps for representations to fill, therefore no representations that need content.


Q5. Who should we be reading to get an idea of this approach?

‘Beyond the Brain’ by Louise Barrett. It’s accessible and full of great stuff.

‘Radical Embodied Cognitive Science’ by Tony Chemero. It’s clear and well written but it’s pitched at trained scientists more than the generally interested lay person.

‘Embodied Cognition’ by Lawrence Shapiro clearly lays out all the various flavours of ‘embodied cognition’. My work is the ‘replacement’ hypothesis.

‘The Ecological Approach to Visual Perception’ by James J Gibson is an absolute masterpiece and the culmination of all his empirical and theoretical work.

I run a blog at http://psychsciencenotes.blogspot.co.uk/ with Sabrina Golonka where we discuss all this a lot, and we tweet @PsychScientists. We’ve also published a few papers on this, the most relevant of which is ‘Embodied Cognition is Not What You Think It Is’.


Q6. And finally, can you point us to a few blog posts you’re proudest of which illustrate this way of looking at the world?

What Else Could It Be? (where Sabrina looks at the question, what if the brain is not a computer?)

Mirror neurons, or, What’s the matter with neuroscience? (how the traditional model can get you into trouble)

Prospective Control – The Outfielder problem (an example of the kind of research questions we ask)
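
For a concrete feel of the outfielder problem, here is a minimal toy simulation, assuming the Optical Acceleration Cancellation strategy proposed by Chapman (1968) and tested in this literature. The fielder never computes where the ball will land; they simply accelerate so as to null the acceleration of a single optical variable. The function name, parameter values and control gain are our own illustrative choices, not from any published model:

```python
# Toy simulation of Optical Acceleration Cancellation (OAC) for the
# outfielder problem (after Chapman, 1968). The fielder runs so as to null
# the second derivative of tan(alpha), the tangent of the ball's optical
# elevation angle; no prediction of the landing point is ever computed.
# All numbers below are illustrative, not fitted to data.

import math

G = 9.81  # gravitational acceleration, m/s^2

def simulate_oac(ball_speed=30.0, launch_deg=50.0, fielder_x=60.0,
                 gain=20.0, max_speed=9.0, dt=0.005):
    """Run one fly ball; return the fielder-ball gap when the ball lands."""
    vx = ball_speed * math.cos(math.radians(launch_deg))
    vy = ball_speed * math.sin(math.radians(launch_deg))
    bx = by = 0.0               # ball position (batter at the origin)
    fx, fv = fielder_x, 0.0     # fielder position and velocity (+x = deeper)
    tans = []                   # recent tan(alpha) samples for differencing
    while by >= 0.0:            # until the ball lands
        bx += vx * dt           # ballistic flight, no air resistance
        vy -= G * dt
        by += vy * dt
        # the optical variable: tangent of the elevation angle to the ball
        tans.append(by / max(fx - bx, 0.5))   # guard against a zero gap
        if len(tans) >= 3:
            optical_accel = (tans[-1] - 2 * tans[-2] + tans[-3]) / dt ** 2
            # OAC control law: if tan(alpha) accelerates, the ball will land
            # behind you, so speed up backwards (+x); if it decelerates, it
            # will drop in front of you, so run forwards (-x).
            fv += gain * optical_accel * dt
            fv = max(-max_speed, min(max_speed, fv))  # human speed limit
        fx += fv * dt
    return abs(bx - fx)

print(f"distance to ball at landing: {simulate_oac():.2f} m")
```

The point of a model like this is exactly the radical one: the behaviour is explained by an informational variable plus a control law, with no internal simulation of the ball's flight.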

The scientist as problem solver

Start the week with one of the founding fathers of cognitive science: in ‘The scientist as problem solver’, Herb Simon (1916–2001) gives a short retrospective of his scientific career.

To tell the story of the research he has done, he advances a thesis: “The Scientist is a problem solver. If the thesis is true, then we can dispense with a theory of scientific discovery – the processes of discovery are just applications of the processes of problem solving.” Quite aside from the usefulness of this perspective, the paper is a reminder of the intoxicating possibility of integration across the physical, biological and social sciences: Simon worked on economics, management theory, complex systems and artificial intelligence, as well as what we’d now call cognitive psychology.

He uses his own work on designing problem solving algorithms to reflect on how he – and other scientists – can and should make scientific progress. Towards the end he expresses what would be regarded as heresy in many experimentally orientated psychology departments. He suggests that many of his most productive investigations lacked a contrast between experimental and control conditions. Did this mean they were worthless, he asks. No:

…You can test theoretical models without contrasting an experimental with a control condition. And apart from testing models, you can often make surprising observations that give you ideas for new or improved models…

Perhaps it is not our methodology that needs revising so much as the standard textbook methodology, which perversely warns us against running an experiment until precise hypotheses have been formulated and experimental and control conditions defined. How do such experiments ever create surprise – not just the all-too-common surprise of having our hypotheses refuted by facts, but the delight-provoking surprise of encountering a wholly unexpected phenomenon? Perhaps we need to add to the textbooks a chapter, or several chapters, describing how basic scientific discoveries can be made by observing the world intently, in the laboratory or outside it, with controls or without them, heavy with hypotheses or innocent of them.

Simon, H. A. (1989). The scientist as problem solver. In D. Klahr & K. Kotovsky (Eds.), Complex information processing: The impact of Herbert A. Simon (pp. 375–398). Lawrence Erlbaum Associates.

You can’t play 20 questions with nature and win

“You can’t play 20 questions with nature and win” is the title of Allen Newell’s 1973 paper, a classic in cognitive science. In the paper he confesses that although he sees many excellent psychology experiments, all making undeniable scientific contributions, he can’t imagine them cohering into progress for the field as a whole. He describes the state of psychology as focussed on individual phenomena – mental rotation, chunking in memory, subitizing, etc. – studied in ways designed to resolve binary questions – issues such as nature vs nurture, conscious vs unconscious, serial vs parallel processing.

There is, I submit, a view of the scientific endeavor that is implicit (and sometimes explicit) in the picture I have presented above. Science advances by playing twenty questions with nature. The proper tactic is to frame a general question, hopefully binary, that can be attacked experimentally. Having settled that bits-worth, one can proceed to the next. The policy appears optimal – one never risks much, there is feedback from nature at every step, and progress is inevitable. Unfortunately, the questions never seem to be really answered, the strategy does not seem to work.

As I considered the issues raised (single code versus multiple code, continuous versus discrete representation, etc.) I found myself conjuring up this model of the current scientific process in psychology – of phenomena to be explored and their explanation by essentially oppositional concepts. And I couldn’t convince myself that it would add up, even in thirty more years of trying, even if one had another 300 papers of similar, excellent ilk.

His diagnosis of one reason why phenomena can generate an endless stream of excellent papers without endless progress is that people can do the same task in different ways. Many experiments dissect how people are doing a task without sufficiently constraining the things Newell says are essential to predicting behaviour (the person’s goals and the structure of the task environment), and so provide no insight into the ultimate target of investigation: the invariant structure of the mind’s processing mechanisms. As a minimum, he concludes, we must know the method participants are using, and never average over different methods. But this may not be enough:

That the same human subject can adopt many (radically different) methods for the same basic task, depending on goal, background knowledge, and minor details of payoff structure and task texture — all this — implies that the “normal” means of science may not suffice.

As a prognosis for how to make real progress in understanding the mind, he proposes three possible courses of action:

  1. Develop complete processing models – i.e. simulations which are competent to perform the task and include a specification of the way in which different subfunctions (called ‘methods’ by Newell) are deployed (a toy sketch of this idea follows the list).
  2. Analyse a complex task, completely, ‘to force studies into intimate relation with each other’, the idea being that giving a full account of a single task, any task, will force contradictions between theories of different aspects of the task into the open.
  3. ‘One program for many tasks’ – construct a general purpose system which can perform all mental tasks, in other words an artificial intelligence.
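
To make point 1 concrete, here is a minimal sketch of the kind of ‘complete processing model’ Newell had in mind: a tiny production system, the formalism he championed and later built his SOAR architecture around. The task, rule names and helper functions are our own hypothetical illustration; the point is only that which ‘method’ gets deployed is specified explicitly rather than averaged over:

```python
# A toy production system, the formalism Newell championed: behaviour is
# generated by rules ('methods') firing against working memory, so a model
# in this style must specify exactly how methods are deployed. The task and
# rule names here are invented for illustration.

from dataclasses import dataclass
from typing import Callable, FrozenSet, List, Set

@dataclass
class Rule:
    name: str
    conditions: FrozenSet[str]              # facts that must all be present
    action: Callable[[Set[str]], Set[str]]  # returns new facts to add

def run(rules: List[Rule], memory: Set[str], goal: str) -> Set[str]:
    """Match-select-fire cycle: fire the first matching rule until goal."""
    while goal not in memory:
        matching = [r for r in rules if r.conditions <= memory]
        if not matching:
            raise RuntimeError("impasse: no method applies")
        memory |= matching[0].action(memory)  # trivial conflict resolution
    return memory

# Two alternative methods for one toy task (adding 3 + 4): retrieval is
# preferred when the relevant fact is known, counting is the fallback.
rules = [
    Rule("retrieve-fact", frozenset({"problem:3+4", "knows-facts"}),
         lambda m: {"answer:7", "used-retrieval"}),
    Rule("count-up", frozenset({"problem:3+4"}),
         lambda m: {"answer:7", "used-counting"}),
]
print(run(rules, {"problem:3+4", "knows-facts"}, "answer:7"))
```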

It was the third of these strategies which preoccupied a lot of Newell’s subsequent attention. He developed a general problem-solving architecture he called SOAR, which he presented as a unified theory of cognition, and which he worked on until his death in 1992.

The paper is over forty years old, but still full of useful thoughts for anyone interested in the sciences of the mind.

Reference and link:
Newell, A. (1973). You can’t play 20 questions with nature and win: Projective comments on the papers of this symposium. In W. G. Chase (Ed.), Visual Information Processing: Proceedings of the Eighth Annual Carnegie Symposium on Cognition, Carnegie-Mellon University, Pittsburgh, Pennsylvania, May 19, 1972. Academic Press.

See a nice picture of Newell from the Computer History Museum

Towards a nuanced view of mental distress

In the latest edition of The Psychologist I’m involved in a debate with John Cromby about whether our understanding of mental illness is mired in the past.

He thinks it is, I think it isn’t, and we kick off from there.

The article is readable online with a free registration but I’ve put the unrestricted version online as a pdf if you want to read it straight away.

Much of the debate is over the role of biological explanations in understanding mental distress, which I think is widely misunderstood.

Hopefully, amid the knockabout, the debate gets to clarify some of that.

Either way, I hope it raises a few useful reflections.

Link to ‘Are understandings of mental illness mired in the past?’ (free reg).
pdf of full debate.

The wrong sort of discussion

The Times Higher Education has an article on post-publication peer review, and whether it will survive legal challenges.

The legal action launched by a US scientist who claims that anonymous comments questioning his science cost him a lucrative job offer has raised further questions about the potential for post-publication peer review to replace pre-publication review.

The article chimes with comments made by several prominent psychologists who have been at the centre of controversies and who have decried the way their work has been discussed outside the normal channels of the academic journals.

Earlier this year the head of a clinical trial of Tamiflu wrote to the British Medical Journal to protest that a BMJ journalist had solicited independent critique of the stats used in his work – “going beyond the reasonable response to a press release”.

John Bargh (Yale University), in his now infamous ‘nothing in their heads’ blogpost, accused the open access journal PLoS of lacking “the usual high scientific journal standards of peer-review scrutiny”, and accused Ed Yong – laughably – of “superficial online science journalism”. He concluded:

“I am not so much worried about the impact on science of essentially self-published failures to replicate as much as I’m worried about your ability to trust supposedly reputable online media sources for accurate information on psychological science.”

Simone Schnall (University of Cambridge) is a social psychologist whose work has also been at the centre of the discussion about replication (backstory, independent replication of her work recently reported). She has recently written that ‘no critical discussion is possible’ on social media, where ‘judgments are made quickly nowadays in social psychology and definitively’.

See also this comment from a scientist, made when a controversial paper – one suggesting that many correlations in fMRI studies of social psychological constructs were impossibly high – was widely discussed before publication: “I was shocked, this is not the way that scientific discourse should take place.”

The common theme is a lack of faith in the uncontrolled scientific discussion that now happens in public, before and after publication in the journal-sanctioned official record. Coupled, perhaps, with a lack of faith in other people to understand – let alone run – psychological research.

Scientific discussion has always been uncontrolled, of course; the differences now are in how open the discussion is, and who takes part. Pre social media, ‘insider’ discussions of specialist topics took place inside psychology departments, and at conference dinners and other social gatherings of researchers.

My optimistic take is that social media allows access to people who would not normally have it due to constraints on geography, finance or privilege. Social media means that if you’re in the wrong institution, aren’t funded, or have someone to look after at home that means you can’t fly to the conference, you can still experience and contribute to specialist discussions. That’s a massive and positive change, and one we should protect as we work out how scientific discussion should take place in the 21st century.

Link: Simone Schnall’s comments in full: blog, video

Previously: Stafford, T., & Bell, V. (2012). Brain network: social media and the cognitive scientist. Trends in Cognitive Sciences, 16(10), 489–490. doi:10.1016/j.tics.2012.08.001

Previously: What Jason Mitchell’s ‘On the emptiness of failed replications’ gets right, which includes some less optimistic notes on the current digital disruption of scholarly ways of working.

Distraction effects

I’ve been puzzling over this tweet from Jeff Rouder:


Surely, I thought, psychology is built out of effects. What could be wrong with focussing on testing which ones are reliable?

But I think I’ve got it now. The thing about effects is that they show that you – an experimental psychologist – can construct a situation where some factor you are interested in is important, relative to all the other factors (which you have managed to hold constant).

To see why this might be a problem, consider this paper by Tsay (2013): “Sight over sound in the judgment of music performance”. This was a study which asked people to select the winners of a classical music competition from 6-second clips of them performing. Some participants got the audio, so they could only hear the performance; others got the video, so they could only see the performance; and some got both audio and video. Only those participants who watched the video, without sound, could select the actual competition winners at above-chance levels. This demonstrates a significant bias effect of sight in judgements of music performance.
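
As an aside, here is what ‘above chance’ means here: with three finalists per clip, a guesser picks the winner with probability 1/3, and you test whether a participant’s hit rate beats that. A minimal sketch of the calculation; the function name and the counts are invented for illustration, not taken from Tsay’s data:

```python
# One-sided exact binomial test of 'above chance' selection. With three
# finalists per clip, chance is p = 1/3. The counts are hypothetical,
# chosen only to show the calculation, not taken from Tsay (2013).

from math import comb

def binom_p_upper(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the one-sided exact p-value."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

k, n, chance = 25, 60, 1 / 3     # 25 correct picks out of 60 clips
print(f"hit rate {k / n:.2f} vs chance {chance:.2f}, "
      f"one-sided p = {binom_p_upper(k, n, chance):.3f}")
```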

To understand the limited importance of this effect, contrast it with the overclaims made by the paper: “people actually depend primarily on visual information when making judgments about music performance” (in the abstract) and “[Musicians] relegate the sound of music to the role of noise” (the concluding line). Contrary to these claims, the study doesn’t show that looks dominate sound in how we assess music. It isn’t the case that our musical taste is mostly determined by how musicians look.

The Tsay studies took the 3 finalists from classical music competitions – the best of the best of expert musicians – and used brief clips of their performances as stimuli. By my reckoning, this scenario removes almost all differences in quality of the musical performance. Evidence in support of this is that Tsay didn’t find any difference in performance between non-expert participants and professional musicians, which strongly suggests that she designed a task in which musical knowledge isn’t an important factor.

This is why it isn’t reasonable to conclude that people are making judgments about musical performance in general. The clips don’t let you judge relative musical quality, but – for these almost equally matched performances – they do let you reflect the same biases as the judges, biases which include an influence of appearance as well as sound. The bias matters, not least because it obviously affects who won, but proving it exists is completely separate from the matter of whether our overall judgement of music is affected more by sight or sound.

Further, there’s every reason to think that the study of the bias effect supports the opposite conclusion to a study of overall importance. In these experiments sight dominates sound because differences due to sound have been minimised. In most situations where we decide our music preferences, sound is obviously massively more important.
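
Here is a minimal simulation of that logic, with invented weights and counts (the function names and numbers are ours, not from the paper): judgements are a weighted sum of sound quality and looks, with sound genuinely weighted three times more heavily, yet sight ‘wins’ once sound barely varies:

```python
# A toy illustration of how an experiment can make a weak factor dominate by
# minimising variation in a strong one. Judgements are a weighted sum of
# sound quality and looks; the weight on sound is three times larger, yet
# sight correlates more strongly with judgements once sound barely varies.

import random

random.seed(1)
W_SOUND, W_SIGHT = 3.0, 1.0   # sound genuinely matters more

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def run_condition(sound_sd, sight_sd, n=10_000):
    sound = [random.gauss(0, sound_sd) for _ in range(n)]
    sight = [random.gauss(0, sight_sd) for _ in range(n)]
    judge = [W_SOUND * so + W_SIGHT * si + random.gauss(0, 0.5)
             for so, si in zip(sound, sight)]
    return correlation(sound, judge), correlation(sight, judge)

# everyday listening: performances differ a lot in sound quality
print("in the wild: r(sound)=%.2f  r(sight)=%.2f" % run_condition(1.0, 1.0))
# competition finalists: sound differences minimised, looks still vary
print("in the lab:  r(sound)=%.2f  r(sight)=%.2f" % run_condition(0.05, 1.0))
```

The weight on sound never changes between the two conditions; only the variance available to it does.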

Many psychological effects are an impressive tribute to the skill of experimenters in designing situations where most factors are held equal, allowing us to highlight the role of subtle psychological factors. But we shouldn’t let this blind us to the fact that the existence of an effect due to a psychological factor isn’t the same as showing how important this factor is relative to all others, nor is it the same as showing that the effect will hold when all those other factors start varying.

Link: Are classical music competitions judged on looks? – critique of Tsay (2013) written for The Conversation

Link: A good twitter thread on the related issue of effect size – and yah-boo to anyone who says you can’t have a substantive discussion on social media

UPDATE: The paper does give evidence that the sound stimuli used do influence people’s judgements systematically – it was incorrect of me to say that differences due to sound have been removed. I have corrected the post to reflect what I believe the study shows: that differences due to sound have been minimised, so that differences in looks are emphasised.

Social psychology has lost its balance

The New Yorker has an interesting article about a lack of political diversity in social psychology and how that may be leading to a climate of bias against conservative researchers, ideas and the evidence that might support them.

Some of the evidence for a bias against conservative thinking in social psychology goes back some years, and the article gives a good account of the empirical work as well as the debate.

However, the issue was recently raised again by morality researcher Jonathan Haidt, leading to renewed reflection on the extent of the problem.

There is a case to be made that, despite the imbalance, no formal changes need to be made, and that, on the whole, despite its problems, social psychology continues to function remarkably well and regularly produces high-quality research. Controversial work gets done. Even studies that directly challenge the field—like Haidt’s—are publicized and inspire healthy debate…

And yet the evidence for more substantial bias, against both individuals and research topics and directions, is hard to dismiss—and the hostility that some social psychologists have expressed toward the data suggests that self-correction may not be an adequate remedy.

A timely reminder of the eternal truth that bias is entirely non-partisan, and if you’ve not heard it before, a pointer to a great BBC Radio documentary that outlines how it works equally across people of every political stripe.

Link to ‘Is Social Psychology Biased Against Republicans?’