Radical embodied cognition: an interview with Andrew Wilson

The computational approach is the orthodoxy in psychological science. We try and understand the mind using the metaphors of information processing and the storage and retrieval of representations. These ideas are so common that it is easy to forget that there is any alternative. Andrew Wilson is on a mission to remind us that there is an alternative – a radical, non-representational, non-information processing take on what cognition is.

I sent him a few questions by email. After he answered these and some follow-up questions, we both edited and agreed on the result, which you can read below.

 

Q1. Is it fair to say you are at odds with lots of psychology, theoretically? Can you outline why?

Psychology wants to understand what causes our behaviour. Cognitive psychology explanations are that behaviour is caused by internal states of the mind (or brain, if you like). These states are called mental representations, and they are models/simulations of the world that we use to figure out what to do and when to do it.

Cognitive psychology thinks we have representations because it assumes we have very poor sensory access to the world, e.g. vision supposedly begins with a patchy 2D image projected onto the back of the eye. We need these models to literally fill in the gaps by making an educated guess (‘inference’) about what caused those visual sensations.

My approach is called radical embodied cognitive psychology; ‘radical’ just means ‘no representations’. It is based on the work of James J Gibson. He was a perceptual psychologist who demonstrated that there is actually rich perceptual information about the world, and that we use this information. This is why perception and action are so amazingly successful most of the time, which is important because failures of perception have serious consequences for your health and wellbeing (e.g. falling on ice).

The most important consequence of this discovery is that when we have access to this information, we don’t need those internal models anymore. This then means that whatever the brain is doing, it’s not building models of the world in order to cause our behaviour. We are embedded in our environments and our behaviour is caused by the nature of that embedding (specifically, which information variables we are using for any given task).

So I ask very different questions than the typical psychologist: instead of ‘what mental model lets me solve this task?’ I ask ‘what information is there to support the observed behaviour and can I find evidence that we use it?’. When we get the right answer to the information question, we have great success in explaining and then predicting behaviour, which is actually the goal of psychology.

 

Q2. The idea that there are no mental representations is hard to get your head around. What about situations where behaviour seems to be based on things which aren’t there, like imagination, illusions or predictions?

First, saying that there are no mental representations is not saying that the brain is not up to something. This is a surprisingly common mistake, but I think it’s because cognitive psychologists have come to equate ‘brain activity’ with ‘representing’, so denying the latter seems to mean denying the former (see Is Embodied Cognition a No-Brainer?).

Illusions simply reveal how important it is to perception that we can move and explore. They are all based on a trick and they almost always require an Evil Psychologist™ lurking in the background. Specifically, illusions artificially restrict access to information so that the world looks like it’s doing one thing when it is really doing another. They only work if you don’t let people do anything to reveal the trick. Most visual illusions are revealed as such by exploring them, e.g. by looking at them from a different perspective (e.g. the Ames Room).

Imagination and prediction are harder to talk about in this framework, but only because no one’s really tried. For what it’s worth, people are terrible at actively predicting things, and whatever imagination is it will be a side-effect of our ability to engage with the real world, not part of how we engage with the real world.

 

Q3. Is this radical approach really denying the reality of cognitive representations, or just using a different descriptive language in which they don’t figure? In other words, can you and the cognitivists both be right?

If the radical hypothesis is right, then a lot of cognitive theories will be wrong. Those theories all assume that information comes into the brain, is processed by representations and then output as behaviour. If we successfully replace representations with information, all those theories will be telling the wrong story. ‘Interacting with information’ is a completely different job description for the brain than ‘building models of the world’. This is another reason why it’s ‘radical’.

 

Q4. Even if I concede that you can think of the mind like this, can you convince me that I should? Why is it useful? What does this approach do for cognitive science that the conventional approach isn’t or can’t?

There are two reasons, I think. The first is empirical; this approach works very, very well. Whenever a researcher works through a problem using this approach, they find robust answers that stand up to extended scrutiny in the lab. These solutions then make novel predictions that also perform well  – examples are topics like the outfielder problem and the A-not-B error [see below for references]. Cognitive psychology is filled with small, difficult to replicate effects; this is actually a hint that we aren’t asking the right questions. Radical embodied cognitive science tends to produce large, robust and interpretable effects which I take as a hint that our questions are closer to the mark.

The second is theoretical. The major problem with representations is that it’s not clear where they get their content from. Representations supposedly encode knowledge about the world that we use to make inferences to support perception, etc. But if we have such poor perceptual contact with the world that we need representations, how did we ever get access to the knowledge we needed to encode? This grounding problem is a disaster. Radical embodiment solves it by never creating it in the first place – we are in excellent perceptual contact with our environments, so there are no gaps for representations to fill, therefore no representations that need content.

 

Q5. Who should we be reading to get an idea of this approach?

‘Beyond the Brain’ by Louise Barrett. It’s accessible and full of great stuff.

‘Radical Embodied Cognitive Science’ by Tony Chemero. It’s clear and well written but it’s pitched at trained scientists more than the generally interested lay person.

‘Embodied Cognition’ by Lawrence Shapiro clearly lays out all the various flavours of ‘embodied cognition’. My work is the ‘replacement’ hypothesis.

‘The Ecological Approach to Visual Perception’ by James J Gibson is an absolute masterpiece and the culmination of all his empirical and theoretical work.

I run a blog at http://psychsciencenotes.blogspot.co.uk/ with Sabrina Golonka where we discuss all this a lot, and we tweet @PsychScientists. We’ve also published a few papers on this, the most relevant of which is ‘Embodied Cognition is Not What You Think It Is’.

 

Q6. And finally, can you point us to a few blog posts you’re proudest of which illustrate this way of looking at the world?

What Else Could It Be? (where Sabrina looks at the question, what if the brain is not a computer?)

Mirror neurons, or, What’s the matter with neuroscience? (how the traditional model can get you into trouble)

Prospective Control – The Outfielder problem (an example of the kind of research questions we ask)

Fluctuating existence

The Neurologist has a fascinating case report of a woman with Parkinson’s disease who experienced a fluctuating belief that she didn’t exist.

Cotard’s delusion is usually described as the ‘belief that you’re dead’ although Jules Cotard, for whom the delusion is named, defined it as a délire des négations – the delusion of negation, or nihilism, as it’s usually translated.

In fact, in his original case report, Cotard’s patient didn’t believe they were dead but that they had “no brain, nerves, chest, or entrails, and was just skin and bone”.

This new case report in The Neurologist describes a patient with Parkinson’s disease who experiences something similar with the delusion appearing as their Parkinson’s medication began to wear off.

In December 2010, she went to follow-up visit accompanied by her caregivers and they reported that, in the last 2 months, the patient has developed a sudden onset of nihilistic delusion, mainly during the “wearing-off” condition and associated with end of dose dyskinesias and akathisia. The patient repeatedly complained of having lost both of her eyes, mouth, nose, and ears. Often during these events, she insisted to have a mirror to see herself. She expressed the false belief that she did not have the whole body and that nothing existed, including herself, without any insight. This nihilistic delusion, compatible with the diagnosis of Cotard syndrome, clearly improved with the administration of the following dose of levodopa and the associated amelioration of motor symptoms.

This is interesting because the Parkinson’s medication – levodopa – is a precursor to dopamine and is used to increase dopamine levels in the brain.

Increased dopamine levels in mid-brain areas are considered to be a key causal factor in generating the delusions and hallucinations of psychosis, but in this case delusions reliably appeared as dopamine levels were likely to have been dropping due to the medication wearing off.

Although this is a single case study, the effect was reliable when repeated, though that doesn’t mean the same thing would happen to everybody in the same situation.

But what it really shows is that the neurobiology of psychosis is not a simple ‘chemical imbalance’ but, in part, a complex dysregulation that can affect individuals differently due to the inherent interconnectedness of neural systems.
 

Link to PubMed entry for case report.

Downsides of being a convincing liar

People who take shortcuts can trick themselves into believing they are smarter than they are, says Tom Stafford, and it comes back to bite them.

Honesty may be the best policy, but lying has its merits – even when we are deceiving ourselves. Numerous studies have shown that those who are practised in the art of self-deception might be more successful in the spheres of sport and business. They might even be happier than people who are always true to themselves. But is there ever a downside to believing our own lies?

An ingenious study by Zoe Chance of Yale University tested the idea, by watching what happens when people cheat on tests.

Chance and colleagues ran experiments which involved asking students to answer IQ and general knowledge questions. Half the participants were given a copy of the test paper which had – apparently in error – been printed with the answers listed at the bottom. This meant they had to resist the temptation to check or improve their answers against the real answers as they went along.

Irresistible shortcut

As you’d expect, some of these participants couldn’t help but cheat. Collectively, the group that had access to the answers performed better on the tests than participants who didn’t – even though both groups of participants were selected at random from students at the same university, so were, on average, of similar ability. (We can’t know for sure who was cheating – probably some of the people who had answers would have got high scores even without the answers – but it does mean that the group’s average performance was partly down to individual smarts, and partly down to having the answers at hand.)

The crucial question for Chance’s research was this: did people in the “cheater” group know that they’d been relying on the answers? Or did they attribute their success in the tests solely to their own intelligence?

The way the researchers tested this was to ask the students to predict how well they’d do on a follow-up test. They were allowed to quickly glance over the second test sheet so that they could see that it involved the same kind of questions – and, importantly, that no answers had been mistakenly printed at the bottom this time. The researchers reasoned that if the students who had cheated realised that cheating wasn’t an option the second time around, they should predict they wouldn’t do as well on this second test.

Not so. Self-deception won the day. The people who’d had access to the answers predicted, on average, that they’d get higher scores on the follow-up – equivalent to giving them something like a 10-point IQ boost. When tested, of course, they scored far lower.

The researchers ran another experiment to check that the effect was really due to the cheaters’ inflated belief in their own abilities. In this experiment, students were offered a cash reward for accurately predicting their scores on the second test. Sure enough, those who had been given the opportunity to cheat overestimated their ability and lost out – earning 20% less than the other students.

The implication is that people in Chance’s experiment – people very much like you and me – had tricked themselves into believing they were smarter than they were. There may be benefits from doing this – confidence, satisfaction, or more easily gaining the trust of others – but there are also certainly disadvantages. Whenever circumstances change and you need to accurately predict how well you’ll do, it can cost to believe you’re better than you are.

That self-deception has its costs has some interesting implications. Morally, most of us would say that self-deception is wrong. But aside from whether self-deception is undesirable, we should expect it to be present in all of us to some degree (because of the benefits), but to be limited as well (because of the costs).

Self-deception isn’t something that is always better in larger doses – there must be an amount of it for which the benefits outweigh the costs, most of the time. We’re probably all self-deceiving to some degree. The irony is that, because it is self-deception, we can’t know how often.

This is my BBC Future article from last week. The original is here

The scientist as problem solver

Start the week with one of the founding fathers of cognitive science: in ‘The scientist as problem solver‘, Herb Simon (1916-2001) gives a short retrospective of his scientific career.

To tell the story of the research he has done, he advances a thesis: “The Scientist is a problem solver. If the thesis is true, then we can dispense with a theory of scientific discovery – the processes of discovery are just applications of the processes of problem solving.” Quite aside from the usefulness of this perspective, the paper is a reminder of the intoxicating possibility of integration across the physical, biological and social sciences: Simon worked on economics, management theory, complex systems and artificial intelligence as well as what we’d now call cognitive psychology.

He uses his own work on designing problem solving algorithms to reflect on how he – and other scientists – can and should make scientific progress. Towards the end he expresses what would be regarded as heresy in many experimentally orientated psychology departments. He suggests that many of his most productive investigations lacked a contrast between experimental and control conditions. Did this mean they were worthless, he asks. No:

…You can test theoretical models without contrasting an experimental with a control condition. And apart from testing models, you can often make surprising observations that give you ideas for new or improved models…

Perhaps it is not our methodology that needs revising so much as the standard textbook methodology, which perversely warns us against running an experiment until precise hypotheses have been formulated and experimental and control conditions defined. How do such experiments ever create surprise – not just the all-too-common surprise of having our hypotheses refuted by facts, but the delight-provoking surprise of encountering a wholly unexpected phenomenon? Perhaps we need to add to the textbooks a chapter, or several chapters, describing how basic scientific discoveries can be made by observing the world intently, in the laboratory or outside it, with controls or without them, heavy with hypotheses or innocent of them.

REFERENCE
Simon, H. A. (1989). The scientist as problem solver. Complex information processing: The impact of Herbert A. Simon, 375-398.

Actually, still no good explanation of ‘that dress’

The last time I almost went blind staring at “that dress” was thanks to Liz Hurley and on this occasion I find myself equally unsatisfied.

I’ll spare you the introduction about the amazing blue/black or white/gold dress. But what’s left me rather disappointed are the numerous ‘science of the dress’ articles that have appeared everywhere and say they’ve explained the effect through colour constancy.

Firstly, this doesn’t explain what we want to know – which is why people differ in their perceptions, and secondly, I don’t think colour constancy is a good explanation on its own.

To explain a little, colour constancy is an effect of the human visual system where colours are perceived as being different depending on their context, as the brain adjusts for things like assumed lighting and surroundings. Here’s a good and topical example from XKCD. The dress colours are the same in both pictures but they seem different because the background colour is different.

An important feature of the visual system is that the experience of colour is not a direct result of the wavelength of the light being emitted by the surface. The brain modifies the experiences to try and ensure that things appear the same colour in different lighting, because if we just went off wavelength everything would wildly change colour as it moved through a world that is lit unevenly and has differently coloured light sources.

Visual illusions take advantage of this and there are plenty of examples where you can see that even completely physically identical colours can be perceived as markedly different shades if the image suggests one is in shadow and the other in direct light, for example.
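If you want to see the raw context effect for yourself, here’s a minimal sketch (my own illustration, not from the original post) of simultaneous contrast, a close cousin of colour constancy: two physically identical mid-grey squares drawn on a dark and a light surround. The particular grey values are arbitrary.

```python
# Minimal sketch: the same mid-grey square on two different surrounds.
# Grey values are arbitrary; requires matplotlib.
import matplotlib.pyplot as plt
import matplotlib.patches as patches

fig, axes = plt.subplots(1, 2, figsize=(6, 3))
for ax, surround in zip(axes, ["0.15", "0.85"]):      # dark vs light background
    ax.set_facecolor(surround)
    ax.add_patch(patches.Rectangle((0.35, 0.35), 0.3, 0.3, color="0.5"))  # identical grey
    ax.set_xlim(0, 1); ax.set_ylim(0, 1)
    ax.set_xticks([]); ax.set_yticks([])
fig.suptitle("Physically identical greys, different surrounds")
plt.show()
```

Most people see the square on the dark surround as lighter, even though the pixel values are identical – the kind of context effect the ‘science of the dress’ pieces lean on.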

Firstly, this isn’t an explanation of why people differ in perceiving the dress. In fact, all of the ‘science explanations’ have simply recounted how perceived colours can change but not the most important thing which is why people are having two stable but contradictory experiences.

Colour constancy works on everyone with normal colour vision. If you take the panels from the XKCD cartoon, people don’t markedly disagree about what the perceived colours are. The effect of each image is very reliable between individuals.

That’s not the case with the dress. Also, if you say context makes a difference, changing the surroundings of the dress should change the colours. It doesn’t.

Some have argued that individual assumptions about lighting in the picture are what’s making the difference. In other words, the context is the unconscious assumptions people make about lighting in the picture.

But if this is the case, this still isn’t an explanation because it doesn’t tell us why people have different assumptions. Psychologists call these top-down effects or, if we’re going to get Bayesian, perceptual priors.

75% of people in this BuzzFeed poll said they saw white/gold, 25% said they saw blue/black, and a small minority of people say they’ve seen the picture ‘flip’ between the two perceptions. How come?

And there’s actually a good test of the colour constancy or any other ‘implicit interpretation’ explanation. You should be able to create images that alter the visual system’s assumptions and make perception of the dress reliably flip between white/gold and blue/black, as with the XKCD cartoon.

So, any vision scientists out there who can come up with a good explanation of why people differ in their perceptions? Psychophysicists, have I gone wildly off track?

Spike activity 28-02-2015

Quick links from the past week in mind and brain news:

Nautilus magazine has a good piece on behavioural economics and rethinking ‘nudges’. Although the rethink is really just another form of standard ‘nudge’.

The biggest hedge fund in the world, the $165 billion Bridgewater, starts an AI team to help give it the edge on investments reports Bloomberg. Well, they couldn’t get much worse than humans.

Gizmodo reports that a neuroscientist says he’ll do a head transplant ‘real soon now’. Hungover neuroscientist reads Gizmodo, thinks ‘I said what!?!’

The UK Post Office’s head of marketing has clearly been taken in by neuromarketing, thinking it will help them “better understand” their customers. Just lick the stamp while we scan your brain…

The New York Times reports on pharma company Shire doing the old ‘disease marketing by the way I have a pill for that’ trick with DSM-5 newcomer binge eating disorder.

Hard Feelings: Science’s Struggle to Define Emotions. Good piece in The Atlantic.

The Human Brain Project is to be reorganised after a bit of a fuss (Americans: a significant crisis).

Being an asshole boss is bad for team performance. Interesting piece in Harvard Business Review.

The smart unconscious

We feel that we are in control when our brains figure out puzzles or read words, says Tom Stafford, but a new experiment shows just how much work is going on underneath the surface of our conscious minds.

It is a common misconception that we know our own minds. As I move around the world, walking and talking, I experience myself thinking thoughts. “What shall I have for lunch?”, I ask myself. Or I think, “I wonder why she did that?” and try and figure it out. It is natural to assume that this experience of myself is a complete report of my mind. It is natural, but wrong.

There’s an under-mind, all psychologists agree – an unconscious which does a lot of the heavy lifting in the process of thinking. If I ask myself what is the capital of France the answer just comes to mind – Paris! If I decide to wiggle my fingers, they move back and forth in a complex pattern that I didn’t consciously prepare, but which was delivered for my use by the unconscious.

The big debate in psychology is exactly what is done by the unconscious, and what requires conscious thought. Or to use the title of a notable paper on the topic, ‘Is the unconscious smart or dumb?‘ One popular view is that the unconscious can prepare simple stimulus-response actions, deliver basic facts, recognise objects and carry out practised movements. Complex cognition involving planning, logical reasoning and combining ideas, on the other hand, requires conscious thought.

A recent experiment by a team from Israel scores points against this position. Ran Hassin and colleagues used a neat visual trick called Continuous Flash Suppression to put information into participants’ minds without them becoming consciously aware of it. It might sound painful, but in reality it’s actually quite simple. The technique takes advantage of the fact that we have two eyes and our brain usually attempts to fuse the two resulting images into a single coherent view of the world. Continuous Flash Suppression uses light-bending glasses to show people different images in each eye. One eye gets a rapid succession of brightly coloured squares which are so distracting that when genuine information is presented to the other eye, the person is not immediately consciously aware of it. In fact, it can take several seconds for something that is in theory perfectly visible to reach awareness (unless you close one eye to cut out the flashing squares, then you can see the ‘suppressed’ image immediately).

Hassin’s key experiment involved presenting arithmetic questions unconsciously. The questions would be things like “9 – 3 – 4 = ” and they would be followed by the presentation, fully visible, of a target number that the participants were asked to read aloud as quickly as possible. The target number could either be the right answer to the arithmetic question (so, in this case, “2”) or a wrong answer (for instance, “1”). The amazing result is that participants were significantly quicker to read the target number if it was the right answer rather than a wrong one. This shows that the equation had been processed and solved by their minds – even though they had no conscious awareness of it – meaning they were primed to read the right answer quicker than the wrong one.

The result suggests that the unconscious mind has more sophisticated capacities than many have thought. Unlike other tests of non-conscious processing, this wasn’t an automatic response to a stimulus – it required a precise answer following the rules of arithmetic, which you might have assumed would only come with deliberation. The report calls the technique used “a game changer in the study of the unconscious”, arguing that “unconscious processes can perform every fundamental, basic-level function that conscious processes can perform”.

These are strong claims, and the authors acknowledge that there is much work to do as we start to explore the power and reach of our unconscious minds. Like icebergs, most of the operation of our minds remains out of sight. Experiments like this give a glimpse below the surface.

This is my BBC Future column from last week. The original is here

Spike activity 20-02-2015

Quick links from the past week in mind and brain news:

Interesting social mapping using subway journey data from Beijing reported in New Scientist.

BPS Research Digest has compiled a comprehensive list of mind, brain and behaviour podcasts.

That study finding a surge of p values just below 0.05 in psychology, probably not a sign of bad science, reports Daniel Lakens with a new analysis.

The Toronto Star reports that psychologists terminated a study on implanting false crime memories early due to over-effectiveness.

Why do mirrors seem to reverse left and right but not up or down? Clear explanation in a great video from Physics Girl.

Vice has an interesting piece on public reactions to celebrities who become psychotic or begin to display unusual behaviour.

Science News has a map of ambient noisiness in America.

There’s an interesting interview with Facebook AI director Yann LeCun in IEEE Spectrum magazine.

Anti-vax: wrong but not irrational


Since the uptick in outbreaks of measles in the US, those arguing for the right not to vaccinate their children have come under increasing scrutiny. There is no journal of “anti-vax psychology” reporting research on those who advocate what seems like a controversial, “anti-science” and dangerous position, but if there were, we could take a good guess at what the research reported therein would say.

Look at other groups who hold beliefs at odds with conventional scientific thought. Climate sceptics for example. You might think that climate sceptics would be likely to be more ignorant of science than those who accept the consensus that humans are causing a global increase in temperatures. But you’d be wrong. The individuals with the highest degree of scientific literacy are not those most concerned about climate change, they are the group which is most divided over the issue. The most scientifically literate are also some of the strongest climate sceptics.

A driver of this is a process psychologists have called “biased assimilation” – we all regard new information in the light of what we already believe. In line with this, one study showed that climate sceptics rated newspaper editorials supporting the reality of climate change as less persuasive and less reliable than non-sceptics. Some studies have even shown that people can react to information which is meant to persuade them out of their beliefs by becoming more hardline – the exact opposite of the persuasive intent.

For topics such as climate change or vaccine safety, this can mean that a little scientific education gives you more ways of disagreeing with new information that don’t fit your existing beliefs. So we shouldn’t expect anti-vaxxers to be easily converted by throwing scientific facts about vaccination at them. They are likely to have their own interpretation of the facts.

High trust, low expertise

Some of my own research has looked at who the public trusted to inform them about the risks from pollution. Our finding was that how expert a particular group of people was perceived to be – government, scientists or journalists, say – was a poor predictor of how much they were trusted on the issue. Instead, what was critical was how much they were perceived to have the public’s interests at heart. Groups of people who were perceived to want to act in line with our respondents’ best interests – such as friends and family – were highly trusted, even if their expertise on the issue of pollution was judged as poor.

By implication, we might expect anti-vaxxers to have friends who are also anti-vaxxers (and so reinforce their mistaken beliefs) and to correspondingly have a low belief that pro-vaccine messengers such as scientists, government agencies and journalists have their best interests at heart. The corollary is that no amount of information from these sources – and no matter how persuasive to you and me – will convert anti-vaxxers who have different beliefs about how trustworthy the medical establishment is.

Interestingly, research done by Brendan Nyhan has shown many anti-vaxxers are willing to drop mistaken beliefs about vaccines, but as they do so they also harden in their intentions not to get their kids vaccinated. This shows that the scientific beliefs of people who oppose vaccinations are only part of the issue – facts alone, even if believed, aren’t enough to change people’s views.

Reinforced memories

We know from research on persuasion that mistaken beliefs aren’t easily debunked. Not only is the biased assimilation effect at work here but also the fragility of memory – attempts at debunking myths can serve to reinforce the memory of the myth while the debunking gets forgotten.

The vaccination issue provides a sobering example of this. A single discredited study from 1998 claimed a link between autism and the MMR jab, fuelling the recent distrust of vaccines. No matter how many times we repeat that “the MMR vaccine doesn’t cause autism”, the link between the two is reinforced in people’s perceptions. To avoid reinforcing a myth, you need to provide a plausible alternative – the obvious one here is to replace the negative message “MMR vaccine doesn’t cause autism”, with a positive one. Perhaps “the MMR vaccine protects your child from dangerous diseases”.

Rational selfishness

There are other psychological factors at play in the decisions taken by individual parents not to vaccinate their children. One is the rational selfishness of avoiding risk, or even the discomfort of a momentary jab, by gambling that the herd immunity of everyone else will be enough to protect your child.

Another is our tendency to underplay rare events in our calculation about risks – ironically the very success of vaccination programmes makes the diseases they protect us against rare, meaning that most of us don’t have direct experience of the negative consequences of not vaccinating. Finally, we know that people feel differently about errors of action compared to errors of inaction, even if the consequences are the same.

Many who seek to persuade anti-vaxxers view the issue as a simple one of scientific education. Anti-vaxxers have mistaken the basic facts, the argument goes, so they need to be corrected. This is likely to be ineffective. Anti-vaxxers may be wrong, but don’t call them irrational.

Rather than lacking scientific facts, they lack a trust in the establishments which produce and disseminate science. If you meet an anti-vaxxer, you might have more luck persuading them by trying to explain how you think science works and why you’ve put your trust in what you’ve been told, rather than dismissing their beliefs as irrational.


This article was originally published on The Conversation.
Read the original article.

Oliver Sacks: “now I am face to face with dying”

In a moving and defiant article for The New York Times, neurologist Oliver Sacks has announced he has terminal cancer.

Over the last few days, I have been able to see my life as from a great altitude, as a sort of landscape, and with a deepening sense of the connection of all its parts. This does not mean I am finished with life.

On the contrary, I feel intensely alive, and I want and hope in the time that remains to deepen my friendships, to say farewell to those I love, to write more, to travel if I have the strength, to achieve new levels of understanding and insight.

This will involve audacity, clarity and plain speaking; trying to straighten my accounts with the world. But there will be time, too, for some fun (and even some silliness, as well).

The whole piece is a reflection on life, death and living and, fittingly, is a joy to read.

Keep on keepin’ on Dr Sacks.

We look forward to hearing about your final adventures.
 

Link to ‘My Own Life: Oliver Sacks on Learning He Has Terminal Cancer’.

Half a century of neuroscience

The Lancet has a good retrospective looking back on the last 50 years of neuroscience, which, in some ways, was when the field was born.

Of course, the brain and nervous system have been the subject of study for hundreds, if not thousands, of years, but the concept of a dedicated ‘neuroscience’ is relatively new.

The term ‘neuroscience’ was first used in 1962 by biologist Francis Schmitt who previously referred to the integrated study of mind, brain and behaviour by the somewhat less catchy title “biophysics of the mind”. The first undergraduate degree in neuroscience was offered by Amherst College only in 1973.

The Lancet article, by first-generation ‘neuroscientist’ Steven Rose, looks back at how the discipline began in the UK (in a pub, as most things do) and then widens its scope to review how neuroscience has transformed over the last 50 years.

But many of the problems that had beset the early days remain unresolved. Neuroscience may be a singular label, but it embraces a plurality of disciplines. Molecular and cognitive neuroscientists still scarcely speak a common language, and for all the outpouring of data from the huge industry that neuroscience has become, Schmitt’s hoped for bridging theories are still in short supply. At what biological level are mental functions to be understood? For many of the former, reductionism rules and the collapse of mind into brain is rarely challenged—there is even a society for “molecular and cellular cognition”—an elision hardly likely to appeal to the cognitivists who regard higher order mental functions as emergent properties of the brain as a system.

It’s an interesting reflection on how neuroscience has changed over its brief lifespan from one of the people who were there at the start.
 

Link to ’50 years of neuroscience’ in The Lancet.

Spike activity 13-02-2015

Quick links from the past week in mind and brain news:

US Governor proposes that welfare recipients should be drug screened and have negative results as a condition for a payment. A fascinating Washington Post piece looks at past data on similar schemes.

BPS Research Digest launches the PsychCrunch podcast. First episode: evidence-based dating.

The brain, interrupted: neurodevelopment and the pre-term baby. Excellent Nature piece.

Fusion has a great piece on *how* we should worry about artificial intelligence.

“The world’s first hotel staffed entirely by robots is set to open in Japan” reports the International Business Times. Clearly they’ve never visited a Travelodge.

Forbes reports on the ‘coming boom in brain medicines’. Personally, I won’t be holding my breath.

There’s an excellent update on new psychoactive substances and synthetics drugs over at Addiction Inbox.

The Scientific 23 is a great site that interviews scientists and there are lots of cognitive scientists discussing their work.

You can’t play 20 questions with nature and win

“You can’t play 20 questions with nature and win” is the title of Allen Newell’s 1973 paper, a classic in cognitive science. In the paper he confesses that although he sees many excellent psychology experiments, all making undeniable scientific contributions, he can’t imagine them cohering into progress for the field as a whole. He describes the state of psychology as focussed on individual phenomena – mental rotation, chunking in memory, subitizing, etc. – studied in a way to resolve binary questions: issues such as nature vs nurture, conscious vs unconscious, serial vs parallel processing.

There is, I submit, a view of the scientific endeavor that is implicit (and sometimes explicit) in the picture I have presented above. Science advances by playing twenty questions with nature. The proper tactic is to frame a general question, hopefully binary, that can be attacked experimentally. Having settled that bits-worth, one can proceed to the next. The policy appears optimal – one never risks much, there is feedback from nature at every step, and progress is inevitable. Unfortunately, the questions never seem to be really answered, the strategy does not seem to work.

As I considered the issues raised (single code versus multiple code, continuous versus discrete representation, etc.) I found myself conjuring up this model of the current scientific process in psychology- of phenomena to be explored and their explanation by essentially oppositional concepts. And I couldn’t convince myself that it would add up, even in thirty more years of trying, even if one had another 300 papers of similar, excellent ilk.

His diagnosis of one reason why a phenomenon can generate endless excellent papers without endless progress is that people can do the same task in different ways. Lots of experiments dissect how people are doing the task without sufficiently constraining the things Newell says are essential for predicting behaviour (the person’s goals and the structure of the task environment), and so provide no insight into the ultimate target of investigation: the invariant structure of the mind’s processing mechanisms. As a minimum, we must know the method participants are using, never averaging over different methods, he concludes. But this may not be enough:

That the same human subject can adopt many (radically different) methods for the same basic task, depending on goal, background knowledge, and minor details of payoff structure and task texture – all this – implies that the “normal” means of science may not suffice.

As a prognosis for how to make real progress in understanding the mind he proposes three possible courses of action:

  1. Develop complete processing models – i.e. simulations which are competent to perform the task and include a specification of the way in which different subfunctions (called ‘methods’ by Newell) are deployed.
  2. Analyse a complex task, completely, ‘to force studies into intimate relation with each other’, the idea being that giving a full account of a single task, any task, will force contradictions between theories of different aspects of the task into the open.
  3. ‘One program for many tasks’ – construct a general purpose system which can perform all mental tasks, in other words an artificial intelligence.

It was this last strategy which preoccupied a lot of Newell’s subsequent attention. He developed a general problem solving architecture he called SOAR, which he presented as a unified theory of cognition, and which he worked on until his death in 1992.

The paper is over forty years old, but still full of useful thoughts for anyone interested in the sciences of the mind.

Reference and link:
Newell, A. (1973). You can’t play 20 questions with nature and win: Projective comments on the papers of this symposium. In W. G. Chase (Ed.), Visual Information Processing: Proceedings of the Eighth Annual Carnegie Symposium on Cognition, held at Carnegie-Mellon University, Pittsburgh, Pennsylvania, May 19, 1972. Academic Press.

See a nice picture of Newell from the Computer History Museum

What gambling monkeys teach us about human rationality

We often make stupid choices when gambling, says Tom Stafford, but if you look at how monkeys act in the same situation, maybe there’s good reason.

When we gamble, something odd and seemingly irrational happens.

It’s called the ‘hot hand’ fallacy – a belief that your luck comes in streaks – and it can lose you a lot of money. Win on roulette and your chances of winning again aren’t more or less – they stay exactly the same. But something in human psychology resists this fact, and people often place money on the premise that streaks of luck will continue – the so called ‘hot hand’.

The opposite superstition is to bet that a streak has to end, in the false belief that independent events of chance must somehow even out. This is known as the gambler’s fallacy, and achieved notoriety at the Casino de Monte-Carlo on 18 August 1913. The ball fell on black 26 times in a row, and as the streak lengthened gamblers lost millions betting on red, believing that the chances changed with the length of the run of blacks.
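For a sense of scale, here’s a quick back-of-the-envelope calculation of how unlikely that 1913 run was. It’s my own illustrative arithmetic, assuming a single-zero European wheel with 18 black pockets out of 37 (the wheel type is an assumption).

```python
# Rough odds of 26 blacks in a row, assuming a European wheel (18 black / 37 pockets).
p_black = 18 / 37
p_run = p_black ** 26
print(f"P(26 blacks in a row) = {p_run:.1e}  (roughly 1 in {1 / p_run:,.0f})")
```

Vanishingly rare as a whole run, yet the chance of black on the next spin was exactly the same as on the first.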

Why do people act this way time and time again? We can discover intriguing insights, it seems, by recruiting monkeys and getting them to gamble too. If these animals make dumb choices like us, perhaps it could tell us more about ourselves.

First though, let’s look at what makes some games particularly likely to trigger these effects. Many results in games are based on a skill element, so it makes reasonable sense to bet, for instance, that a top striker like Lionel Messi is more likely to score a goal than a low-scoring defender.

Yet plenty of games contain randomness. For truly random events like roulette or the lottery, there is no force which makes clumps more or less likely to continue. Consider coin tosses: if you have tossed 10 heads in a row your chance of throwing another heads is still 50:50 (although, of course, at the point before you’ve thrown any, the overall odds of throwing 10 in a row are still minuscule).
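If you want to convince yourself that streaks carry no information about the next toss, a tiny simulation does the job. This is just an illustrative sketch; the seed and sample size are arbitrary choices.

```python
# Estimate P(heads) immediately after a run of five heads in a fair coin.
import random

random.seed(1)
flips = [random.random() < 0.5 for _ in range(200_000)]   # True = heads

# The flip that follows every run of five consecutive heads.
after_streak = [flips[i] for i in range(5, len(flips)) if all(flips[i - 5:i])]

print(f"P(heads) after 5 heads: {sum(after_streak) / len(after_streak):.3f}")
print(f"P(heads) overall:       {sum(flips) / len(flips):.3f}")
# Both come out around 0.5: the streak tells you nothing about the next flip.
```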

The hot hand and gambler’s fallacies both show that we tend to have an unreasonable faith in the non-randomness of the universe, as if we can’t quite believe that those coins (or roulette wheels, or playing cards) really are due to the same chances on each flip, spin or deal.

It’s a result that sometimes makes us sneer at the irrationality of human psychology. But that conclusion may need revising.

Cross-species gambling

An experiment reported by Tommy Blanchard of the University of Rochester in New York State, and colleagues, shows that monkeys playing a gambling game are swayed by the same hot hand bias as humans. Their experiments involved three monkeys controlling a computer display with their eye-movements – indicating their choices by shifting their gaze left or right. In the experiment they were given two options, only one of which delivered a reward. When the correct option was random – the same 50:50 chance as a coin flip – the monkeys still had a tendency to select the previously winning option, as if luck should continue, clumping together in streaks.

The reason the result is so interesting is that monkeys aren’t taught probability theory at school. They never learn theories of randomness, or pick up complex ideas about chance events. The monkeys’ choices must be based on some more primitive instincts about how the world works – they can’t be displaying irrational beliefs about probability, because they cannot have false beliefs, in the way humans can, about how luck works. Yet they show the same bias.

What’s going on, the researchers argue, is that it’s usually beneficial to behave in this manner. In most of life, chains of success or failure are linked for good reason – some days you really do have your eye on your tennis serve, or everything goes wrong with your car on the same day because the mechanics of the parts are connected. In these cases, the events reflect an underlying reality, and one you can take advantage of to predict what happens next. An example that works well for the monkeys is food. Finding high-value morsels like ripe food is a chance event, but also one where each instance isn’t independent. If you find one fruit on a tree the chances are that you’ll find more.

The wider lesson for students of human nature is that we shouldn’t be quick to call behaviours irrational. Sure, belief in the hot hand might make you bet wrong on a series of coin flips, or worse, lose a pot of money. But it may be that across the timespan of evolution, thinking that luck comes in clumps turned out to be useful more often than it was harmful.

This is my BBC Future article from last week. The original is here

A refocus of military influence

The British media has been covering the creation of the 77th Brigade, or ‘Chindits’, in the UK Army, which they’ve wrongly described as PsyOps ‘Twitter troops’. The renaming is new, but the plan for a significant restructuring and expansion of the UK military’s influence operations is not.

The change in focus has been prompted by a growing realisation that the success of security strategy depends as much on influencing populations at home and abroad as it does through military force.

The creation of a new military structure, designed to tackle exactly this problem, was actually reported last year in British Army 2014 – a glossy annual policy publication. The latest announcement of the 77th Brigade is really just a media-friendly re-branding of the existing plan.

You can read the document online (warning: it’s a 50MB-plus PDF) but here’s a crucial section from page 121 onwards:

Our potential adversaries and partners are increasingly blurring the lines between regular and irregular and between military, political, economic and information activities. At least three nations who operate large conventional ‘traditional’ armies have now also adopted the Chinese concept of Unrestricted Warfare.

Author Steve Metz describes this as involving “diverse, simultaneous attacks on an adversary’s social, economic and political systems. It ignores and transcends the boundaries between what is a weapon and what is not, between soldier and non-combatant, between state and non-state or suprastate.” If we wish to succeed in such an environment we need to compete on an equal footing.

To do this, we must change not only our physical capabilities but our conceptual approach, our planning and our execution. This is not to say that the virtual and cognitive domains now produce a ‘silver bullet’ that will mean the end of combat, but that “superiority in the physical environment was of little value unless it could be translated into an advantage in the information environment”…

In order to shift the Army’s thinking in the approach to this new manoeuvre, the Security Assistance Group (SAG) will form in September 2014. It will form through the amalgamation of the current 15 Psychological Operations Group, the Military Stabilisation Support Group, the Media Operations Group and the Security Capacity Team.

However, these structures are merely the start point for a fully integrated capability that will harness a wide range of powers to achieve the desired effects – from cyber through to engagement, commercial, financial, stabilisation and deception. At the heart of the new structure must be a culture and attitude that is both Defence and civilian orientated.

And that is really what the ‘newly announced’ 77th Brigade is all about.

To see how seriously the British Army are taking this, the 77th is reportedly going to be made up of up to 2,000 full-time and reserve troops. Think Defence report that the combined strength of all the existing relevant groups that will be incorporated is just 300 people.

The idea is to make Information Operations a much more central part of military doctrine. This includes electronic warfare and computer hacking, physical force targeted on information resources (like taking out infrastructure), psychological operations – traditionally focused on changing belief and behaviour in the theatre of war, media operations – essentially corporate PR, and a wider use of media to influence external populations and potential adversaries.

The Daily Express reports that “the brigade will bring together specialists in media, signalling and psychological operations, with some Special Forces soldiers and possibly computer hackers” which seems likely to reflect exactly what the Army are aiming for in their new plan.

From this point of view, you can see why governments are so keen to hold on to their Snowden-era digital monitoring and intervention capabilities.

They typically justify their existence in terms of ‘breaking terrorist networks’ but they are equally as useful for their role in wider information operations – targeting groups rather than individuals – now considered key to national security.

The formation of the 77th Brigade is mostly a reflection of a wider refiguring of global conflict that puts cognition and behaviour at the centre of political objectives.

It is simultaneously more and less democratic than ‘hard power’. It makes the battle of ideas, rather than the use of force, central to determining political outcomes, but attempts to shape the information environment so that some ideas become more equal than others.

Spike activity 30-01-2015

Quick links from the past week in mind and brain news:

PLOS Neuroscience has an excellent interview on the strengths and limitations of fMRI.

There’s an excellent profile of clinical psychologist Andrea Letamendi and her interest in comics and mental health in The Atlantic.

The Wall Street Journal has an excellent piece on hikikomori – a syndrome of ultra withdrawal by Japanese youth.

The Hearing Voices Network as an alternative approach to supporting voice hearers is covered by a good article in The Independent.

Backchannel looks at the largest ‘virtual psychology lab’ in the world.

Does subliminal advertising actually work? asks BBC News.

BPS Research Digest covers a study finding that psychologists and psychiatrists rate patients less positively when their problems are explained biologically. Along the lines of several similar studies.
