How is the brain relevant in mental disorder?

The Psychologist has a fascinating article on how neuroscience fits into our understanding of mental illness and what practical benefit brain science has – despite the fact that it currently doesn’t really help us a great deal in the clinic.

It is full of useful ways of thinking about how neuroscience fits into our view of mental distress.

The following is a really crucial section that discusses the difference between proximal (closer) and distal (more distant) causes.

In essence, rather than talking about causes we’re probably better off talking about causal pathways – chains of events that can lead to a problem. These pathways may share common elements, but different people can arrive at the same difficulty in different ways.

A useful notion is to consider different types of causes of symptoms lying on a spectrum, the extremes of which I will term ‘proximal’ and ‘distal’. Proximal causes are directly related to the mechanisms driving symptoms, and are useful targets for treatment; they are often identified through basic science research. For example, lung cancer is (proximally) caused by malfunction in the machinery that regulates cell division. Traditional lung cancer treatments tackle this cause by removing the malfunctioning cells (surgery) or killing them (standard chemotherapy and radiotherapy)…

By contrast, distal causes are indirectly related to the mechanisms driving symptoms, and are useful targets for prevention; they are often identified through epidemiology research. Again, take the example of lung cancer, which is (distally) caused by cigarette smoking in the majority of cases, though it must be caused by other factors in people who have never smoked. These could be genetic (lung cancer is heritable), other types of environmental trigger (e.g. radon gas exposure) or some interaction between the two. Given the overwhelming evidence that lung cancer is (distally) caused by smoking, efforts at prevention rightly focus on reducing its incidence. However, after a tumour has developed an oncologist must focus on the proximal cause when proposing a course of treatment…

The majority of studies of depression have focused on distal causes (which psychologists might consider ‘underlying’). These include: heritability and genetics; hormonal and immune factors; upbringing and early life experience; and personality. More proximal causes include: various forms of stress, particularly social; high-level psychological constructs derived from cognitive theories (e.g. dysfunctional negative schemata); low-level constructs such as negative information processing biases (also important in anxiety); and disrupted transmission in neurotransmitter systems such as serotonin.

It’s not a light read, but it is well worth diving into for a more in-depth treatment of the brain and mental illness.
 

Link to The Psychologist article on neuroscience and mental health.

Mind Hacks excerpts x 2

This month, Business Insider have republished a couple of chapters from Mind Hacks the book (in case you missed it, back before the blog, Mind Hacks was a book: 101 do-it-at-home psychology experiments). The excerpts are:

1. Why one of these puzzles is easy and the other is hard – which is about the Wason Selection Task, a famous example of how our ability to reason logically can be confounded (and unconfounded, if you find the right format in which to present the problem). A toy sketch of the task follows just below.

2. Why this sentence is hard to understand – which shows you how to improve your writing with a bit of elementary psychology (hint: it is about reducing working memory load). Steven Pinker covers the same advice in his new book The Sense of Style (2014).
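To make the Wason task in the first excerpt concrete (a toy illustration of my own, not code from the book): four cards show A, K, 2 and 7, and the rule is “if a card has a vowel on one side, it has an even number on the other”. Most people choose A and 2, but the logically correct picks are A and 7 – the only cards whose hidden faces could falsify the rule.

```python
# Toy illustration of the Wason Selection Task (my example, not from
# the excerpt). Rule: "if a card has a vowel on one side, it has an
# even number on the other side." Which cards must be flipped?

visible_faces = ["A", "K", "2", "7"]

def must_flip(face: str) -> bool:
    """A card needs checking only if its visible face could be part
    of a violation: a vowel (it might hide an odd number) or an odd
    number (it might hide a vowel)."""
    if face.isalpha():
        return face in "AEIOU"
    return int(face) % 2 == 1

print([face for face in visible_faces if must_flip(face)])  # ['A', '7']
```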

Both excerpts show off some of the neat illustrations done for the book, as well as being a personal nostalgia trip for yours truly (it’s been ten years!).

Links: Why this sentence is hard to understand + Why one of these puzzles is easy and the other is hard

Trauma is more complex than we think

I’ve got an article in The Observer about how the official definition of trauma keeps changing and how the concept is discussed as if it were entirely intuitive and clear-cut, when it’s actually much more complex.

I’ve become fascinated by how the concept of ‘trauma’ is used in public debate about mental health and the tension that arises between the clinical and rhetorical meanings of trauma.

One unresolved issue, which tests mental health professionals to this day, is whether ‘traumatic’ should be defined in terms of events or reactions.

Some of the confusion arises when we talk about “being traumatised”. Let’s take a typically horrifying experience – being caught in a war zone as a civilian. This is often described as a traumatic experience, but we know that most people who experience the horrors of war won’t develop post-traumatic stress disorder or PTSD – the diagnosis designed to capture the modern meaning of trauma. Despite the fact that these sorts of awful experiences increase the chances of acquiring a range of mental health problems – depression is actually a more common outcome than PTSD – it is still the case that most people won’t develop them. Have you experienced trauma if you have no recognisable “scar in the psyche”? This is where the concept starts to become fuzzy.

We have the official diagnosis of posttraumatic stress disorder or PTSD, but lots of mental health problems can appear after awful events, and yet there are no ‘posttraumatic depression’ or ‘posttraumatic social phobia’ diagnoses.

To be clear, it’s not that trauma doesn’t exist but that it’s less fully developed as a concept than people think and, as a result, often over-simplified during debates.

Full article at the link below.
 

Link to Observer article on the shifting sands of trauma.

Spike activity 06-03-2015

Quick links from the past week in mind and brain news:

The strange world of felt presences. Great piece in The Guardian.

Nature reports that the Human Brain Project has voted for a change of leadership. But read carefully, it’s not clear how much will change in practice.

Surely the worst ‘neuroscience of’ article ever written? “The Neuroscience of ISIS” from The Daily Beast. Ruthlessly, it’s the first in a series.

Project Syndicate on why social science needs to be on the front-line of the fight against drug-resistant diseases.

Psychiatry is More Complex than Either its Proponents or its Critics Seem Able to Admit. Insightful piece on Mental Health Chat.

iDigitalTimes on what DeepMind’s computer game playing AI tells us about where artificial intelligence falls short.

No link found between psychosis and use of ‘classic’ psychedelics LSD, psilocybin and mescaline in two large studies, reports Nature.

Beautiful online exhibition of the work of surreal optical illusion photographer Erik Johansson over at Twisted Sifter.

Radical embodied cognition: an interview with Andrew Wilson

The computational approach is the orthodoxy in psychological science. We try and understand the mind using the metaphors of information processing and the storage and retrieval of representations. These ideas are so common that it is easy to forget that there is any alternative. Andrew Wilson is on a mission to remind us that there is an alternative – a radical, non-representational, non-information processing take on what cognition is.

I sent him a few questions by email. After he answered these, and some follow-up questions, we both edited and agreed on the result, which you can read below.

 

Q1. Is it fair to say you are at odds with lots of psychology, theoretically? Can you outline why?

Psychology wants to understand what causes our behaviour. Cognitive psychology’s explanation is that behaviour is caused by internal states of the mind (or brain, if you like). These states are called mental representations, and they are models/simulations of the world that we use to figure out what to do and when to do it.

Cognitive psychology thinks we have representations because it assumes we have very poor sensory access to the world, e.g. vision supposedly begins with a patchy 2D image projected onto the back of the eye. We need these models to literally fill in the gaps by making an educated guess (‘inference’) about what caused those visual sensations.

My approach is called radical embodied cognitive psychology; ‘radical’ just means ‘no representations’. It is based on the work of James J Gibson. He was a perceptual psychologist who demonstrated that there is actually rich perceptual information about the world, and that we use this information. This is why perception and action are so amazingly successful most of the time, which is important because failures of perception have serious consequences for your health and wellbeing (e.g. falling on ice).

The most important consequence of this discovery is that when we have access to this information, we don’t need those internal models anymore. This then means that whatever the brain is doing, it’s not building models of the world in order to cause our behaviour. We are embedded in our environments and our behaviour is caused by the nature of that embedding (specifically, which information variables we are using for any given task).

So I ask very different questions than the typical psychologist: instead of ‘what mental model lets me solve this task?’ I ask ‘what information is there to support the observed behaviour and can I find evidence that we use it?’. When we get the right answer to the information question, we have great success in explaining and then predicting behaviour, which is actually the goal of psychology.
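[An illustrative aside, not part of Wilson’s answer: a classic candidate information variable from the Gibsonian literature is David Lee’s ‘tau’ – the optical angle an approaching object subtends, divided by that angle’s rate of expansion. Under a constant approach speed, tau directly specifies time-to-contact, with no internal estimate of distance or speed required. A minimal sketch:]

```python
# Lee's "tau": an optical variable that specifies time-to-contact
# without any internal model of distance or speed (illustrative
# sketch only; the numbers are made up for the example).

def time_to_contact(optical_angle: float, expansion_rate: float) -> float:
    """tau = theta / (d theta / dt): the angle (radians) an
    approaching object subtends at the eye, divided by its rate
    of optical expansion (radians per second)."""
    return optical_angle / expansion_rate

# A 0.5 m wide object 10 m away, closing at 5 m/s, subtends
# ~0.05 rad and expands at ~0.025 rad/s: tau reports 10/5 = 2 s.
print(time_to_contact(0.05, 0.025))  # -> 2.0
```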

 

Q2. The idea that there are no mental representations is hard to get your head around. What about situations where behaviour seems to be based on things which aren’t there, like imagination, illusions or predictions?

First, saying that there are no mental representations is not saying that the brain is not up to something. This is a surprisingly common mistake, but I think it’s due to the fact that cognitive psychologists have come to equate ‘brain activity’ with ‘representing’, so that denying the latter is heard as denying the former (see Is Embodied Cognition a No-Brainer?).

Illusions simply reveal how important it is to perception that we can move and explore. They are all based on a trick and they almost always require an Evil Psychologist™ lurking in the background. Specifically, illusions artificially restrict access to information so that the world looks like it’s doing one thing when it is really doing another. They only work if you don’t let people do anything to reveal the trick. Most visual illusions are revealed as such by exploring them, e.g. by looking at them from a different perspective (e.g. the Ames Room).

Imagination and prediction are harder to talk about in this framework, but only because no one’s really tried. For what it’s worth, people are terrible at actively predicting things, and whatever imagination is, it will be a side-effect of our ability to engage with the real world, not part of how we engage with the real world.

 

Q3. Is this radical approach really denying the reality of cognitive representations, or just using a different descriptive language in which they don’t figure? In other words, can you and the cognitivists both be right?

If the radical hypothesis is right, then a lot of cognitive theories will be wrong. Those theories all assume that information comes into the brain, is processed by representations and then output as behaviour. If we successfully replace representations with information, all those theories will be telling the wrong story. ‘Interacting with information’ is a completely different job description for the brain than ‘building models of the world’. This is another reason why it’s ‘radical’.

 

Q4. Even if I concede that you can think of the mind like this, can you convince me that I should? Why is it useful? What does this approach do for cognitive science that the conventional approach isn’t or can’t?

There are two reasons, I think. The first is empirical: this approach works very, very well. Whenever a researcher works through a problem using this approach, they find robust answers that stand up to extended scrutiny in the lab. These solutions then make novel predictions that also perform well – examples include the outfielder problem and the A-not-B error [see below for references]. Cognitive psychology is filled with small, difficult-to-replicate effects; this is actually a hint that we aren’t asking the right questions. Radical embodied cognitive science tends to produce large, robust and interpretable effects, which I take as a hint that our questions are closer to the mark.
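[An illustrative aside, not part of Wilson’s answer: the best-known information-based account of the outfielder problem is ‘optical acceleration cancellation’ – run so that the tangent of the ball’s elevation angle stops accelerating, and you arrive where the ball lands without ever predicting its trajectory. The toy one-dimensional simulation below is my own sketch with invented parameters, not code from the studies referenced.]

```python
# Toy 1-D simulation of optical acceleration cancellation (OAC) for
# the outfielder problem. The fielder uses a single optical variable,
# tan(elevation angle of the ball), and moves so as to cancel its
# acceleration; the landing point is never computed or predicted.

DT = 0.01            # simulation step (s)
G = 9.81             # gravity (m/s^2)
GAIN = 200.0         # control gain (invented for this toy)
MAX_ACC = 5.0        # fielder's maximum acceleration (m/s^2)
MAX_SPEED = 8.0      # fielder's sprint speed (m/s)

bx, bz = 0.0, 1.0    # ball position (m)
vx, vz = 18.0, 18.0  # ball velocity (m/s)
fx, fv = 60.0, 0.0   # fielder position (m) and velocity (m/s)

tans = []            # recent samples of tan(elevation angle)
while bz > 0.0:
    bx, bz = bx + vx * DT, bz + vz * DT  # ballistic flight
    vz -= G * DT                         # (air resistance ignored)
    # The only "information" used: tan of the ball's elevation angle
    # as seen from the fielder's current position.
    tans.append(bz / max(fx - bx, 0.5))
    if len(tans) >= 3:
        # Optical acceleration ~ second difference of tan(alpha).
        opt_acc = (tans[-1] - 2 * tans[-2] + tans[-3]) / DT ** 2
        # Accelerate backward while tan(alpha) accelerates, forward
        # while it decelerates, within human limits.
        acc = max(-MAX_ACC, min(MAX_ACC, GAIN * opt_acc))
        fv = max(-MAX_SPEED, min(MAX_SPEED, fv + acc * DT))
        fx += fv * DT

print(f"ball landed at {bx:.1f} m; fielder ended near {fx:.1f} m")
```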

The second is theoretical. The major problem with representations is that it’s not clear where they get their content from. Representations supposedly encode knowledge about the world that we use to make inferences to support perception, etc. But if we have such poor perceptual contact with the world that we need representations, how did we ever get access to the knowledge we needed to encode? This grounding problem is a disaster. Radical embodiment solves it by never creating it in the first place – we are in excellent perceptual contact with our environments, so there are no gaps for representations to fill, therefore no representations that need content.

 

Q5. Who should we be reading to get an idea of this approach?

‘Beyond the Brain’ by Louise Barrett. It’s accessible and full of great stuff.

‘Radical Embodied Cognitive Science’ by Tony Chemero. It’s clear and well written but it’s pitched at trained scientists more than the generally interested lay person.

‘Embodied Cognition’ by Lawrence Shapiro clearly lays out all the various flavours of ‘embodied cognition’. My work is the ‘replacement’ hypothesis.

‘The Ecological Approach to Visual Perception’ by James J Gibson is an absolute masterpiece and the culmination of all his empirical and theoretical work.

I run a blog at http://psychsciencenotes.blogspot.co.uk/ with Sabrina Golonka, where we discuss all this a lot, and we tweet @PsychScientists. We’ve also published a few papers on this, the most relevant of which is ‘Embodied Cognition is Not What You Think It Is’.

 

Q6. And finally, can you point us to a few blog posts you’re proudest of which illustrate this way of looking at the world?

What Else Could It Be? (where Sabrina looks at the question, what if the brain is not a computer?)

Mirror neurons, or, What’s the matter with neuroscience? (how the traditional model can get you into trouble)

Prospective Control – The Outfielder problem (an example of the kind of research questions we ask)

Fluctuating existence

The Neurologist has a fascinating case report of a woman with Parkinson’s disease who experienced a fluctuating belief that she didn’t exist.

Cotard’s delusion is usually described as the ‘belief that you’re dead’ although Jules Cotard, for whom the delusion is named, defined it as a délire des négations – the delusion of negation, or nihilism, as it’s usually translated.

In fact, in his original case report, Cotard’s patient didn’t believe they were dead but that they had “no brain, nerves, chest, or entrails, and was just skin and bone”.

This new case report in The Neurologist describes a patient with Parkinson’s disease who experienced something similar, with the delusion appearing as her Parkinson’s medication began to wear off.

In December 2010, she went to follow-up visit accompanied by her caregivers and they reported that, in the last 2 months, the patient has developed a sudden onset of nihilistic delusion, mainly during the “wearing-off” condition and associated with end of dose dyskinesias and akathisia. The patient repeatedly complained of having lost both of her eyes, mouth, nose, and ears. Often during these events, she insisted to have a mirror to see herself. She expressed the false belief that she did not have the whole body and that nothing existed, including herself, without any insight. This nihilistic delusion, compatible with the diagnosis of Cotard syndrome, clearly improved with the administration of the following dose of levodopa and the associated amelioration of motor symptoms.

This is interesting because the Parkinson’s medication – levodopa – is a precursor to dopamine and is used to increase dopamine levels in the brain.

Increased dopamine levels in mid-brain areas are considered to be a key causal factor in generating the delusions and hallucinations of psychosis, but in this case delusions reliably appeared as dopamine levels were likely to have been dropping due to the medication wearing off.

Although this is a single case study, the effect was reliable when repeated – though that doesn’t mean the same thing would happen to everybody in this situation.

But what it really shows is that the neurobiology of psychosis is not a simple ‘chemical imbalance’ but, in part, a complex dysregulation that can affect individuals differently due to the inherent interconnectedness of neural systems.
 

Link to PubMed entry for case report.

Downsides of being a convincing liar

People who take shortcuts can trick themselves into believing they are smarter than they are, says Tom Stafford, and it comes back to bite them.

Honesty may be the best policy, but lying has its merits – even when we are deceiving ourselves. Numerous studies have shown that those who are practised in the art of self-deception might be more successful in the spheres of sport and business. They might even be happier than people who are always true to themselves. But is there ever a downside to believing our own lies?

An ingenious study by Zoe Chance of Yale University tested the idea, by watching what happens when people cheat on tests.

Chance and colleagues ran experiments which involved asking students to answer IQ and general knowledge questions. Half the participants were given a copy of the test paper which had – apparently in error – been printed with the answers listed at the bottom. This meant they had to resist the temptation to check or improve their answers against the real answers as they went along.

Irresistible shortcut

As you’d expect, some of these participants couldn’t help but cheat. Collectively, the group that had access to the answers performed better on the tests than participants who didn’t – even though both groups of participants were selected at random from students at the same university, so were, on average, of similar ability. (We can’t know for sure who was cheating – probably some of the people who had answers would have got high scores even without the answers – but it means that the average performance in the group was partly down to individual smarts, and partly down to having the answers at hand.)

The crucial question for Chance’s research was this: did people in the “cheater” group know that they’d been relying on the answers? Or did they attribute their success in the tests solely to their own intelligence?

The way the researchers tested this was to ask the students to predict how well they’d do on a follow-up test. They were allowed to quickly glance over the second test sheet so that they could see that it involved the same kind of questions – and, importantly, that no answers had been mistakenly printed at the bottom this time. The researchers reasoned that if the students who had cheated realised that cheating wasn’t an option the second time around, they should predict they wouldn’t do as well on this second test.

Not so. Self-deception won the day. The people who’d had access to the answers predicted, on average, that they’d get higher scores on the follow-up – equivalent to giving them something like a 10-point IQ boost. When tested, of course, they scored far lower.

The researchers ran another experiment to check that the effect was really due to the cheaters’ inflated belief in their own abilities. In this experiment, students were offered a cash reward for accurately predicting their scores on the second test. Sure enough, those who had been given the opportunity to cheat overestimated their ability and lost out – earning 20% less than the other students.

The implication is that people in Chance’s experiment – people very much like you and me – had tricked themselves into believing they were smarter than they were. There may be benefits from doing this – confidence, satisfaction, or more easily gaining the trust of others – but there are also certainly disadvantages. Whenever circumstances change and you need to accurately predict how well you’ll do, it can cost to believe you’re better than you are.

That self-deception has its costs has some interesting implications. Morally, most of us would say that self-deception is wrong. But aside from whether self-deception is undesirable, we should expect it to be present in all of us to some degree (because of the benefits), but to be limited as well (because of the costs).

Self-deception isn’t something that is always better in larger doses – there must be an amount of it for which the benefits outweigh the costs, most of the time. We’re probably all self-deceiving to some degree. The irony is that, because it is self-deception, we can’t know how often.

This is my BBC Future article from last week. The original is here.

The scientist as problem solver

Start the week with one of the founding fathers of cognitive science: in ‘The scientist as problem solver‘, Herb Simon (1916-2001) gives a short retrospective of his scientific career.

To tell the story of the research he has done, he advances a thesis: “The Scientist is a problem solver. If the thesis is true, then we can dispense with a theory of scientific discovery – the processes of discovery are just applications of the processes of problem solving.” Quite aside from the usefulness of this perspective, the paper is a reminder of the intoxicating possibility of integration across the physical, biological and social sciences: Simon worked on economics, management theory, complex systems and artificial intelligence, as well as what we’d now call cognitive psychology.

He uses his own work on designing problem solving algorithms to reflect on how he – and other scientists – can and should make scientific progress. Towards the end he expresses what would be regarded as heresy in many experimentally orientated psychology departments. He suggests that many of his most productive investigations lacked a contrast between experimental and control conditions. Did this mean they were worthless, he asks. No:

…You can test theoretical models without contrasting an experimental with a control condition. And apart from testing models, you can often make surprising observations that give you ideas for new or improved models…

Perhaps it is not our methodology that needs revising so much as the standard textbook methodology, which perversely warns us against running an experiment until precise hypotheses have been formulated and experimental and control conditions defined. How do such experiments ever create surprise – not just the all-too-common surprise of having our hypotheses refuted by facts, but the delight-provoking surprise of encountering a wholly unexpected phenomenon? Perhaps we need to add to the textbooks a chapter, or several chapters, describing how basic scientific discoveries can be made by observing the world intently, in the laboratory or outside it, with controls or without them, heavy with hypotheses or innocent of them.

REFERENCE
Simon, H. A. (1989). The scientist as problem solver. In D. Klahr & K. Kotovsky (Eds.), Complex information processing: The impact of Herbert A. Simon (pp. 375-398). Hillsdale, NJ: Lawrence Erlbaum.