How is the brain relevant in mental disorder?

The Psychologist has a fascinating article on how neuroscience fits into our understanding of mental illness and what practical benefit brain science has – despite the fact that it currently doesn’t help us a great deal in the clinic.

It is full of useful ways of thinking about how neuroscience fits into our view of mental distress.

The following is a really crucial section that discusses the difference between proximal (closer) and distal (more distant) causes.

In essence, rather than talking about causes we’re probably better off talking about causal pathways – chains of events that can lead to a problem – which can share common elements, even though different people can arrive at the same difficulty in different ways.

A useful notion is to consider different types of causes of symptoms lying on a spectrum, the extremes of which I will term ‘proximal’ and ‘distal’. Proximal causes are directly related to the mechanisms driving symptoms, and are useful targets for treatment; they are often identified through basic science research. For example, lung cancer is (proximally) caused by malfunction in the machinery that regulates cell division. Traditional lung cancer treatments tackle this cause by removing the malfunctioning cells (surgery) or killing them (standard chemotherapy and radiotherapy)…

By contrast, distal causes are indirectly related to the mechanisms driving symptoms, and are useful targets for prevention; they are often identified through epidemiology research. Again, take the example of lung cancer, which is (distally) caused by cigarette smoking in the majority of cases, though it must be caused by other factors in people who have never smoked. These could be genetic (lung cancer is heritable), other types of environmental trigger (e.g. radon gas exposure) or some interaction between the two. Given the overwhelming evidence that lung cancer is (distally) caused by smoking, efforts at prevention rightly focus on reducing its incidence. However, after a tumour has developed an oncologist must focus on the proximal cause when proposing a course of treatment…

The majority of studies of depression have focused on distal causes (which psychologists might consider ‘underlying’). These include: heritability and genetics; hormonal and immune factors; upbringing and early life experience; and personality. More proximal causes include: various forms of stress, particularly social; high-level psychological constructs derived from cognitive theories (e.g. dysfunctional negative schemata); low-level constructs such as negative information processing biases (also important in anxiety); and disrupted transmission in neurotransmitter systems such as serotonin.

It’s not a light read, but it is well worth diving into for a more in-depth treatment of the brain and mental illness.
 

Link to The Psychologist article on neuroscience and mental health.

Mind Hacks excerpts x 2

This month, Business Insider have republished a couple of chapters from Mind Hacks the book (in case you missed it: back before the blog, Mind Hacks was a book of 101 do-it-at-home psychology experiments). The excerpts are:

1. Why one of these puzzles is easy and the other is hard – which is about the Wason Selection Task, a famous example of how our ability to reason logically can be confounded (and unconfounded, if you find the right format to present a problem in) – there’s a quick sketch of the task’s logic after this list.

2. Why this sentence is hard to understand – which shows you how to improve your writing with a bit of elementary psychology (hint: it is about reducing working memory load). Steven Pinker covers the same advice in his new book The Sense of Style (2014).
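For the curious, here’s the bare logic of the Wason task in code – a minimal sketch, assuming the classic vowel/even-number version (cards showing A, K, 4 and 7, and the rule “if a card has a vowel on one side, it has an even number on the other”):

```python
# A minimal sketch of the Wason task's logic, assuming the classic
# vowel/even-number version. The rule "if a card has a vowel on one side,
# it has an even number on the other" can only be falsified by a vowel
# hiding an odd number, or an odd number hiding a vowel.

def could_falsify(visible_face):
    """Could a card showing this face conceal a counterexample?"""
    if visible_face.isalpha():
        return visible_face.lower() in "aeiou"  # a vowel might hide an odd number
    return int(visible_face) % 2 == 1           # an odd number might hide a vowel

for face in ["A", "K", "4", "7"]:
    print(face, "-> turn over" if could_falsify(face) else "-> leave")
```

Only the ‘A’ and the ‘7’ can hide a counterexample, yet most people pick the ‘A’ and the ‘4’ – turning the even card tells you nothing, because the rule never claimed that even numbers must have vowels on the back.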

Both excerpts show off some of the neat illustrations done for the book, and they’re a personal nostalgia trip for yours truly (it’s been ten years!).

Links: Why this sentence is hard to understand + Why one of these puzzles is easy and the other is hard

Trauma is more complex than we think

I’ve got an article in The Observer about how the official definition of trauma keeps changing and how the concept is discussed as if it were entirely intuitive and clear-cut, when it’s actually much more complex.

I’ve become fascinated by how the concept of ‘trauma’ is used in public debate about mental health and the tension that arises between the clinical and rhetorical meanings of trauma.

One unresolved issue, which tests mental health professionals to this day, is whether ‘traumatic’ should be defined in terms of events or reactions.

Some of the confusion arises when we talk about “being traumatised”. Let’s take a typically horrifying experience – being caught in a war zone as a civilian. This is often described as a traumatic experience, but we know that most people who experience the horrors of war won’t develop post-traumatic stress disorder or PTSD – the diagnosis designed to capture the modern meaning of trauma. Despite the fact that these sorts of awful experiences increase the chances of acquiring a range of mental health problems – depression is actually a more common outcome than PTSD – it is still the case that most people won’t develop them. Have you experienced trauma if you have no recognisable “scar in the psyche”? This is where the concept starts to become fuzzy.

We have the official diagnosis of posttraumatic stress disorder, or PTSD, but lots of mental health problems can appear after awful events, and yet there are no ‘posttraumatic depression’ or ‘posttraumatic social phobia’ diagnoses.

To be clear, it’s not that trauma doesn’t exist but that it’s less fully developed as a concept than people think and, as a result, often over-simplified during debates.

Full article at the link below.
 

Link to Observer article on the shifting sands of trauma.

Spike activity 06-03-2015

Quick links from the past week in mind and brain news:

The strange world of felt presences. Great piece in The Guardian.

Nature reports that the Human Brain Project has voted for a change of leadership. But read carefully, it’s not clear how much will change in practice.

Surely the worst ‘neuroscience of’ article ever written? “The Neuroscience of ISIS” from The Daily Beast. Ruthlessly, it’s the first in a series.

Project Syndicate on why social science needs to be on the front-line of the fight against drug-resistant diseases.

Psychiatry is More Complex than Either its Proponents or its Critics Seem Able to Admit. Insightful piece on Mental Health Chat.

iDigitalTimes on what DeepMind’s computer game playing AI tells us about where artificial intelligence falls short.

No link found between psychosis and use of ‘classic’ psychedelics LSD, psilocybin and mescaline in two large studies, reports Nature.

Beautiful online exhibition of the work of surreal optical illusion photographer Erik Johansson over at Twisted Sifter.

Radical embodied cognition: an interview with Andrew Wilson

The computational approach is the orthodoxy in psychological science. We try and understand the mind using the metaphors of information processing and the storage and retrieval of representations. These ideas are so common that it is easy to forget that there is any alternative. Andrew Wilson is on a mission to remind us that there is an alternative – a radical, non-representational, non-information processing take on what cognition is.

I sent him a few questions by email. After he answered these, and some follow-up questions, we both edited and agreed on the result, which you can read below.

 

Q1. Is it fair to say you are at odds with lots of psychology, theoretically? Can you outline why?

Psychology wants to understand what causes our behaviour. Cognitive psychology explanations are that behaviour is caused by internal states of the mind (or brain, if you like). These states are called mental representations, and they are models/simulations of the world that we use to figure out what to do and when to do it.

Cognitive psychology thinks we have representations because it assumes we have very poor sensory access to the world, e.g. vision supposedly begins with a patchy 2D image projected onto the back of the eye. We need these models to literally fill in the gaps by making an educated guess (‘inference’) about what caused those visual sensations.

My approach is called radical embodied cognitive psychology; ‘radical’ just means ‘no representations’. It is based on the work of James J Gibson. He was a perceptual psychologist who demonstrated that there is actually rich perceptual information about the world, and that we use this information. This is why perception and action are so amazingly successful most of the time, which is important because failures of perception have serious consequences for your health and wellbeing (e.g. falling on ice).

The most important consequence of this discovery is that when we have access to this information, we don’t need those internal models anymore. This then means that whatever the brain is doing, it’s not building models of the world in order to cause our behaviour. We are embedded in our environments and our behaviour is caused by the nature of that embedding (specifically, which information variables we are using for any given task).

So I ask very different questions than the typical psychologist: instead of ‘what mental model lets me solve this task?’ I ask ‘what information is there to support the observed behaviour and can I find evidence that we use it?’. When we get the right answer to the information question, we have great success in explaining and then predicting behaviour, which is actually the goal of psychology.

 

Q2. The idea that there are no mental representations is hard to get your head around. What about situations where behaviour seems to be based on things which aren’t there, like imagination, illusions or predictions?

First, saying that there are no mental representations is not saying that the brain is not up to something. This is a surprisingly common mistake, but I think it’s due to the fact that cognitive psychologists have come to equate ‘brain activity’ with ‘representing’, and denying the latter means denying the former (see Is Embodied Cognition a No-Brainer?).

Illusions simply reveal how important it is to perception that we can move and explore. They are all based on a trick and they almost always require an Evil Psychologist™ lurking in the background. Specifically, illusions artificially restrict access to information so that the world looks like it’s doing one thing when it is really doing another. They only work if you don’t let people do anything to reveal the trick. Most visual illusions are revealed as such by exploring them, for example by looking at them from a different perspective (as with the Ames Room).

Imagination and prediction are harder to talk about in this framework, but only because no one’s really tried. For what it’s worth, people are terrible at actively predicting things, and whatever imagination is it will be a side-effect of our ability to engage with the real world, not part of how we engage with the real world.

 

Q3. Is this radical approach really denying the reality of cognitive representations, or just using a different descriptive language in which they don’t figure? In other words, can you and the cognitivists both be right?

If the radical hypothesis is right, then a lot of cognitive theories will be wrong. Those theories all assume that information comes into the brain, is processed by representations and then output as behaviour. If we successfully replace representations with information, all those theories will be telling the wrong story. ‘Interacting with information’ is a completely different job description for the brain than ‘building models of the world’. This is another reason why it’s ‘radical’.

 

Q4. Even if I concede that you can think of the mind like this, can you convince me that I should? Why is it useful? What does this approach do for cognitive science that the conventional approach isn’t or can’t?

There are two reasons, I think. The first is empirical; this approach works very, very well. Whenever a researcher works through a problem using this approach, they find robust answers that stand up to extended scrutiny in the lab. These solutions then make novel predictions that also perform well – examples are topics like the outfielder problem and the A-not-B error [see below for references]. Cognitive psychology is filled with small, difficult-to-replicate effects; this is actually a hint that we aren’t asking the right questions. Radical embodied cognitive science tends to produce large, robust and interpretable effects, which I take as a hint that our questions are closer to the mark.

The second is theoretical. The major problem with representations is that it’s not clear where they get their content from. Representations supposedly encode knowledge about the world that we use to make inferences to support perception, etc. But if we have such poor perceptual contact with the world that we need representations, how did we ever get access to the knowledge we needed to encode? This grounding problem is a disaster. Radical embodiment solves it by never creating it in the first place – we are in excellent perceptual contact with our environments, so there are no gaps for representations to fill, therefore no representations that need content.
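To give a concrete flavour of the outfielder example Andrew mentions in his first answer, here’s a minimal sketch – idealised physics and invented numbers, not Andrew’s own code – of the optical variable behind ‘Optical Acceleration Cancellation’: for a parabolic fly ball, the tangent of the ball’s elevation angle rises at a constant rate if and only if you are standing where the ball will land, so a fielder can get to the right place by moving to cancel any optical acceleration, without ever computing a landing point.

```python
# A toy demonstration that tan(elevation angle) rises linearly only when
# the fielder stands at the landing point - the kind of information an
# outfielder could use instead of an internal model of projectile motion.

g, vx, vy = 9.81, 18.0, 18.0      # gravity and the ball's launch velocity
T = 2 * vy / g                    # flight time of the ball
landing = vx * T                  # where the ball comes down (~66 m)

def optical_acceleration(fielder_x, dt=0.05):
    """Second derivative of tan(elevation) for a fielder standing still."""
    tans, t = [], dt
    while t < T:
        x, y = vx * t, vy * t - 0.5 * g * t * t
        tans.append(y / (fielder_x - x))   # tan of the ball's elevation angle
        t += dt
    i = len(tans) // 2                     # sample the curvature mid-flight
    return (tans[i + 1] - 2 * tans[i] + tans[i - 1]) / dt ** 2

print(optical_acceleration(landing))       # ~0: right spot, angle rises linearly
print(optical_acceleration(landing + 10))  # < 0: too deep, run in
print(optical_acceleration(landing - 10))  # > 0: too shallow, run back
```

The sign of that one optical quantity is enough to steer by, which is the point: the problem can be solved in the coupling between perception and movement.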

 

Q5. Who should we be reading to get an idea of this approach?

‘Beyond the Brain’ by Louise Barrett. It’s accessible and full of great stuff.

‘Radical Embodied Cognitive Science’ by Tony Chemero. It’s clear and well written but it’s pitched at trained scientists more than the generally interested lay person.

‘Embodied Cognition’ by Lawrence Shapiro clearly lays out all the various flavours of ‘embodied cognition’. My work is the ‘replacement’ hypothesis.

‘The Ecological Approach to Visual Perception’ by James J Gibson is an absolute masterpiece and the culmination of all his empirical and theoretical work.

I run a blog at http://psychsciencenotes.blogspot.co.uk/ with Sabrina Golonka where we discuss all this a lot, and we tweet @PsychScientists. We’ve also published a few papers on this, the most relevant of which is ‘Embodied Cognition is Not What You Think It Is’.

 

Q6. And finally, can you point us to a few blog posts you’re proudest of which illustrate this way of looking at the world?

What Else Could It Be? (where Sabrina looks at the question, what if the brain is not a computer?)

Mirror neurons, or, What’s the matter with neuroscience? (how the traditional model can get you into trouble)

Prospective Control – The Outfielder problem (an example of the kind of research questions we ask)

Fluctuating existence

The Neurologist has a fascinating case report of a woman with Parkinson’s disease who experienced a fluctuating belief that she didn’t exist.

Cotard’s delusion is usually described as the ‘belief that you’re dead’ although Jules Cotard, for whom the delusion is named, defined it as a délire des négations – the delusion of negation, or nihilism, as it’s usually translated.

In fact, in his original case report, Cotard’s patient didn’t believe they were dead but that they had “no brain, nerves, chest, or entrails, and was just skin and bone”.

This new case report in The Neurologist describes a patient with Parkinson’s disease who experiences something similar with the delusion appearing as their Parkinson’s medication began to wear off.

In December 2010, she went to follow-up visit accompanied by her caregivers and they reported that, in the last 2 months, the patient has developed a sudden onset of nihilistic delusion, mainly during the “wearing-off” condition and associated with end of dose dyskinesias and akathisia. The patient repeatedly complained of having lost both of her eyes, mouth, nose, and ears. Often during these events, she insisted to have a mirror to see herself. She expressed the false belief that she did not have the whole body and that nothing existed, including herself, without any insight. This nihilistic delusion, compatible with the diagnosis of Cotard syndrome, clearly improved with the administration of the following dose of levodopa and the associated amelioration of motor symptoms.

This is interesting because the Parkinson’s medication – levodopa – is a precursor to dopamine and is used to increase dopamine levels in the brain.

Increased dopamine levels in mid-brain areas are considered to be a key causal factor in generating the delusions and hallucinations of psychosis, but in this case delusions reliably appeared as dopamine levels were likely to have been dropping due to the medication wearing off.

Although this is a single case study, the effect was reliable when repeated – though, of course, that doesn’t mean the same thing would happen to everybody in this situation.

But what it really shows is that the neurobiology of psychosis is not a simple ‘chemical imbalance’ but, in part, a complex dysregulation that can affect individuals differently due to the inherent interconnectedness of neural systems.
 

Link to PubMed entry for case report.

Downsides of being a convincing liar

People who take shortcuts can trick themselves into believing they are smarter than they are, says Tom Stafford, and it comes back to bite them.

Honesty may be the best policy, but lying has its merits – even when we are deceiving ourselves. Numerous studies have shown that those who are practised in the art of self-deception might be more successful in the spheres of sport and business. They might even be happier than people who are always true to themselves. But is there ever a downside to believing our own lies?

An ingenious study by Zoe Chance of Yale University tested the idea, by watching what happens when people cheat on tests.

Chance and colleagues ran experiments which involved asking students to answer IQ and general knowledge questions. Half the participants were given a copy of the test paper which had – apparently in error – been printed with the answers listed at the bottom. This meant they had to resist the temptation to check or improve their answers against the real answers as they went along.

Irresistible shortcut

As you’d expect, some of these participants couldn’t help but cheat. Collectively, the group that had access to the answers performed better on the tests than participants who didn’t – even though both groups of participants were selected at random from students at the same university, so were, on average, of similar ability.  (We can’t know for sure who was cheating – probably some of the people who had answers would have got high scores even without the answers – but it means that the average performance in the group was partly down to individual smarts, and partly down to having the answers at hand.)

The crucial question for Chance’s research was this: did people in the “cheater” group know that they’d been relying on the answers? Or did they attribute their success in the tests solely to their own intelligence?

The way the researchers tested this was to ask the students to predict how well they’d do on a follow-up test. They were allowed to quickly glance over the second test sheet so that they could see that it involved the same kind of questions – and, importantly, that no answers had mistakenly been printed at the bottom this time. The researchers reasoned that if the students who had cheated realised that cheating wasn’t an option the second time around, they should predict they wouldn’t do as well on this second test.

Not so. Self-deception won the day. The people who’d had access to the answers predicted, on average, that they’d get higher scores on the follow-up – equivalent to giving them something like a 10-point IQ boost. When tested, of course, they scored far lower.
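The logic of the design is easy to see in a toy simulation – invented numbers, not Chance’s data: test-one scores mix ability with a cheating boost, self-deceived predictions track the inflated score, but actual follow-up scores track ability alone.

```python
# A toy simulation of the study's logic (all numbers invented).
import random
random.seed(0)

def overprediction(has_answers):
    ability = random.gauss(100, 10)                      # what you can really do
    boost = random.uniform(0, 15) if has_answers else 0  # sneaking looks at answers
    test1 = ability + boost
    predicted2 = test1        # self-deception: "that score was all me"
    actual2 = ability         # no answer key this time
    return predicted2 - actual2

cheaters = [overprediction(True) for _ in range(1000)]
controls = [overprediction(False) for _ in range(1000)]
print(f"answers group over-predicts by {sum(cheaters)/1000:.1f} points on average")
print(f"control group over-predicts by {sum(controls)/1000:.1f} points on average")
```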

The researchers ran another experiment to check that the effect was really due to the cheaters’ inflated belief in their own abilities. In this experiment, students were offered a cash reward for accurately predicting their scores on the second test. Sure enough, those who had been given the opportunity to cheat overestimated their ability and lost out – earning 20% less than the other students.

The implication is that people in Chance’s experiment – people very much like you and me – had tricked themselves into believing they were smarter than they were. There may be benefits from doing this – confidence, satisfaction, or more easily gaining the trust of others – but there are also certainly disadvantages. Whenever circumstances change and you need to accurately predict how well you’ll do, it can cost to believe you’re better than you are.

That self-deception has its costs has some interesting implications. Morally, most of us would say that self-deception is wrong. But aside from whether self-deception is undesirable, we should expect it to be present in all of us to some degree (because of the benefits), but to be limited as well (because of the costs).

Self-deception isn’t something that is always better in larger doses – there must be an amount of it for which the benefits outweigh the costs, most of the time. We’re probably all self-deceiving to some degree. The irony is that, because it is self-deception, we can’t know how often.

This is my BBC Future article from last week. The original is here.

The scientist as problem solver

Start the week with one of the founding fathers of cognitive science: in ‘The scientist as problem solver‘, Herb Simon (1916-2001) gives a short retrospective of his scientific career.

To tell the story of the research he has done, he advances a thesis: “The Scientist is a problem solver. If the thesis is true, then we can dispense with a theory of scientific discovery – the processes of discovery are just applications of the processes of problem solving.” Quite aside from the usefulness of this perspective, the paper is a reminder of the intoxicating possibility of integration across the physical, biological and social sciences: Simon worked on economics, management theory, complex systems and artificial intelligence, as well as what we’d now call cognitive psychology.

He uses his own work on designing problem solving algorithms to reflect on how he – and other scientists – can and should make scientific progress. Towards the end he expresses what would be regarded as heresy in many experimentally orientated psychology departments. He suggests that many of his most productive investigations lacked a contrast between experimental and control conditions. Did this mean they were worthless, he asks. No:

…You can test theoretical models without contrasting an experimental with a control condition. And apart from testing models, you can often make surprising observations that give you ideas for new or improved models…

Perhaps it is not our methodology that needs revising so much as the standard textbook methodology, which perversely warns us against running an experiment until precise hypotheses have been formulated and experimental and control conditions defined. How do such experiments ever create surprise – not just the all-too-common surprise of having our hypotheses refuted by facts, but the delight-provoking surprise of encountering a wholly unexpected phenomenon? Perhaps we need to add to the textbooks a chapter, or several chapters, describing how basic scientific discoveries can be made by observing the world intently, in the laboratory or outside it, with controls or without them, heavy with hypotheses or innocent of them.

REFERENCE
Simon, H. A. (1989). The scientist as problem solver. In D. Klahr & K. Kotovsky (Eds.), Complex information processing: The impact of Herbert A. Simon (pp. 375–398). Hillsdale, NJ: Lawrence Erlbaum.

Actually, still no good explanation of ‘that dress’

The last time I almost went blind staring at “that dress” was thanks to Liz Hurley and on this occasion I find myself equally unsatisfied.

I’ll spare you the introduction about the amazing blue/black or white/gold dress. But what’s left me rather disappointed are the numerous ‘science of the dress’ articles that have appeared everywhere and say they’ve explained the effect through colour constancy.

Firstly, this doesn’t explain what we want to know – which is why people differ in their perceptions, and secondly, I don’t think colour constancy is a good explanation on its own.

To explain a little, colour constancy is an effect of the human visual system where colours are perceived differently depending on their context, as the brain adjusts for things like assumed lighting and surroundings. Here’s a good and topical example from XKCD. The dress colours are the same in both pictures but they seem different because the background colour is different.

An important feature of the visual system is that the experience of colour is not a direct result of the wavelength of the light being emitted by a surface. The brain modifies the experience to try and ensure that things appear the same colour under different lighting, because if we went off wavelength alone, everything would wildly change colour as it moved through a world that is lit unevenly and has light sources of different colours.

Visual illusions take advantage of this and there are plenty of examples where you can see that even completely physically identical colours can be perceived as markedly different shades if the image suggests one is in shadow and the other in direct light, for example.

To return to the first point: colour constancy isn’t an explanation of why people differ in perceiving the dress. In fact, all of the ‘science explanations’ have simply recounted how perceived colours can change, but not the most important thing, which is why people are having two stable but contradictory experiences.

Colour constancy works on everyone with normal colour vision. If you take the panels from the XKCD cartoon, people don’t markedly disagree about what the perceived colours are. The effect of each image is very reliable between individuals.

That’s not the case with the dress. Also, if you say context makes a difference, changing the surroundings of the dress should change the colours. It doesn’t.

Some have argued that individual assumptions about lighting in the picture are what’s making the difference. In other words, the context is the unconscious assumptions people make about lighting in the picture.

But if this is the case, this still isn’t an explanation because it doesn’t tell us why people have different assumptions. Psychologists call these top-down effects or, if we’re going to get Bayesian, perceptual priors.
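To make that concrete, here’s a toy calculation – not a vision model, and with a pixel value and illuminants invented for illustration – showing how different assumed lighting (different priors) turns the very same pixel into different surface colours:

```python
# Von Kries-style "discounting the illuminant": perceived surface colour
# is roughly the pixel colour divided by the light the viewer assumes.
import numpy as np

pixel = np.array([105.0, 110.0, 145.0])   # a bluish-grey pixel, dress-like

def inferred_surface(pixel, assumed_illuminant):
    """Divide out the assumed light source, then rescale for display."""
    reflectance = pixel / assumed_illuminant
    return np.round(reflectance * 255 / reflectance.max())

print(inferred_surface(pixel, np.array([0.85, 0.95, 1.30])))  # assume bluish shadow
# -> [255. 239. 230.]  near-white: the white/gold reading
print(inferred_surface(pixel, np.array([1.25, 1.05, 0.85])))  # assume warm light
# -> [126. 157. 255.]  strongly blue: the blue/black reading
```

Same pixel, two assumed illuminants, two stable percepts. What this sketch doesn’t tell you – and what nobody has explained – is why different viewers settle on different assumptions in the first place.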

75% of people in this BuzzFeed poll said they saw white/gold, 25% said they saw blue/black, and a small minority of people say they’ve seen the picture ‘flip’ between the two perceptions. How come?

And there’s actually a good test of the colour constancy or any other ‘implicit interpretation’ explanation. You should be able to create images that alter the visual system’s assumptions and make perception of the dress reliably flip between white/gold and blue/black, as with the XKCD cartoon.

So, any vision scientists out there who can come up with a good explanation of why people differ in their perceptions? Psychophysicists, have I gone wildly off track?

Spike activity 28-02-2015

Quick links from the past week in mind and brain news:

Nautilus magazine has a good piece on behavioural economics and rethinking ‘nudges’. Although the rethink is really just another form of standard ‘nudge’.

The biggest hedge fund in the world, the $165 billion Bridgewater, starts an AI team to help give it the edge on investments, reports Bloomberg. Well, they couldn’t get much worse than humans.

Gizmodo reports that a neuroscientist says he’ll do a head transplant ‘real soon now’. Hungover neuroscientist reads Gizmodo, thinks ‘I said what!?!’

The UK’s Post Office head of marketing has clearly been taken in by neuromarketing, thinking it will help them “better understand” their customers. Just like the stamp while we scan your brain…

The New York Times reports on pharma company Shire doing the old ‘disease marketing by the way I have a pill for that’ trick with DSM-5 newcomer binge eating disorder.

Hard Feelings: Science’s Struggle to Define Emotions. Good piece in The Atlantic.

The Human Brain Project is to be reorganised after a bit of a fuss (Americans: a significant crisis).

Being an asshole boss is bad for team performance. Interesting piece in Harvard Business Review.

The smart unconscious

We feel that we are in control when our brains figure out puzzles or read words, says Tom Stafford, but a new experiment shows just how much work is going on underneath the surface of our conscious minds.

It is a common misconception that we know our own minds. As I move around the world, walking and talking, I experience myself thinking thoughts. “What shall I have for lunch?”, I ask myself. Or I think, “I wonder why she did that?” and try and figure it out. It is natural to assume that this experience of myself is a complete report of my mind. It is natural, but wrong.

There’s an under-mind, all psychologists agree – an unconscious which does a lot of the heavy lifting in the process of thinking. If I ask myself what is the capital of France the answer just comes to mind – Paris! If I decide to wiggle my fingers, they move back and forth in a complex pattern that I didn’t consciously prepare, but which was delivered for my use by the unconscious.

The big debate in psychology is exactly what is done by the unconscious, and what requires conscious thought. Or to use the title of a notable paper on the topic, ‘Is the unconscious smart or dumb?‘ One popular view is that the unconscious can prepare simple stimulus-response actions, deliver basic facts, recognise objects and carry out practised movements. Complex cognition involving planning, logical reasoning and combining ideas, on the other hand, requires conscious thought.

A recent experiment by a team from Israel scores points against this position. Ran Hassin and colleagues used a neat visual trick called Continuous Flash Suppression to put information into participants’ minds without them becoming consciously aware of it. It might sound painful, but in reality it’s actually quite simple. The technique takes advantage of the fact that we have two eyes and our brain usually attempts to fuse the two resulting images into a single coherent view of the world. Continuous Flash Suppression uses light-bending glasses to show people different images in each eye. One eye gets a rapid succession of brightly coloured squares which are so distracting that when genuine information is presented to the other eye, the person is not immediately consciously aware of it. In fact, it can take several seconds for something that is in theory perfectly visible to reach awareness (unless you close one eye to cut out the flashing squares, then you can see the ‘suppressed’ image immediately).

Hassin’s key experiment involved presenting arithmetic questions unconsciously. The questions would be things like “9 – 3 – 4 = ” and they would be followed by the presentation, fully visible, of a target number that the participants were asked to read aloud as quickly as possible. The target number could either be the right answer to the arithmetic question (so, in this case, “2”) or a wrong answer (for instance, “1”). The amazing result is that participants were significantly quicker to read the target number if it was the right answer rather than a wrong one. This shows that the equation had been processed and solved by their minds – even though they had no conscious awareness of it – meaning they were primed to read the right answer quicker than the wrong one.
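To see what the analysis is looking for, here’s a toy simulation – invented numbers, not Hassin’s data – of the congruence effect:

```python
# If the masked equation really was solved outside awareness, naming times
# for congruent targets (the equation's true answer) should be reliably
# faster than for incongruent targets. The 20 ms benefit is hypothetical.
import random
random.seed(1)

def naming_time(congruent):
    base = random.gauss(600, 50)            # ms to read a fully visible digit
    return base - (20 if congruent else 0)  # hypothetical unconscious priming

congruent = [naming_time(True) for _ in range(200)]
incongruent = [naming_time(False) for _ in range(200)]

mean = lambda xs: sum(xs) / len(xs)
print(f"congruent targets:   {mean(congruent):.0f} ms")
print(f"incongruent targets: {mean(incongruent):.0f} ms")
# a consistent gap in this direction is the signature of the effect
```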

The result suggests that the unconscious mind has more sophisticated capacities than many have thought. Unlike other tests of non-conscious processing, this wasn’t an automatic response to a stimulus – it required a precise answer following the rules of arithmetic, which you might have assumed would only come with deliberation. The report calls the technique used “a game changer in the study of the unconscious”, arguing that “unconscious processes can perform every fundamental, basic-level function that conscious processes can perform”.

These are strong claims, and the authors acknowledge that there is much work to do as we start to explore the power and reach of our unconscious minds. Like icebergs, most of the operation of our minds remains out of sight. Experiments like this give a glimpse below the surface.

This is my BBC Future column from last week. The original is here.

Spike activity 20-02-2015

Quick links from the past week in mind and brain news:

Interesting social mapping using subway journey data from Beijing reported in New Scientist.

BPS Research Digest has compiled a comprehensive list of mind, brain and behaviour podcasts.

That study finding a surge of p values just below 0.05 in psychology, probably not a sign of bad science, reports Daniel Lakens with a new analysis.

The Toronto Star reports that psychologists terminated a study on implanting false crime memories early due to over-effectiveness.

Why do mirrors seem to reverse left and right but not up or down? Clear explanation in a great video from Physics Girl.

Vice has an interesting piece on public reactions to celebrities who become psychotic or begin to display unusual behaviour.

Science News has a map of ambient noisiness in America.

There’s an interesting interview with Facebook AI director Yann LeCun in IEEE Spectrum magazine.

Anti-vax: wrong but not irrational

Since the uptick in outbreaks of measles in the US, those arguing for the right not to vaccinate their children have come under increasing scrutiny. There is no journal of “anti-vax psychology” reporting research on those who advocate what seems like a controversial, “anti-science” and dangerous position, but if there were, we could take a good guess at what the research reported therein would say.

Look at other groups who hold beliefs at odds with conventional scientific thought. Climate sceptics for example. You might think that climate sceptics would be likely to be more ignorant of science than those who accept the consensus that humans are causing a global increase in temperatures. But you’d be wrong. The individuals with the highest degree of scientific literacy are not those most concerned about climate change, they are the group which is most divided over the issue. The most scientifically literate are also some of the strongest climate sceptics.

A driver of this is a process psychologists have called “biased assimilation” – we all regard new information in the light of what we already believe. In line with this, one study showed that climate sceptics rated newspaper editorials supporting the reality of climate change as less persuasive and less reliable than non-sceptics. Some studies have even shown that people can react to information which is meant to persuade them out of their beliefs by becoming more hardline – the exact opposite of the persuasive intent.
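One way to see how something like biased assimilation can emerge from otherwise ordinary reasoning is a toy Bayesian sketch – the numbers are invented for illustration. If you think a source is unreliable, its messages carry almost no information, so the same editorial barely moves you:

```python
# Belief updating where a message's force depends on trust in the source.
def updated_belief(prior, trust):
    """Posterior belief in a claim after a message supporting it; `trust`
    is the perceived probability that the source is reliable."""
    p_msg_if_true = trust * 0.9 + (1 - trust) * 0.5   # reliable sources track truth
    p_msg_if_false = trust * 0.1 + (1 - trust) * 0.5  # unreliable ones are just noise
    evidence_for = prior * p_msg_if_true
    return evidence_for / (evidence_for + (1 - prior) * p_msg_if_false)

print(updated_belief(prior=0.8, trust=0.9))  # trusting believer   -> ~0.96
print(updated_belief(prior=0.2, trust=0.1))  # distrusting sceptic -> ~0.23
```

The two readers see the same message and end up further apart than they started.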

For topics such as climate change or vaccine safety, this can mean that a little scientific education gives you more ways of disagreeing with new information that doesn’t fit your existing beliefs. So we shouldn’t expect anti-vaxxers to be easily converted by throwing scientific facts about vaccination at them. They are likely to have their own interpretation of the facts.

High trust, low expertise

Some of my own research has looked at who the public trusted to inform them about the risks from pollution. Our finding was that how expert a particular group of people was perceived to be – government, scientists or journalists, say – was a poor predictor of how much they were trusted on the issue. Instead, what was critical was how much they were perceived to have the public’s interests at heart. Groups of people who were perceived to want to act in line with our respondents’ best interests – such as friends and family – were highly trusted, even if their expertise on the issue of pollution was judged as poor.

By implication, we might expect anti-vaxxers to have friends who are also anti-vaxxers (and so reinforce their mistaken beliefs) and to correspondingly have a low belief that pro-vaccine messengers such as scientists, government agencies and journalists have their best interests at heart. The corollary is that no amount of information from these sources – and no matter how persuasive to you and me – will convert anti-vaxxers who have different beliefs about how trustworthy the medical establishment is.

Interestingly, research done by Brendan Nyhan has shown many anti-vaxxers are willing to drop mistaken beliefs about vaccines, but as they do so they also harden in their intentions not to get their kids vaccinated. This shows that the scientific beliefs of people who oppose vaccinations are only part of the issue – facts alone, even if believed, aren’t enough to change people’s views.

Reinforced memories

We know from research on persuasion that mistaken beliefs aren’t easily debunked. Not only is the biased assimilation effect at work here but also the fragility of memory – attempts at debunking myths can serve to reinforce the memory of the myth while the debunking gets forgotten.

The vaccination issue provides a sobering example of this. A single discredited study from 1998 claimed a link between autism and the MMR jab, fuelling the recent distrust of vaccines. No matter how many times we repeat that “the MMR vaccine doesn’t cause autism”, the link between the two is reinforced in people’s perceptions. To avoid reinforcing a myth, you need to provide a plausible alternative – the obvious one here is to replace the negative message “MMR vaccine doesn’t cause autism”, with a positive one. Perhaps “the MMR vaccine protects your child from dangerous diseases”.

Rational selfishness

There are other psychological factors at play in the decisions taken by individual parents not to vaccinate their children. One is the rational selfishness of avoiding risk, or even the discomfort of a momentary jab, by gambling that the herd immunity of everyone else will be enough to protect your child.
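The arithmetic of that gamble is simple enough to sketch – all probabilities and costs below are invented for illustration:

```python
# Toy free-rider arithmetic: as everyone else's coverage rises, the
# expected cost of skipping the jab shrinks below the cost of getting it.
def expected_cost_of_skipping(coverage, base_risk=0.1, disease_cost=100.0):
    infection_risk = base_risk * (1 - coverage)   # herd immunity shields you
    return infection_risk * disease_cost

jab_cost = 0.5  # the discomfort of a momentary jab, say
for coverage in (0.5, 0.9, 0.99):
    print(f"coverage {coverage:.0%}: skipping costs "
          f"{expected_cost_of_skipping(coverage):.2f} vs jab at {jab_cost:.2f}")
```

At high coverage, skipping looks individually cheaper – which is precisely what erodes the herd immunity the gamble depends on.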

Another is our tendency to underplay rare events in our calculation about risks – ironically the very success of vaccination programmes makes the diseases they protect us against rare, meaning that most of us don’t have direct experience of the negative consequences of not vaccinating. Finally, we know that people feel differently about errors of action compared to errors of inaction, even if the consequences are the same.

Many who seek to persuade anti-vaxxers view the issue as a simple one of scientific education. Anti-vaxxers have mistaken the basic facts, the argument goes, so they need to be corrected. This is likely to be ineffective. Anti-vaxxers may be wrong, but don’t call them irrational.

Rather than lacking scientific facts, they lack a trust in the establishments which produce and disseminate science. If you meet an anti-vaxxer, you might have more luck persuading them by trying to explain how you think science works and why you’ve put your trust in what you’ve been told, rather than dismissing their beliefs as irrational.

The Conversation

This article was originally published on The Conversation.
Read the original article.

Oliver Sacks: “now I am face to face with dying”

In a moving and defiant article for The New York Times, neurologist Oliver Sacks has announced he has terminal cancer.

Over the last few days, I have been able to see my life as from a great altitude, as a sort of landscape, and with a deepening sense of the connection of all its parts. This does not mean I am finished with life.

On the contrary, I feel intensely alive, and I want and hope in the time that remains to deepen my friendships, to say farewell to those I love, to write more, to travel if I have the strength, to achieve new levels of understanding and insight.

This will involve audacity, clarity and plain speaking; trying to straighten my accounts with the world. But there will be time, too, for some fun (and even some silliness, as well).

The whole piece is a reflection on life, death and living and, fittingly, is a joy to read.

Keep on keepin’ on, Dr Sacks.

We look forward to hearing about your final adventures.
 

Link to ‘My Own Life: Oliver Sacks on Learning He Has Terminal Cancer’.

Half a century of neuroscience

The Lancet has a good retrospective looking back on the last 50 years of neuroscience, which, in some ways, was when the field was born.

Of course, the brain and nervous system have been the subject of study for hundreds, if not thousands, of years, but the concept of a dedicated ‘neuroscience’ is relatively new.

The term ‘neuroscience’ was first used in 1962 by biologist Francis Schmitt, who had previously referred to the integrated study of mind, brain and behaviour by the somewhat less catchy title “biophysics of the mind”. The first undergraduate degree in neuroscience was offered by Amherst College only in 1973.

The Lancet article, by first-generation ‘neuroscientist’ Steven Rose, looks back at how the discipline began in the UK (in a pub, as most things do) and then widens its scope to review how neuroscience has transformed over the last 50 years.

But many of the problems that had beset the early days remain unresolved. Neuroscience may be a singular label, but it embraces a plurality of disciplines. Molecular and cognitive neuroscientists still scarcely speak a common language, and for all the outpouring of data from the huge industry that neuroscience has become, Schmitt’s hoped for bridging theories are still in short supply. At what biological level are mental functions to be understood? For many of the former, reductionism rules and the collapse of mind into brain is rarely challenged—there is even a society for “molecular and cellular cognition”—an elision hardly likely to appeal to the cognitivists who regard higher order mental functions as emergent properties of the brain as a system.

It’s an interesting reflection on how neuroscience has changed over its brief lifespan from one of the people who were there at the start.
 

Link to ’50 years of neuroscience’ in The Lancet.

Spike activity 13-02-2015

Quick links from the past week in mind and brain news:

US Governor proposes that welfare recipients should be drug screened, with a negative result a condition of payment. A fascinating Washington Post piece looks at past data on similar schemes.

BPS Research Digest launches the PsychCrunch podcast. First episode: evidence-based dating.

The brain, interrupted: neurodevelopment and the pre-term baby. Excellent Nature piece.

Fusion has a great piece on *how* we should worry about artificial intelligence.

“The world’s first hotel staffed entirely by robots is set to open in Japan” reports the International Business Times. Clearly they’ve never visited a Travelodge.

Forbes reports on the ‘coming boom in brain medicines’. Personally, I won’t be holding my breath.

There’s an excellent update on new psychoactive substances and synthetics drugs over at Addiction Inbox.

The Scientific 23 is a great site that interviews scientists, and there are lots of cognitive scientists there discussing their work.
