Do we suffer ‘behavioural fatigue’ for pandemic prevention measures?

The Guardian recently published an article saying “People won’t get ‘tired’ of social distancing – and it’s unscientific to suggest otherwise”. “Behavioural fatigue” the piece said, “has no basis in science”.

‘Behavioural fatigue’ became a hot topic because it was part of the UK Government’s justification for delaying the introduction of stricter public health measures. They quickly reversed this position and we’re now in the “empty streets” stage of infection control.

But it’s an important topic and is relevant to all of us as we try to maintain important behavioural changes that benefit others.

For me, one key point is that, actually, there are many relevant scientific studies that tackle this. And I have to say, I’m a little disappointed that there were some public pronouncements that ‘there is no evidence’ in the mainstream media without anyone making the effort to seek it out.

The reaction to epidemics has actually been quite well studied although it’s not clear that ‘fatigue’ is the right way of understanding any potential decline in people’s compliance. This phrase doesn’t seem to be used in the medical literature in this context and it may well have been simply a convenient, albeit confusing, metaphor for ‘decline’ used in interviews.

In fact, most studies of changes in compliance focus on the effect of changing risk perception, and it turns out that this often poorly tracks the actual risk. Below is a graph from a recent paper illustrating a widely used model of how risk perception tracks epidemics.

Notably, this model was first published in the 1990s based on data available even then. It suggests that increases in risk tend to make us over-estimate the danger, particularly for surprising events, but then as the risk objectively increases we start to get used to living in the ‘new normal’ and our perception of risk decreases, sometimes unhelpfully so.

What this doesn’t tell us is whether people’s behaviour changes over time. However, lots of studies have been done since then, including on the 2009 H1N1 flu pandemic – where a lot of this research was conducted.

To cut a long story short, many, but not all, of these studies find that people tend to reduce their use of at least some preventative measures (like hand washing, social distancing) as the epidemic increases, and this has been looked at in various ways.

When asking people to report their own behaviours, several studies found evidence for a reduction in at least some preventative measures (usually alongside evidence for good compliance with others).

This was found in one study in Italy, two studies in Hong Kong, and one study in Malaysia.

In Holland during the 2006 bird flu outbreak, one study did seven follow-ups and found a fluctuating pattern of compliance with prevention measures. People ramped up their prevention efforts, then there was a dip, then they increased again.

Some studies have looked for objective evidence of behaviour change and one of the most interesting looked at changes in social distancing during the 2009 outbreak in Mexico by measuring television viewing as a proxy for time spent in the home. This study found that, consistent with an increase in social distancing at the beginning of the outbreak, television viewing greatly increased, but as time went on, and the outbreak grew, television viewing dropped. To try and double-check their conclusions, they showed that television viewing predicted infection rates.

One study looked at airline passengers’ missed flights during the 2009 outbreak – given that flying with a bunch of people in an enclosed space is likely to spread flu. There was a massive spike of missed flights at the beginning of the pandemic but this quickly dropped off as the infection rate climbed, although later, missed flights did begin to track infection rates more closely.

There are also some relevant qualitative studies. These are where people are free-form interviewed and the themes of what they say are reported. These studies reported that people resist some behavioural measures during outbreaks as they increasingly start to conflict with family demands, economic pressures, and so on.

Rather than measuring people’s compliance with health behaviours, several studies looked at how epidemics change and used mathematical models to test out ideas about what could account for their course.

One well recognised finding is that epidemics often come in waves. A surge, a quieter period, a surge, a quieter period, and so on.

Several mathematical modelling studies have suggested that people’s declining compliance with preventative measures could account for this. This has been found with simulated epidemics but also when looking at real data, such as that from the 1918 flu pandemic. The 1918 epidemic was an interesting example because there was no vaccine and so behavioural changes were pretty much the only preventative measure.
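To give a flavour of how these modelling studies work, here is a toy sketch (my own illustration, not any of the published models): an SIR-style epidemic in which compliance with preventative measures rises with the current infection level but decays over time. The behaviour rule and all parameter values here are hypothetical.

```python
# Toy sketch (illustrative only, not a published model): an SIR-style
# epidemic where compliance with preventative measures rises with the
# current infection level but decays as the risk starts to feel routine.
# Declining compliance lets transmission rebound, producing wave-like dynamics.

def simulate(days=400, beta=0.3, gamma=0.1):
    s, i, r = 0.999, 0.001, 0.0   # susceptible, infected, recovered fractions
    compliance = 0.0
    infected_history = []
    for _ in range(days):
        # Hypothetical behaviour rule: compliance tracks current infections
        # but decays by 2% a day ('fatigue').
        compliance = min(1.0, 0.98 * compliance + 2.0 * i)
        # Prevention cuts transmission by up to 80% (assumed effectiveness).
        eff_beta = beta * (1.0 - 0.8 * compliance)
        new_infections = eff_beta * s * i
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        infected_history.append(i)
    return infected_history

history = simulate()
```

With these made-up parameters, transmission falls as compliance ramps up, then rebounds as compliance decays while susceptible people remain, which is the qualitative mechanism the modelling papers explore.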

And some studies showed no evidence of ‘behavioural fatigue’ at all.

One study in the Netherlands showed a stable increase in people taking preventative measures with no evidence of decline at any point.

Another study conducted in Beijing found that people tended to maintain compliance with low effort measures (ventilating rooms, catching coughs and sneezes, washing hands) and tended to increase the level of high effort measures (stockpiling, buying face masks).

This improved compliance was also seen in a study that looked at an outbreak of the mosquito-borne disease chikungunya.

This is not meant to be a complete review of these studies (do add any others below) but I’m presenting them here to show that actually, there is lots of relevant evidence about ‘behavioural fatigue’ despite the fact that mainstream articles can get published by people declaring it ‘has no basis in science’.

In fact, this topic is almost a sub-field in some disciplines. Epidemiologists have been trying to incorporate behavioural dynamics into their models. Economists have been trying to model the ‘prevalence elasticity’ of preventative behaviours as epidemics progress. Game theorists have been creating models of behaviour change in terms of individuals’ strategic decision-making.

The lessons here are twofold, I think.

The first is for scientists to be cautious when taking public positions. This is particularly important in times of crisis. Most scientific fields are complex and can be opaque even to other scientists in closely related fields. Your voice has influence so please consider (and indeed research) what you say.

The second is for all of us. We are currently in the middle of a pandemic and we have been asked to take essential measures.

In past pandemics, people started to drop their life-saving behavioural changes as the risk seemed to become routine, even as the actual danger increased.

This is not inevitable, because in some places, and in some outbreaks, people managed to stick with them.

We can be like the folks who stuck with these strange new rituals, who didn’t let their guard down, and who saved the lives of countless people they never met.

Why we need to get better at critiquing psychiatric diagnosis

This piece is based on my talk to the UCL conference ‘The Role of Diagnosis in Clinical Psychology’. It was aimed at an audience of clinical psychologists but should be of interest more widely.

I’ve been a longterm critic of psychiatric diagnoses but I’ve become increasingly frustrated by the myths and over-generalisations that get repeated and recycled in the diagnosis debate.

So, in this post, I want to tackle some of these before going on to suggest how we can critique diagnosis more effectively. I’m going to be referencing the DSM-5 but the examples I mention apply more widely.

“There are no biological tests for psychiatric diagnoses”

“The failure of decades of basic science research to reveal any specific biological or psychological marker that identifies a psychiatric diagnosis is well recognised” wrote Sami Timimi in the International Journal of Clinical and Health Psychology. “Scientists have not identified a biological cause of, or even a reliable biomarker for, any mental disorder” claimed Brett Deacon in Clinical Psychology Review. “Indeed”, he continued “not one biological test appears as a diagnostic criterion in the current DSM-IV-TR or in the proposed criteria sets for the forthcoming DSM-5”. Jay Watts writing in The Guardian states that “These categories cannot be verified with objective tests”.

Actually there are very few DSM diagnoses for which biological tests are entirely irrelevant. Most use medical tests for differential diagnosis (excluding other causes), some DSM diagnoses require them as one of a number of criteria, and a handful are entirely based on biological tests. You can see this for yourself if you take the radical scientific step of opening the DSM-5 and reading what it actually says.

There are some DSM diagnoses (the minority) for which biological tests are entirely irrelevant. Body dysmorphic disorder (p242), for example, a diagnosis that describes where people become overwhelmed with the idea that a part of their body is misshapen or unattractive, is purely based on reported experiences and behaviour. No other criteria are required or relevant.

For most common DSM diagnoses, biological tests are relevant but for the purpose of excluding other causes. For example, in many DSM diagnoses there is a general exclusion that the symptoms must not be attributable to the physiological effects of a substance or another medical condition (this appears in schizophrenia, OCD, generalized anxiety disorder and many, many others). On occasion, very specific biological tests are mentioned. For example, to make a confident diagnosis of panic disorder (p208), the DSM-5 recommends testing serum calcium levels to exclude hyperparathyroidism – which can produce similar symptoms.

Additionally, there are a range of DSM diagnoses for which biomedical tests make up one or more of the formally listed criteria but aren’t essential to make the diagnosis. The DSM diagnosis of narcolepsy (p372) is one example, which has two such criteria: “Hypocretin deficiency, as measured by cerebrospinal fluid (CSF) hypocretin-1 immunoreactivity values of one-third or less of those obtained in healthy subjects using the same assay, or 110 pg/mL or less” and polysomnography showing REM sleep latency of 15 minutes or less. Several other diagnoses work along these lines – where biomedical test results are listed but are not necessary to make the diagnosis: the substance/medication-induced mental disorders, delirium, neuroleptic malignant syndrome, neurocognitive disorders, and so on.

There are also a range of DSM diagnoses that are not solely based on biomedical tests but for which positive test results are necessary for the diagnosis. Anorexia nervosa (p338) is the most obvious, which requires the person to have a BMI of less than 17, but this applies to various sleep disorders (e.g. REM sleep disorder which requires a positive polysomnography or actigraphy finding) and some disorders due to other medical conditions. For example, neurocognitive disorder due to prion disease (p634) requires a brain scan or blood test.

There are some DSM diagnoses which are based exclusively on biological test results. These are a number of sleep disorders (obstructive sleep apnea hypopnea, central sleep apnea and sleep-related hypoventilation, all diagnosed with polysomnography).

“Psychiatric diagnoses ‘label distress'”

The DSM, wrote Peter Kinderman and colleagues in Evidence-Based Mental Health is a “franchise for the classification and diagnosis of human distress”. The “ICD is based on exactly the same principles as the DSM” argued Lucy Johnstone, “Both systems are about describing people’s distress in terms of medical diagnosis”

In reality, some psychiatric diagnoses do classify distress, some don’t.

Here is a common criterion in many DSM diagnoses: “The symptoms cause clinically significant distress or impairment in social, occupational or other important areas of functioning”.

The theory behind this is that some experiences or behaviours are not considered of medical interest unless they cause you problems, which is defined as distress or impairment. Note however, that it is one or the other. It is still possible to be diagnosed if you’re not distressed but still find these experiences or behaviours get in the way of everyday life.

However, there are a whole range of DSM diagnoses for which distress plays no part in making the diagnosis.

Here is a non-exhaustive list: Schizophrenia, Tic Disorders, Delusional Disorder, Developmental Coordination Disorder, Brief Psychotic Disorder, Schizophreniform Disorder, Manic Episode, Hypomanic Episode, Schizoid Personality Disorder, Antisocial Personality Disorder, and many more.

Does the DSM ‘label distress’? Sometimes. Do all psychiatric diagnoses? No they don’t.

“Psychiatric diagnoses are not reliable”

The graph below shows the inter-rater reliability results from the DSM-5 field trial study. They use a statistical test called Cohen’s kappa to measure how well two independent psychiatrists, assessing the same individual through an open interview, agree on a particular diagnosis. A score above 0.8 is usually considered the gold standard; the field trials rated anything above 0.6 as within the acceptable range.
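As a reminder of what the statistic actually measures, Cohen’s kappa is just the observed agreement between two raters corrected for the agreement you’d expect by chance. A minimal sketch, using entirely made-up diagnoses for ten patients:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: probability both raters independently pick the same label.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two psychiatrists diagnosing the same 10 patients.
a = ["mdd", "gad", "mdd", "ptsd", "gad", "mdd", "ptsd", "gad", "mdd", "gad"]
b = ["mdd", "gad", "gad", "ptsd", "gad", "mdd", "ptsd", "mdd", "mdd", "gad"]
kappa = cohens_kappa(a, b)  # 8/10 raw agreement works out to kappa ≈ 0.69
```

Note how 80% raw agreement shrinks to a kappa of about 0.69 once chance agreement is accounted for – which is why kappa, not raw agreement, is reported in reliability studies.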

The results are atrocious. This graph is often touted as evidence that psychiatric diagnoses can’t be made reliably.

However, here are the results from a study that tested diagnostic agreement on a range of DSM-5 diagnoses when psychiatrists used a structured interview assessment. Look down the ‘κ’ column for the reliability results. Suddenly they are much better and are all within the acceptable to excellent range.

This is well-known in mental health and medicine as a whole. If you want consistency, you have to use a structured assessment method.

While we’re here, let’s tackle an implicit assumption that underlies many of these critiques: supposedly, psychiatric diagnoses are fuzzy and unreliable, whereas the rest of medicine makes cut-and-dry diagnoses based on unequivocal medical test results.

This is a myth based on ignorance about how medical diagnoses are made – almost all involve human judgement. Just look at the between-doctor agreement results for some diagnoses in the rest of medicine (which include the use of biomedical tests):

Diagnosis of infection at the site of surgery (0.44), features of spinal tumours (0.19 – 0.59), bone fractures in children (0.71), rectal bleeding (0.42), paediatric stroke (0.61), osteoarthritis in the hand (0.60 – 0.82). There are many more examples in the medical literature which you can see for yourself.

The reliability of DSM-5 diagnoses is typically poor for ‘off the top of the head’ diagnosis but this can be markedly improved by using a formal diagnostic assessment. This doesn’t seem to be any different from the rest of medicine.

“Psychiatric diagnoses are not valid because they are decided by a committee”

I’m sorry to break it to you, but all medical diagnoses are decided by committee.

These committees shift the boundaries, revise, reject and resurrect diagnoses across medicine. The European Society of Cardiology revise the diagnostic criteria for heart failure and related problems on a yearly basis. The International League Against Epilepsy revise their diagnoses of different epilepsies frequently – they just published their revised manual earlier this year. In 2014 they broadened the diagnostic criteria for epilepsy meaning more people are now classified as having epilepsy. Nothing changed in people’s brains, they just made a group decision.

In fact, if you look at the medical literature, it’s abuzz with committees deciding, revising and rejecting diagnostic criteria for medical problems across the board.

Humans are not cut-and-dry. Neither are most illnesses, diseases and injuries, and decisions about what a particular diagnosis should include are always a trade-off between measurement accuracy, suffering, outcome, and the potential benefits of intervention. This gets revised by a committee who examine the best evidence and come to a consensus on what should count as a medically-relevant problem.

These committees aren’t perfect. They sometimes suffer from fads and groupthink, and pharmaceutical industry conflicts of interest are a constant concern. I would argue that psychiatry is more prone to fads and pressure from pharmaceutical company interests than some other areas of medicine, although it’s probably not the worst (surgery is notoriously bad in this regard). But the fact that a diagnosis is decided by committee does not make it invalid. Actually, on balance, it’s probably the least worst way of doing it.

“Psychiatric diagnoses are not valid because they’re based on experience, behaviour or value judgements”

We’ve discussed above how DSM diagnoses rely on medical tests to varying degrees. But the flip side of this is that there are many non-psychiatric diagnoses which are also only based on classifying experience and/or behaviour. If you think this makes a diagnosis invalid or ‘not a real illness’, I look forward to your forthcoming campaign to remove the diagnoses of tinnitus, sensory loss, many pain syndromes, headache, vertigo and the primary dystonias, for example.

To complicate things further, we know some diseases have a clear basis in terms of tissue damage but the diagnosis is purely based on experience and/or behaviour. The diagnosis of Parkinson’s disease, for example, is made this way and there are no biomedical tests that confirm the condition, despite the fact that studies have shown it occurs due to a breakdown of dopamine neurons in the nigrostriatal pathway of the brain.

At this point, someone usually says “but no one doubts that HIV or tuberculosis are diseases, whereas psychiatric diagnosis involves arbitrary decisions about what is considered pathological”. Cranks aside, the first part is true. It’s widely accepted – rightly so – that HIV and tuberculosis are diseases. However, it’s interesting how many critics of psychiatric diagnosis seem to have infectious diseases as their comparison for what constitutes a ‘genuine medical condition’ when infectious diseases are only a small minority of the diagnoses in medicine.

Even here though, subjectivity still plays a part. Rather than focusing on a single viral or bacterial infection, think of all viruses and bacteria. Now ask, which should be classified as diseases? This is not as cut-and-dry as you might think because humans are awash with viruses and bacteria, some helpful, some unhelpful, some irrelevant to our well-being. Ed Yong’s book I Contain Multitudes is brilliant on this if you want to know more about the massive complexity of our microbiome and how it relates to our well-being.

So the question for infectious disease experts is at what point does an unhelpful virus or bacteria become a disease? This involves making judgements about what should be considered a ‘negative effect’. Some are easy calls to make – mortality statistics are a fairly good yardstick. No one’s argued over the status of Ebola as a disease. But some cases are not so clear. In fact, the criteria for what constitutes a disease, formally discussed as how to classify the pathogenicity of microorganisms, can be found as a lively debate in the medical literature.

So all diagnoses in medicine involve a consensus judgement about what counts as ‘bad for us’. There is no biological test that can answer this question in all cases. Value judgements are certainly more common in psychiatry than in infectious diseases, though probably less so than in plastic surgery, but no diagnosis is value-free.

“Psychiatric diagnosis isn’t valid because of the following reasons…”

Debating the validity of diagnoses is a good thing. In fact, it’s essential we do it. Lots of DSM diagnoses, as I’ve argued before, poorly predict outcome, and sometimes barely hang together conceptually. But there is no general criticism that applies to all psychiatric diagnoses. Rather than going through all the diagnoses in detail, look at the following list of DSM-5 diagnoses and ask yourself whether the same commonly made criticisms about ‘psychiatric diagnosis’ could be applied to them all:

Tourette’s syndrome, Insomnia, Erectile Disorder, Schizophrenia, Bipolar, Autism, Dyslexia, Stuttering, Enuresis, Catatonia, PTSD, Pica, Sleep Apnea, Pyromania, Medication-Induced Acute Dystonia, Intermittent Explosive Disorder

Does psychiatric diagnosis medicalise distress arising from social hardship? Hard to see how this applies to stuttering and Tourette’s syndrome. Is psychiatric diagnosis used to oppress people who behave differently? If this applies to sleep apnea, I must have missed the protests. Does psychiatric diagnosis privilege biomedical explanations? I’m not sure this applies to PTSD.

There are many good critiques of the validity of specific psychiatric diagnoses, but it’s impossible to see how they apply to all diagnoses.

How can we criticise psychiatric diagnosis better?

I want to make clear here that I’m not a ‘defender’ of psychiatric diagnosis. On a personal basis, I’m happy for people to use whatever framework they find useful to understand their own experiences. On a scientific basis, some diagnoses seem reasonable but many are a really poor guide to human nature and its challenges. For example, I would agree with other psychosis researchers that the days of schizophrenia being a useful diagnosis are numbered. By the way, this is not a particularly radical position – it has been one of the major pillars of the science of cognitive neuropsychiatry since it was founded.

However, I would like to think I am a defender of actually engaging with what you’re criticising. So here’s how I think we could move the diagnosis debate on.

Firstly, RTFM. Read the fucking manual. I’m sorry, but I’ve got no time for criticisms that can be refuted simply by looking at the thing you’re criticising. Saying there are no biological tests for DSM diagnoses is embarrassing when some are listed in the manual. Saying the DSM is about ‘labelling distress’ when many DSM diagnoses make no reference to distress will get nothing more than an eye roll from me.

Secondly, we need to be explicit about what we’re criticising. If someone is criticising ‘psychiatric diagnosis’ as a whole, they’re almost certainly talking nonsense because it’s a massively diverse field. Our criticisms about medicalisation, poor predictive validity and biomedical privilege may apply very well to schizophrenia, but they make little sense when we’re talking about sleep apnea or stuttering. Diagnosis can really only be coherently criticised on a case by case basis or where you have demonstrated that a particular group of diagnoses share particular characteristics – but you have to establish this first.

As an aside, restricting our criticisms to ‘functional psychiatric diagnosis’ will not suddenly make these arguments coherent. ‘Functional psychiatric diagnoses’ include Tourette’s syndrome, stuttering, dyslexia, erectile disorder, enuresis, pica and insomnia to name but a few. Throwing them in front of the same critical cross-hairs as borderline personality disorder makes no sense. I did a whole talk on this if you want to check it out.

Thirdly, let’s stop pretending this isn’t about power and inter-professional rivalries. Many people have written very lucidly about how diagnosis is one of the supporting pillars in the power structure of psychiatry. This is true. The whole point of structural analysis is that concept, practice and power are intertwined. We criticise diagnosis, we are attacking the social power of psychiatry. This is not a reason to avoid it, and doesn’t mean this is the primary motivation, but we need to be aware of what we’re doing. Pretending we’re criticising diagnosis but not taking a swing at psychiatry is like calling someone ugly but saying it’s nothing against them personally. We should be working for a better and more equitable approach to mental health – and that includes respectful and conscious awareness of the wider implications of our actions.

Also, let’s not pretend psychology isn’t full of classifications. Just because they’re not published by the APA, doesn’t mean they’re any more valid or have the potential to be any more damaging (or indeed, the potential to be any more liberating). And if you are really against classifying experience and behaviour in any way, I recommend you stop using language, because it relies on exactly this.

Most importantly though, this really isn’t about us as professionals. The people most affected by these debates are ultimately people with mental health problems, often with the least power to make a difference to what’s happening. This needs to change and we need to respect and include a diversity of opinion and lived experience concerning the value of diagnosis. Some people say that having a psychiatric diagnosis is like someone holding their head below water, others say it’s the only thing that keeps their head above water. We need a system that supports everyone.

Finally, I think we’d be better off if we treated diagnoses more like tools, and less like ideologies. They may be more or less helpful in different situations, and at different times, and for different people, and we should strive to ensure a range of options are available to people who need them, both diagnostic and non-diagnostic. Each tested and refined with science, meaning, lived experience, and ethics.

Should we stop saying ‘commit’ suicide?

There is a movement in mental health to avoid the phrase ‘commit suicide’. It is claimed that the word ‘commit’ refers to a crime and this increases the stigma for what’s often an act of desperation that deserves compassion, rather than condemnation.

The Samaritans’ media guidelines discourage using the phrase, advising: “Avoid labelling a death as someone having ‘committed suicide’. The word ‘commit’ in the context of suicide is factually incorrect because it is no longer illegal”. An article in the Australian Psychological Society’s InPsych magazine recommended against it because the word ‘commit’ signifies not only a crime but a religious sin. There are many more such claims.

However, on the surface level, claims that the word ‘commit’ necessarily indicates a crime are clearly wrong. We can ‘commit money’ or ‘commit errors’, for instance, where no crime is implied. The dictionary entry for ‘commit’ (e.g. see the definition at the OED) has entries related to ‘committing a crime’ as only a few of its many meanings.

But we can probably do a little better when considering the potentially stigmatising effects of language than simply comparing examples.

One approach is to see how the word is actually used by examining a corpus of the English language – a database of written and transcribed spoken language – and using a technique called collocation analysis that looks at which words appear together.

I’ve used the Corpus of Contemporary American English collocation analysis for the results below and you can do the analysis yourself if you want to see what it looks like.

So here are the top 30 words that follow the word ‘commit’, in order of frequency in the corpus.

Some of the words are clearly parts of phrases (‘commit ourselves…’) rather than directly referring to actions, but you can see that the most common two-word phrase is ‘commit suicide’ by a very large margin.
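The counting behind this kind of collocation analysis is straightforward. Here’s a minimal sketch on a toy text – COCA itself has to be queried through its website, so the text below is just a stand-in to show the mechanics:

```python
from collections import Counter
import re

def following_words(text, target="commit"):
    """Count which words immediately follow `target` in a text:
    a crude stand-in for corpus collocation analysis."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(b for a, b in zip(tokens, tokens[1:]) if a == target)

# Toy text standing in for a real corpus such as COCA.
text = ("They commit crimes. He did not commit fraud, but he did "
        "commit himself to change. Some commit crimes again.")
counts = following_words(text)
# counts.most_common() ranks the collocates by frequency,
# which is essentially what the corpus interface reports.
```

A real corpus query also normalises by frequency and applies association measures (like mutual information) rather than raw counts, but the underlying idea is the same: which words keep company with ‘commit’.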

If we take this example, the argument for not using ‘commit suicide’ gets a bit circular, but if we look at the other named actions as a whole, they’re all crimes or potential crimes. Essentially, they’re all fairly nasty.

If you do the analysis yourself (and you’ll have to go to the website and type in the details, you can’t link directly) you’ll see that non-criminal actions don’t appear until fairly low down the list, way past the 30 listed here.

So ‘commit’ typically refers to antisocial and criminal acts. Saying ‘commit suicide’ probably brings some of that baggage with it and we’re likely to be better off moving away from it.

It’s worth saying, I’m not a fan of prohibitions on words or phrases, as it tends to silence people who have only colloquial language at their disposal to advocate for themselves.

As this probably includes most people with mental health problems, only a minority of which will be plugged into debates around language, perhaps we are better off thinking about moving language forward rather than punishing the non-conforming.

Not the psychology of Joe average terrorist

News reports have been covering a fascinating study on the moral reasoning of ‘terrorists’ published in Nature Human Behaviour but it’s worth being aware of the wider context to understand what it means.

Firstly, it’s important to highlight how impressive this study is. The researchers, led by Sandra Baez, managed to complete the remarkably difficult task of getting access to, and recruiting, 66 jailed paramilitary fighters from the Colombian armed conflict to participate in the study.

They compared this group to 66 matched ‘civilians’ with no criminal background and 13 jailed murderers with no paramilitary connections, on a moral reasoning task.

The task involved 24 scenarios that varied in two important ways: whether harm occurred or not, and whether the action was intended or not. This meant the researchers could compare across four situations – no harm, accidental harm, unsuccessfully attempted harm, and successfully attempted harm.

A consistent finding was that paramilitary participants judged accidental harm as less acceptable, and intentional harm as more acceptable, than the other groups did, indicating a distortion in moral reasoning.

They also measured cognitive function, emotion recognition and aggressive tendencies and found that when these measures were included in the analysis, they couldn’t account for the results.

One slightly curious thing in the paper though, and something the media has run with, is that the authors describe the background of the paramilitary participants and then discuss the implications for understanding ‘terrorists’ throughout.

But some context on the Colombian armed conflict is needed here.

The participants were right-wing paramilitaries who took part in the demobilisation agreement of 2003. This makes them members of the Autodefensas Unidas de Colombia or AUC – a now defunct organisation who were initially formed by drug traffickers and land owners to combat the extortion and kidnapping of the left-wing Marxist paramilitary organisations – most notably the FARC.

The organisation was paramilitary in the traditional sense – with uniforms, a command structure, local and regional divisions, national commanders, and written statutes. It involved itself in drug trafficking, extortion, torture, massacres, targeted killings, and ‘social cleansing’ of civilians assumed to be undesirable (homeless people, people with HIV, drug users etc) and killings of people thought to support left-wing causes. Fighters were paid and most signed up for economic reasons.

It was indeed designated a terrorist organisation by the US and EU, although within Colombia they enjoyed significant support from mainstream politicians (the reverberations of which are still being felt) and there is widespread evidence of collusion with the Colombian security forces of the time.

Also, considering that a great deal of military and paramilitary training is about re-aligning moral judgements, it’s not clear how well you can generalise these results to terrorists in general.

It is probably unlikely that the moral reasoning of people who participated in this study is akin to, for example, the jihadi terrorists who have mounted semi-regular attacks in Europe over the last few years. Or alternatively, it is not clear how ‘acceptable harm’ moral reasoning applies across different contexts in different groups.

Even within Colombia you can see how the terrorist label is not a reliable classification of a particular group’s actions and culture. Los Urabeños are the biggest drug trafficking organisation in Colombia at the moment. They are essentially the Centauros Bloc of the AUC, who didn’t demobilise and just changed their name. They are involved in very similar activities.

Importantly, they are not classified as a terrorist organisation, despite being virtually the same organisation as the one from which members were recruited into this study.

I would guess these results are probably more directly relevant in understanding paramilitary criminal organisations, like the Sinaloa Cartel in Mexico, than more ideologically-oriented groups that claim political or religious motivations, although it would be fascinating if they did generalise.

So what this study provides is a massively useful step forward in understanding moral reasoning in this particular paramilitary group, and the extent to which this applies to other terrorist, paramilitary or criminal groups is an open question.
 

Link to open access study in Nature Human Behaviour.

An alternative beauty in parenthood

Vela has an amazing essay by a mother of a child with a rare chromosomal deletion. Put aside all your expectations about what this article will be like: it is about the hopes and reality of having a child, but it’s also about so much more.

It’s an insightful commentary on the social expectations foisted upon pregnant women.

It’s about the clash of folk understanding of wellness and the reality of genetic disorders.

It’s about being with your child as they develop in ways that are surprising and sometimes troubling and finding an alternative beauty in parenthood.
 

Link to Vela article SuperBabies Don’t Cry.

A neuroscientist podcaster explains…

There’s a great ongoing podcast series called A Neuroscientist Explains that looks at some of the most important points of contact between neuroscience and the wider world.

It’s a project of The Guardian Science Weekly podcast and is hosted by brain scientist Daniel Glaser who has an interesting profile – having been a cognitive neuroscientist for many years before moving into the world of art and public engagement.

Glaser takes inspiration from culture and current affairs – which often throws up discussion about the mind or brain – and then looks at these ideas in depth, typically with one of the leading researchers in the field.

Recent episodes on empathy and music have been particularly good (although skip the first episode in the series – unusually, there’s a few clangers in it) and they manage to strike a great balance between outlining the fundamentals while debating the latest ideas and findings.

It seems you can’t link directly to individual episodes but you can find them on the page linked below.
 

Link to ‘A Neuroscientist Explains’

Annette Karmiloff-Smith has left the building

The brilliant developmental neuropsychologist Annette Karmiloff-Smith has passed away and one of the brightest lights into the psychology of children’s development has been dimmed.

She actually started her professional life as a simultaneous interpreter for the UN and then went on to study psychology and trained with Jean Piaget.

Karmiloff-Smith went into neuropsychology and started rethinking some of the assumptions about how cognition was organised in the brain which, until then, had almost entirely been based on studies of adults with brain injury.

These studies showed that some mental abilities could be independently impaired after brain damage suggesting that there was a degree of ‘modularity’ in the organisation of cognitive functions.

But Karmiloff-Smith investigated children with developmental disorders, like autism or Williams syndrome, and showed that what seemed to be the ‘natural’ organisation of the brain in adults was actually a result of development itself – an approach she called neuroconstructivism.

In other words, developmental disorders were not ‘knocking out’ specific abilities but affecting the dynamics of neurodevelopment as the child interacted with the world.

If you want to hear more of Karmiloff-Smith’s life and work, her interview on BBC Radio 4’s The Life Scientific is well worth a listen.
 

Link to page of remembrance for Annette Karmiloff-Smith.

Is psychosis an ‘immune disorder’?

A fascinating new study has just been published which found evidence for the immune system attacking a neuroreceptor in the brain in a small proportion of people with psychosis. It’s an interesting study that probably reflects what’s going to be a cultural tipping point for the idea of ‘immune system mental health problems’ or ‘madness as inflammation disorder’ but it’s worth being a little wary of the coming hype.

This new study, published in The Lancet Psychiatry, did blood tests on people who presented with their first episode of psychosis and looked for antibodies that attack specific receptors in the brain. Receptors are what receive neurotransmitters – the brain’s chemical signals – and allow information to be transferred around the nervous system, so disruption to these can cause brain disturbances.

The most scientifically interesting finding is that the research team found a type of antibody that attacks NMDA receptors in 7 patients (3%) out of 228, but zero controls.
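As a quick back-of-the-envelope check (my own illustration, not from the paper), a proportion estimated from 228 patients comes with a fairly wide margin of error, which is worth keeping in mind when comparing prevalence figures across studies. A Wilson score interval makes this concrete:

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# 7 antibody-positive patients out of 228 first-episode cases
low, high = wilson_ci(7, 228)
print(f"{7/228:.1%} (95% CI {low:.1%}–{high:.1%})")
```

The interval runs from roughly 1.5% to 6%, so slightly different prevalence figures from studies of this size are entirely compatible with one another.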

The study found markers for other neuroreceptors that the immune system was attacking, but the reason the NMDA finding is so crucial is because it shows evidence of a condition called anti-NMDA receptor encephalitis which is known to cause episodes of psychosis that can be indistinguishable from ‘regular’ psychosis but for which the best treatment is dealing with the autoimmune problem.

It was only discovered in 2007 but there has been a long-running suspicion that it may be the best explanation for a small minority of cases of psychosis which can be easily misdiagnosed as schizophrenia.

Importantly, the findings from this research have been supported by another independent study that has just been published online. The two studies used different ranges for the concentration of NMDA antibodies they measured, but they came up with roughly the same figures.

It also chimes with a growing debate about the role of the immune system in mental health. A lot of this evidence is circumstantial but suggestive. For example, many of the genes associated (albeit weakly) with the diagnosis of schizophrenia are involved in the immune system – particularly in coding proteins for the major histocompatibility complex.

However, it’s worth being a little circumspect about this new enthusiasm for thinking of psychosis as an ‘immune disorder’.

Importantly, these new studies did blood tests rather than checking cerebrospinal fluid – the fluid that surrounds the brain and lies on the other side of the blood-brain barrier – so we can’t be sure that these antibodies were actually affecting the brain in everyone found to have them. It’s likely, but not certain.

Also, we’re not sure to what extent anti-NMDA antibodies contribute to the chance of developing psychosis in everyone. Certainly there are some cases where it seems to be the main cause, but we’re not sure how that holds for all.

It’s also worth bearing in mind that the science over the role of the genes associated with the schizophrenia diagnosis in the immune system is certainly not settled. A recent large study compared the role of these genes in schizophrenia to known autoimmune disorders and concluded that the genes just don’t look like they’re actually impacting on the immune system.

There’s also a constant background of cultural enthusiasm in psychiatry for identifying ‘biomarkers’, and anything that looks like a clear common biological pathway, even for a small number of cases of a ‘psychiatric’ problem, gets a lot of airtime.

Curiously, in this case, Hollywood may also play a part.

A film called Brain On Fire has just been shown at film festivals and is being tested for a possible big release. It’s based on the (excellent) book of the same name by journalist Susannah Cahalan and describes her experience of developing psychosis only for it later to be discovered that she had anti-NMDA receptor encephalitis.

Hollywood has historically had a big effect on discussions about mental health and you can be sure that if the movie becomes a hit, popular media will be alive with discussions on ‘whether your mental health problems are really an immune problem’.

But taking a less glitzy view, in terms of these new studies, they probably reflect that a small percentage of people with psychosis, maybe 1-2%, have NMDA receptor-related immune problems that play an important role in the generation of their mental health difficulties.

It’s important not to underestimate the importance of these findings. It could potentially translate into more effective treatment for millions of people a year globally.

But in terms of psychosis as a whole, for which we know social adversity in its many forms plays a massive role, it’s just a small piece of the puzzle.
 

Link to locked Lancet Psychiatry study.

The hidden history of war on terror torture

The Hidden Persuaders project has interviewed neuropsychologist Tim Shallice about his opposition to the British government’s use of ‘enhanced interrogation’ in the Northern Ireland conflict of the 1970s – a practice eventually abandoned as torture.

Shallice is little known to the wider public but is one of the most important and influential neuropsychologists of his generation, having pioneered the systematic study of neurological problems as a window on typical cognitive function.

One of his first papers, however, was not on brain injury but an article titled ‘Ulster depth interrogation techniques and their relation to sensory deprivation research’, where he set out a cognitive basis for why the ‘five techniques’ – wall-standing, hooding, white noise, sleep deprivation, and deprivation of food and drink – amounted to torture.

Shallice traces a link between the use of these techniques and research on sensory deprivation – which was investigated both by regular scientists for reasons of scientific curiosity, and as we learned later, by intelligence services while trying to understand ‘brain washing’.

The use of these techniques in Northern Ireland was subject to an official investigation and Shallice and other researchers testified to the Parker Committee which led Prime Minister Edward Heath to ban the practice.

If those techniques sound eerily familiar, it is because they formed the basis of interrogation practices at Guantanamo Bay and other notorious sites in the ‘war on terror’.

The Hidden Persuaders is a research project at Birkbeck, University of London, which is investigating the history of ‘brainwashing’. It traces the practice back to the British during the colonisation of Yemen, who seem to have borrowed it from the KGB.

And if you want to read about the modern day effects of the abusive techniques, The New York Times has just published a disturbing feature article about the long-term consequences of being tortured in Guantanamo and other ‘black sites’ by following up many of the people subject to the brutal techniques.
 

Link to Hidden Persuaders interview with Tim Shallice.
Link to NYT on long-term legacy of war on terror torture.

Hallucinating sleep researchers

I just stumbled across a fascinating 2002 paper where pioneering sleep researcher Allan Hobson describes the effect of a precisely located stroke he suffered. It affected the medulla in his brain stem, important for regulating sleep, and caused total insomnia and a suppression of dreaming.

In one fascinating section, Hobson describes the hallucinations he experienced, likely due to his inability to sleep or dream, which included disconnected body parts and a hallucinated Robert Stickgold – another well known sleep researcher.

Between Days 1 and 10 I could visually perceive a vault over my supine body immediately upon closing my eyes. The vault resembled the bottom of a swimming pool but the gunitelike surface of the vault could be not only aqua, but also white or beige and, more rarely, engraved obsidian or of a gauzelike nature mixed with ice or glass crystals.

There were three categories of formed imagery that appeared on these surfaces. In the first category of geologic forms the imagery tended to be protomorphic and crude but often gave way to the more elaborate structures of category two inanimate sculptural forms.

The most amusing of these (which occurred on the fourth night) were enormous lucite telephone/computers. But there were also tables and tableaux in which the geologic forms sometimes took unusual and bizarre shapes. One that I recall is a TV-set-like representation of a tropical landscape.

In category three, the most elaborate forms have human anatomical elements, including long swirling flesh, columns that metamorphosed into sphincters, nipples, and crotches, but these were never placed in real bodies.

In fact whole body forms almost never emerged. Instead I saw profiles of faces and profiles of bodies which were often inextricably mixed with penises, noses, lips, eyebrows; torsos arose out of the sculptural columns of flesh and sank back into them again.

The most fully realized human images include my wife, featuring her lower anatomy and (most amusingly) a Peter Pan-like Robert Stickgold and two fairies enjoying a bedtime story. While visual disturbances are quite common in Wallenberg’s syndrome, they have only been reported to occur in waking with eyes open.

Blurring of vision (which I had), and the tendency of objects to appear to move called oscillopsia (which I did not have), are attributed to the disturbed oculomotor and vestibular physiology.

 

Link to locked report of Hobson’s stroke.

Making the personal, geospatial

There is an old story in London, and it goes like this. Following extensive rioting, there is an impassioned debate about the state of society with some saying it shows moral decay while others claim it demonstrates the desperation of poverty.

In 1886, London hosted one of its regular retellings when thousands of unemployed people trashed London’s West End during two days of violent disturbances.

In the weeks of consternation that followed, the press stumbled on the work of wealthy ship owner Charles Booth who had begun an unprecedented project – mapping poverty across the entire city.

He started the project because he thought Henry Hyndman was bullshitting.

Hyndman, a rather too earnest social campaigner, claimed that 1 in 4 Londoners lived in poverty, a figure Booth scoffed at as a gross exaggeration.

So Booth paid for an impressive team of researchers and sent them out to interview officials who assessed families for compulsory schooling, and he created a map – initially of the East End, and eventually as far west as Hammersmith – of every house and the social state of the families within it.

Each dwelling was classified into seven gradations – from “Wealthy; upper middle and upper classes” to “Lowest class; vicious, semi-criminal”. For the first time, deprivation could be seen etched into London’s social landscape.

I suspect that the term ‘vicious’ referred to its older meaning – ‘given to vice’ – rather than cruel. But what Booth created, for the first time and in exceptional detail, was a map of social environments.

The map is amazingly detailed. Literally, a house by house mapping of the whole of London.

The results showed that Hyndman was indeed wrong, but not in the direction Booth assumed. He found 1 in 3 Londoners lived below the poverty line.

If you know a bit about the capital today, you can see how many of the deprived areas from 1886 are still some of the most deprived in 2016.

So I was fascinated when I read about a new study that allows poverty to be mapped from the air, using machine learning to analyse satellite images of Nigeria, Tanzania, Uganda, Malawi, and Rwanda.

But rather than pre-defining what counts as an image of a wealthy area (swimming pools perhaps?) compared to an impoverished one (unpaved roads maybe), they trained a neural network to learn its own associations between image properties and income on an initial set of training data before trying it out on new data sets.

The neural network could explain up to 75% of the variation in the local economy.
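To make that ‘75% of the variation’ figure concrete: the final step of this kind of approach boils down to regressing a survey-based economic measure on image-derived features and reporting R². Here’s a minimal sketch of that step with synthetic data standing in for real CNN features – the features, model, and numbers are my stand-ins, not the paper’s pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 500 locations, 10 image-derived features
# (in the real study these would come from a CNN run over satellite tiles)
n, d = 500, 10
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
income = X @ true_w + rng.normal(scale=0.5, size=n)  # survey-measured consumption

# Ridge regression, closed form: w = (X^T X + alpha*I)^-1 X^T y
alpha = 1.0
w = np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ income)
pred = X @ w

# R^2: the fraction of variance in the economic measure explained by the features
r2 = 1 - np.sum((income - pred) ** 2) / np.sum((income - income.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```

An R² of 0.75 on held-out data, as reported, means three quarters of the location-to-location variation in the economic measure can be predicted from the imagery alone.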

Knowing both the extent and geography of poverty is massively important. It allows a macro view of something that manifests in very local ways, mapping it to street corners, housing blocks and small settlements.

It makes the vast forces of the economy, personal.
 

Link to Booth’s poverty map.
Link to Science reporting of satellite mapping study.

The science of urban paranoia

I’ve got an article in The Atlantic on how paranoia and psychosis are more common in cities and why the quest to explain the ‘urban psychosis effect’ is reshaping psychiatry.

The more urban your neighbourhood, the higher the rate of diagnosed schizophrenia, and the more likely you are to experience what are broadly known as ‘non-affective psychoses’ – that is, delusions, hallucinations, and paranoia not primarily caused by mood problems.

This has led to a long and ongoing debate about why this is, with some arguing that it is an effect of city living on the mind, and others that the association is better explained by a complex interaction between genetic risk factors and limited life chances.

The article discusses the science behind exactly this debate – which is partly a judgement on the value of city life itself – and notes how it’s pushing psychiatry to re-examine how it deals with what is often euphemistically called ‘the environment’.
 

Link to ‘The Mystery of Urban Psychosis’ in The Atlantic.

A podcast on drugs

If you’re a podcast addict, you could do no worse than checking out Say Why to Drugs, an excellent new show that covers the science behind a different recreational drug each week.

The podcast is with psychologist and drugs researcher Suzi Gage and rhyme-smith Scroobius Pip, better known for his banging tunes.

They make for a great partnership and they break down everything from the psychopharmacology of MDMA to the social impact of ketamine and do plenty of myth-busting along the way.

Thoroughly listenable, good fun and great on the science, you can find it on acast and iTunes.
 

Link to podcast on iTunes
Link to podcast on acast

Spike activity 24-06-2016

Quick links from the past week in mind and brain news:

Why do some children thrive in adult life despite a background of violence and neglect? Fascinating piece from Mosaic.

Scientific American asks with the flood of neuroscience PhDs, where will all the neuroscientists go? Ask British neuroscientists, they’re probably weighing up their options right now.

Blobs and Pitfalls: Challenges for fMRI Research. Neuroskeptic covers one of a number of ‘rethinking fMRI research pieces’ that has recently come out.

Neurocritic casts a skeptical eye over several new oxytocin papers that have appeared.

Was Dr. Asperger A Nazi? The Question Still Haunts Autism. A complex question tackled over at NPR.

Psychodiagnosticator asks What do we talk about when we talk about schizophrenia?

There’s a fascinating discussion on language and the culture of internal meaning over at The Psychologist.

Invisibilia, NPR’s people and cognitive science show, has just kicked off a new series.

Sleight of mind in fMRI

I’ve written a piece for the BPS Research Digest about a fascinating study that caused people to feel their thoughts were being controlled by outside forces.

It’s a psychologically intriguing study because the psychology lab served both as the setting for the study and as a form of misdirection, so participants wouldn’t realise that the effect of having their ‘thoughts read’ and ‘thoughts inserted into their mind’ was in fact a common trick used in stage mentalism.

The interesting bit came when the researchers recorded whether participants reacted differently when they thought their thoughts were being read (they did) and asked about their experience of it happening (when it never actually did).

They reported a range of anomalous effects when they thought numbers were being “inserted” into their minds: A number “popped in” my head, reported one participant. Others described “a voice … dragging me from the number that already exists in my mind”, feeling “some kind of force”, feeling “drawn” to a number, or the sensation of their brain getting “stuck” on one number. All a striking testament to the power of suggestion.

A really wonderfully conceived study that may provide a useful tool for temporarily inducing the feeling of not controlling your own thoughts – something that occurs in a range of psychological difficulties and disorders.
 

Link to piece on BPS Research Digest.

Cultures of mental distress

BBC Radio 4 is currently running a fascinating four-part series called The Borders of Sanity on the interaction between culture and mental illness.

It’s been put together by cultural historian Christopher Harding and takes an in-depth look at four particular instances where culture and mental health interact, perhaps in seemingly curious ways if you weren’t familiar with the culture.

It includes episodes on Depression in Japan, Sweden’s Adolescents, Hearing Voices in the UK and, to be broadcast next week, Healing in Ghana.

The only downside is it’s one of BBC Radio’s occasional programmes that they only make available as streamed audio from their website – presumably to give it an early 2000s internet feel.

However, it’s well worth a listen. Genuinely fascinating stuff so far.
 

Link to BBC Radio 4’s The Borders of Sanity.